00:00:00.002 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 1992
00:00:00.002 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3258
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.002 Started by timer
00:00:00.012 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.013 The recommended git tool is: git
00:00:00.013 using credential 00000000-0000-0000-0000-000000000002
00:00:00.015 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.030 Fetching changes from the remote Git repository
00:00:00.031 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.048 Using shallow fetch with depth 1
00:00:00.048 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.048 > git --version # timeout=10
00:00:00.068 > git --version # 'git version 2.39.2'
00:00:00.068 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.107 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.107 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.311 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.326 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.340 Checking out Revision 4b79378c7834917407ff4d2cff4edf1dcbb13c5f (FETCH_HEAD)
00:00:02.340 > git config core.sparsecheckout # timeout=10
00:00:02.352 > git read-tree -mu HEAD # timeout=10
00:00:02.370 > git checkout -f 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=5
00:00:02.389 Commit message: "jbp-per-patch: add create-perf-report job as a part of testing"
00:00:02.390 > git rev-list --no-walk 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=10
00:00:02.477 [Pipeline] Start of Pipeline
00:00:02.493 [Pipeline] library
00:00:02.495 Loading library shm_lib@master
00:00:02.496 Library shm_lib@master is cached. Copying from home.
00:00:02.524 [Pipeline] node
00:00:02.536 Running on WFP39 in /var/jenkins/workspace/crypto-phy-autotest
00:00:02.539 [Pipeline] {
00:00:02.553 [Pipeline] catchError
00:00:02.555 [Pipeline] {
00:00:02.574 [Pipeline] wrap
00:00:02.584 [Pipeline] {
00:00:02.595 [Pipeline] stage
00:00:02.597 [Pipeline] { (Prologue)
00:00:02.825 [Pipeline] sh
00:00:03.108 + logger -p user.info -t JENKINS-CI
00:00:03.128 [Pipeline] echo
00:00:03.130 Node: WFP39
00:00:03.138 [Pipeline] sh
00:00:03.433 [Pipeline] setCustomBuildProperty
00:00:03.444 [Pipeline] echo
00:00:03.445 Cleanup processes
00:00:03.451 [Pipeline] sh
00:00:03.730 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:03.730 1681693 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:03.745 [Pipeline] sh
00:00:04.026 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:04.026 ++ grep -v 'sudo pgrep'
00:00:04.026 ++ awk '{print $1}'
00:00:04.026 + sudo kill -9
00:00:04.026 + true
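The "Cleanup processes" step above is the standard guard against stale runs: list everything still running out of the workspace, drop the pgrep invocation itself, keep the PID column, and force-kill the rest. The trailing `+ true` is what keeps an empty kill from failing the stage. A minimal sketch of the same idiom (the workspace path is this job's; substitute your own):

    # Kill leftovers from a previous run; tolerate "nothing to kill".
    ws=/var/jenkins/workspace/crypto-phy-autotest
    pids=$(sudo pgrep -af "$ws/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    sudo kill -9 $pids || true   # unquoted on purpose: one argument per PID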
00:00:04.040 [Pipeline] cleanWs
00:00:04.049 [WS-CLEANUP] Deleting project workspace...
00:00:04.049 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.055 [WS-CLEANUP] done
00:00:04.060 [Pipeline] setCustomBuildProperty
00:00:04.072 [Pipeline] sh
00:00:04.350 + sudo git config --global --replace-all safe.directory '*'
00:00:04.450 [Pipeline] httpRequest
00:00:04.470 [Pipeline] echo
00:00:04.471 Sorcerer 10.211.164.101 is alive
00:00:04.477 [Pipeline] httpRequest
00:00:04.507 HttpMethod: GET
00:00:04.507 URL: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:00:04.508 Sending request to url: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:00:04.508 Response Code: HTTP/1.1 200 OK
00:00:04.509 Success: Status code 200 is in the accepted range: 200,404
00:00:04.509 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:00:04.692 [Pipeline] sh
00:00:04.974 + tar --no-same-owner -xf jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:00:04.990 [Pipeline] httpRequest
00:00:05.009 [Pipeline] echo
00:00:05.011 Sorcerer 10.211.164.101 is alive
00:00:05.019 [Pipeline] httpRequest
00:00:05.023 HttpMethod: GET
00:00:05.024 URL: http://10.211.164.101/packages/spdk_9937c0160db0c834d5fa91bc55689413b256518c.tar.gz
00:00:05.025 Sending request to url: http://10.211.164.101/packages/spdk_9937c0160db0c834d5fa91bc55689413b256518c.tar.gz
00:00:05.025 Response Code: HTTP/1.1 200 OK
00:00:05.026 Success: Status code 200 is in the accepted range: 200,404
00:00:05.026 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/spdk_9937c0160db0c834d5fa91bc55689413b256518c.tar.gz
00:00:11.123 [Pipeline] sh
00:00:11.407 + tar --no-same-owner -xf spdk_9937c0160db0c834d5fa91bc55689413b256518c.tar.gz
00:00:19.542 [Pipeline] sh
00:00:19.821 + git -C spdk log --oneline -n5
00:00:19.822 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO
00:00:19.822 6c7c1f57e accel: add sequence outstanding stat
00:00:19.822 3bc8e6a26 accel: add utility to put task
00:00:19.822 2dba73997 accel: move get task utility
00:00:19.822 e45c8090e accel: improve accel sequence obj release
00:00:19.840 [Pipeline] withCredentials
00:00:19.850 > git --version # timeout=10
00:00:19.863 > git --version # 'git version 2.39.2'
00:00:19.880 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:19.882 [Pipeline] {
00:00:19.892 [Pipeline] retry
00:00:19.894 [Pipeline] {
00:00:19.913 [Pipeline] sh
00:00:20.199 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:00:20.210 [Pipeline] }
00:00:20.232 [Pipeline] // retry
00:00:20.237 [Pipeline] }
00:00:20.256 [Pipeline] // withCredentials
00:00:20.264 [Pipeline] httpRequest
00:00:20.280 [Pipeline] echo
00:00:20.282 Sorcerer 10.211.164.101 is alive
00:00:20.291 [Pipeline] httpRequest
00:00:20.295 HttpMethod: GET
00:00:20.296 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:20.296 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:20.298 Response Code: HTTP/1.1 200 OK
00:00:20.298 Success: Status code 200 is in the accepted range: 200,404
00:00:20.299 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:23.163 [Pipeline] sh
00:00:23.449 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
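Each source bundle is served by the local artifact cache ("Sorcerer") and unpacked with `--no-same-owner`, so extracted files belong to the invoking CI user rather than whatever UID built the tarball. A hedged equivalent of the fetch-and-extract step (curl standing in for the Jenkins httpRequest step is an assumption):

    pkg=spdk_9937c0160db0c834d5fa91bc55689413b256518c.tar.gz
    curl -fO "http://10.211.164.101/packages/$pkg"  # plain HTTP GET from the cache
    tar --no-same-owner -xf "$pkg"                  # do not preserve archive ownership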
00:00:25.368 [Pipeline] sh
00:00:25.651 + git -C dpdk log --oneline -n5
00:00:25.651 caf0f5d395 version: 22.11.4
00:00:25.651 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:00:25.651 dc9c799c7d vhost: fix missing spinlock unlock
00:00:25.651 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:00:25.651 6ef77f2a5e net/gve: fix RX buffer size alignment
00:00:25.663 [Pipeline] }
00:00:25.683 [Pipeline] // stage
00:00:25.694 [Pipeline] stage
00:00:25.697 [Pipeline] { (Prepare)
00:00:25.722 [Pipeline] writeFile
00:00:25.746 [Pipeline] sh
00:00:26.030 + logger -p user.info -t JENKINS-CI
00:00:26.045 [Pipeline] sh
00:00:26.334 + logger -p user.info -t JENKINS-CI
00:00:26.348 [Pipeline] sh
00:00:26.631 + cat autorun-spdk.conf
00:00:26.631 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:26.631 SPDK_TEST_BLOCKDEV=1
00:00:26.631 SPDK_TEST_ISAL=1
00:00:26.631 SPDK_TEST_CRYPTO=1
00:00:26.631 SPDK_TEST_REDUCE=1
00:00:26.631 SPDK_TEST_VBDEV_COMPRESS=1
00:00:26.631 SPDK_RUN_UBSAN=1
00:00:26.631 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:26.631 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build
00:00:26.639 RUN_NIGHTLY=1
00:00:26.645 [Pipeline] readFile
00:00:26.680 [Pipeline] withEnv
00:00:26.683 [Pipeline] {
00:00:26.699 [Pipeline] sh
00:00:26.984 + set -ex
00:00:26.984 + [[ -f /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf ]]
00:00:26.984 + source /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf
00:00:26.984 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:26.984 ++ SPDK_TEST_BLOCKDEV=1
00:00:26.984 ++ SPDK_TEST_ISAL=1
00:00:26.984 ++ SPDK_TEST_CRYPTO=1
00:00:26.984 ++ SPDK_TEST_REDUCE=1
00:00:26.984 ++ SPDK_TEST_VBDEV_COMPRESS=1
00:00:26.984 ++ SPDK_RUN_UBSAN=1
00:00:26.984 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:26.984 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build
00:00:26.984 ++ RUN_NIGHTLY=1
00:00:26.984 + case $SPDK_TEST_NVMF_NICS in
00:00:26.984 + DRIVERS=
00:00:26.984 + [[ -n '' ]]
00:00:26.984 + exit 0
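autorun-spdk.conf is a flat KEY=value file: the Prepare stage writes it, prints it, and then sources it under `set -ex`, so every assignment is echoed (the `++` lines above) and a missing file aborts the stage immediately. A minimal sketch of that guard-then-source pattern (the mlx5_ib branch is hypothetical, added only to show how a NIC selection would flow through):

    set -ex                                   # echo each command, stop on error
    conf=/var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf
    [[ -f $conf ]]                            # fail fast if the config is absent
    source "$conf"                            # KEY=value lines become variables
    case "${SPDK_TEST_NVMF_NICS:-}" in
        mlx5_ib) DRIVERS='mlx5_ib' ;;         # hypothetical NIC branch
        *)       DRIVERS= ;;                  # none requested in this run
    esac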
00:00:26.994 [Pipeline] }
00:00:27.015 [Pipeline] // withEnv
00:00:27.021 [Pipeline] }
00:00:27.045 [Pipeline] // stage
00:00:27.057 [Pipeline] catchError
00:00:27.060 [Pipeline] {
00:00:27.077 [Pipeline] timeout
00:00:27.078 Timeout set to expire in 40 min
00:00:27.080 [Pipeline] {
00:00:27.099 [Pipeline] stage
00:00:27.102 [Pipeline] { (Tests)
00:00:27.184 [Pipeline] sh
00:00:27.469 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/crypto-phy-autotest
00:00:27.469 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest
00:00:27.469 + DIR_ROOT=/var/jenkins/workspace/crypto-phy-autotest
00:00:27.469 + [[ -n /var/jenkins/workspace/crypto-phy-autotest ]]
00:00:27.469 + DIR_SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:27.469 + DIR_OUTPUT=/var/jenkins/workspace/crypto-phy-autotest/output
00:00:27.469 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/spdk ]]
00:00:27.469 + [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/output ]]
00:00:27.469 + mkdir -p /var/jenkins/workspace/crypto-phy-autotest/output
00:00:27.469 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/output ]]
00:00:27.469 + [[ crypto-phy-autotest == pkgdep-* ]]
00:00:27.469 + cd /var/jenkins/workspace/crypto-phy-autotest
00:00:27.469 + source /etc/os-release
00:00:27.469 ++ NAME='Fedora Linux'
00:00:27.469 ++ VERSION='38 (Cloud Edition)'
00:00:27.469 ++ ID=fedora
00:00:27.469 ++ VERSION_ID=38
00:00:27.469 ++ VERSION_CODENAME=
00:00:27.469 ++ PLATFORM_ID=platform:f38
00:00:27.469 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:27.469 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:27.469 ++ LOGO=fedora-logo-icon
00:00:27.469 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:27.469 ++ HOME_URL=https://fedoraproject.org/
00:00:27.469 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:27.469 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:27.469 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:27.469 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:27.469 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:27.469 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:27.469 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:27.469 ++ SUPPORT_END=2024-05-14
00:00:27.469 ++ VARIANT='Cloud Edition'
00:00:27.469 ++ VARIANT_ID=cloud
00:00:27.469 + uname -a
00:00:27.469 Linux spdk-wfp-39 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 02:47:10 UTC 2024 x86_64 GNU/Linux
00:00:27.469 + sudo /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status
00:00:31.666 Hugepages
00:00:31.666 node hugesize free / total
00:00:31.666 node0 1048576kB 0 / 0
00:00:31.666 node0 2048kB 0 / 0
00:00:31.666 node1 1048576kB 0 / 0
00:00:31.666 node1 2048kB 0 / 0
00:00:31.666
00:00:31.666 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:31.666 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:31.666 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:31.666 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:31.666 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:31.666 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:31.666 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:31.666 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:31.666 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:31.666 NVMe 0000:1a:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:00:31.666 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:31.666 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:31.666 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:31.666 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:31.666 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:31.666 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:31.666 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:31.666 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
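setup.sh status reports the hugepage pools per NUMA node and the ioatdma/NVMe devices SPDK may claim. The hugepage half of that table can be read straight out of sysfs; a small sketch using the standard kernel paths (the loop layout is mine):

    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            size=${hp##*hugepages-}              # e.g. 2048kB
            echo "$(basename "$node") $size $(cat "$hp/free_hugepages") / $(cat "$hp/nr_hugepages")"
        done
    done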
00:00:31.666 + rm -f /tmp/spdk-ld-path
00:00:31.666 + source autorun-spdk.conf
00:00:31.666 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:31.666 ++ SPDK_TEST_BLOCKDEV=1
00:00:31.666 ++ SPDK_TEST_ISAL=1
00:00:31.666 ++ SPDK_TEST_CRYPTO=1
00:00:31.666 ++ SPDK_TEST_REDUCE=1
00:00:31.666 ++ SPDK_TEST_VBDEV_COMPRESS=1
00:00:31.666 ++ SPDK_RUN_UBSAN=1
00:00:31.666 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:31.666 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build
00:00:31.666 ++ RUN_NIGHTLY=1
00:00:31.666 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:31.666 + [[ -n '' ]]
00:00:31.666 + sudo git config --global --add safe.directory /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:31.666 + for M in /var/spdk/build-*-manifest.txt
00:00:31.666 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:31.666 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/
00:00:31.666 + for M in /var/spdk/build-*-manifest.txt
00:00:31.666 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:31.666 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/
00:00:31.666 ++ uname
00:00:31.666 + [[ Linux == \L\i\n\u\x ]]
00:00:31.666 + sudo dmesg -T
00:00:31.666 + sudo dmesg --clear
00:00:31.666 + dmesg_pid=1683283
00:00:31.666 + [[ Fedora Linux == FreeBSD ]]
00:00:31.666 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:31.666 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:31.666 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:31.666 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:31.666 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:31.666 + sudo dmesg -Tw
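The dmesg sequence above snapshots the kernel log, clears it, and then follows it for the remainder of the run so new messages can be attributed to the tests; the recorded dmesg_pid is what teardown kills. A sketch of that capture pattern (the output file name is my choice; the exact backgrounding in the SPDK script may differ):

    sudo dmesg -T                             # dump the existing buffer, readable times
    sudo dmesg --clear                        # start the test window empty
    sudo dmesg -Tw > dmesg-live.log 2>&1 &    # follow new messages in the background
    dmesg_pid=$!                              # saved so cleanup can kill the follower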
00:00:31.666 + [[ -x /usr/src/fio-static/fio ]]
00:00:31.666 + export FIO_BIN=/usr/src/fio-static/fio
00:00:31.666 + FIO_BIN=/usr/src/fio-static/fio
00:00:31.666 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\c\r\y\p\t\o\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:31.666 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:31.666 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:31.666 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:31.666 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:31.666 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:31.666 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:31.666 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:31.666 + spdk/autorun.sh /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf
00:00:31.666 Test configuration:
00:00:31.666 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:31.666 SPDK_TEST_BLOCKDEV=1
00:00:31.666 SPDK_TEST_ISAL=1
00:00:31.666 SPDK_TEST_CRYPTO=1
00:00:31.666 SPDK_TEST_REDUCE=1
00:00:31.666 SPDK_TEST_VBDEV_COMPRESS=1
00:00:31.666 SPDK_RUN_UBSAN=1
00:00:31.666 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:31.666 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build
00:00:31.666 RUN_NIGHTLY=1
00:00:31.666 02:06:21 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh
00:00:31.666 02:06:21 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:31.666 02:06:21 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:31.666 02:06:21 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:31.666 02:06:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:31.666 02:06:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:31.666 02:06:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:31.666 02:06:21 -- paths/export.sh@5 -- $ export PATH
00:00:31.666 02:06:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:31.666 02:06:21 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:00:31.666 02:06:21 -- common/autobuild_common.sh@444 -- $ date +%s
00:00:31.666 02:06:21 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720656381.XXXXXX
00:00:31.666 02:06:21 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720656381.0JmIWF
00:00:31.666 02:06:21 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:00:31.666 02:06:21 -- common/autobuild_common.sh@450 -- $ '[' -n v22.11.4 ']'
00:00:31.666 02:06:21 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/crypto-phy-autotest/dpdk/build
00:00:31.666 02:06:21 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/crypto-phy-autotest/dpdk'
00:00:31.666 02:06:21 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:31.666 02:06:21 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/dpdk --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:31.666 02:06:21 -- common/autobuild_common.sh@460 -- $ get_config_params
00:00:31.666 02:06:21 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:00:31.666 02:06:21 -- common/autotest_common.sh@10 -- $ set +x
00:00:31.667 02:06:21 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build'
00:00:31.667 02:06:21 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:00:31.667 02:06:21 -- pm/common@17 -- $ local monitor
00:00:31.667 02:06:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:31.667 02:06:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:31.667 02:06:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:31.667 02:06:21 -- pm/common@21 -- $ date +%s
00:00:31.667 02:06:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:31.667 02:06:21 -- pm/common@21 -- $ date +%s
00:00:31.667 02:06:21 -- pm/common@25 -- $ sleep 1
00:00:31.667 02:06:21 -- pm/common@21 -- $ date +%s
00:00:31.667 02:06:21 -- pm/common@21 -- $ date +%s
00:00:31.667 02:06:21 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720656381
00:00:31.667 02:06:21 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720656381
00:00:31.667 02:06:21 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720656381
00:00:31.667 02:06:21 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720656381
00:00:31.667 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720656381_collect-vmstat.pm.log
00:00:31.667 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720656381_collect-cpu-load.pm.log
00:00:31.667 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720656381_collect-cpu-temp.pm.log
00:00:31.667 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720656381_collect-bmc-pm.bmc.pm.log
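start_monitor_resources launches one collector per resource (CPU load, vmstat, CPU temperature, BMC power), all keyed to the same epoch so the resulting .pm.log series line up; each collector announces its log file with a "Redirecting to" line. A rough sketch of that launch pattern (the loop and backgrounding are my simplification; the collector paths follow the log):

    stamp=$(date +%s)                                   # one timestamp shared by all series
    out=/var/jenkins/workspace/crypto-phy-autotest/output/power
    for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
        spdk/scripts/perf/pm/$mon -d "$out" -l -p "monitor.autobuild.sh.$stamp" &
    done
    sudo -E spdk/scripts/perf/pm/collect-bmc-pm -d "$out" -l -p "monitor.autobuild.sh.$stamp" &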
00:00:32.604 02:06:22 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:00:32.604 02:06:22 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:32.604 02:06:22 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:32.604 02:06:22 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:32.604 02:06:22 -- spdk/autobuild.sh@16 -- $ date -u
00:00:32.604 Thu Jul 11 12:06:22 AM UTC 2024
00:00:32.604 02:06:22 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:32.604 v24.09-pre-200-g9937c0160
00:00:32.604 02:06:22 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:32.604 02:06:22 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:32.604 02:06:22 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:32.604 02:06:22 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:00:32.604 02:06:22 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:00:32.604 02:06:22 -- common/autotest_common.sh@10 -- $ set +x
00:00:32.605 ************************************
00:00:32.605 START TEST ubsan
00:00:32.605 ************************************
00:00:32.605 02:06:22 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:00:32.605 using ubsan
00:00:32.605
00:00:32.605 real 0m0.000s
00:00:32.605 user 0m0.000s
00:00:32.605 sys 0m0.000s
00:00:32.605 02:06:22 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:00:32.605 02:06:22 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:32.605 ************************************
00:00:32.605 END TEST ubsan
00:00:32.605 ************************************
00:00:32.605 02:06:22 -- common/autotest_common.sh@1142 -- $ return 0
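run_test is the wrapper that produces the banner blocks in this log: it prints a START banner, runs the payload under bash's `time` keyword (hence the real/user/sys lines), prints an END banner, and passes the exit status through. A simplified sketch of such a wrapper, not the full SPDK implementation:

    run_test() {                      # usage: run_test <name> <command...>
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                     # emits the real/user/sys lines seen above
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }

    run_test ubsan echo 'using ubsan'   # the trivial case timed above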
00:00:32.605 02:06:22 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:00:32.605 02:06:22 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:00:32.605 02:06:22 -- common/autobuild_common.sh@436 -- $ run_test build_native_dpdk _build_native_dpdk
00:00:32.605 02:06:22 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']'
00:00:32.605 02:06:22 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:00:32.605 02:06:22 -- common/autotest_common.sh@10 -- $ set +x
00:00:32.605 ************************************
00:00:32.605 START TEST build_native_dpdk
00:00:32.605 ************************************
00:00:32.605 02:06:23 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk
00:00:32.605 02:06:23 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:00:32.605 02:06:23 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:00:32.605 02:06:23 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:00:32.605 02:06:23 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:00:32.605 02:06:23 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:00:32.605 02:06:23 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:00:32.605 02:06:23 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:00:32.605 02:06:23 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:00:32.605 02:06:23 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:00:32.605 02:06:23 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:00:32.605 02:06:23 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:00:32.605 02:06:23 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:00:32.605 02:06:23 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:00:32.605 02:06:23 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:00:32.605 02:06:23 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build
00:00:32.605 02:06:23 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/crypto-phy-autotest/dpdk/build
00:00:32.605 02:06:23 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/crypto-phy-autotest/dpdk
00:00:32.605 02:06:23 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/dpdk ]]
00:00:32.605 02:06:23 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:32.605 02:06:23 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/crypto-phy-autotest/dpdk log --oneline -n 5
00:00:32.865 caf0f5d395 version: 22.11.4
00:00:32.865 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:00:32.865 dc9c799c7d vhost: fix missing spinlock unlock
00:00:32.865 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:00:32.865 6ef77f2a5e net/gve: fix RX buffer size alignment
00:00:32.865 02:06:23 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:00:32.865 02:06:23 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:00:32.865 02:06:23 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
00:00:32.865 02:06:23 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:00:32.865 02:06:23 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:00:32.865 02:06:23 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:00:32.865 02:06:23 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:00:32.865 02:06:23 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:00:32.865 02:06:23 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
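The DPDK CFLAGS are built up incrementally, with the extra warning flags gated on the GCC major version detected a few lines earlier (13 on this node): -Werror is added on GCC 5 or newer, and -Wno-stringop-overflow on GCC 10 or newer, where this DPDK tree otherwise trips that warning. In sketch form (the thresholds are exactly the ones the trace tests; the comments are my gloss):

    compiler_version=$(gcc -dumpversion | cut -d. -f1)   # major version only
    dpdk_cflags='-fPIC -g -fcommon'
    [[ $compiler_version -ge 5 ]] && dpdk_cflags+=' -Werror'
    [[ $compiler_version -ge 10 ]] && dpdk_cflags+=' -Wno-stringop-overflow'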
00:00:32.865 02:06:23 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:00:32.865 02:06:23 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:00:32.865 02:06:23 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 1 -eq 1 ]]
00:00:32.865 02:06:23 build_native_dpdk -- common/autobuild_common.sh@104 -- $ intel_ipsec_mb_ver=v0.54
00:00:32.865 02:06:23 build_native_dpdk -- common/autobuild_common.sh@105 -- $ intel_ipsec_mb_drv=crypto/aesni_mb
00:00:32.865 02:06:23 build_native_dpdk -- common/autobuild_common.sh@106 -- $ intel_ipsec_lib=
00:00:32.865 02:06:23 build_native_dpdk -- common/autobuild_common.sh@107 -- $ ge 22.11.4 21.11.0
00:00:32.865 02:06:23 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '>=' 21.11.0
00:00:32.865 02:06:23 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l
00:00:32.865 02:06:23 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l
00:00:32.865 02:06:23 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-:
00:00:32.865 02:06:23 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1
00:00:32.865 02:06:23 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-:
00:00:32.865 02:06:23 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2
00:00:32.865 02:06:23 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=>='
00:00:32.865 02:06:23 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3
00:00:32.865 02:06:23 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3
00:00:32.865 02:06:23 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
00:00:32.865 02:06:23 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in
00:00:32.865 02:06:23 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:00:32.865 02:06:23 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 ))
00:00:32.865 02:06:23 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:00:32.865 02:06:23 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22
00:00:32.865 02:06:23 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22
00:00:32.865 02:06:23 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:00:32.865 02:06:23 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22
00:00:32.865 02:06:23 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22
00:00:32.865 02:06:23 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21
00:00:32.865 02:06:23 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21
00:00:32.865 02:06:23 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:00:32.865 02:06:23 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21
00:00:32.865 02:06:23 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21
00:00:32.865 02:06:23 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
00:00:32.865 02:06:23 build_native_dpdk -- scripts/common.sh@364 -- $ return 0
00:00:32.865 02:06:23 build_native_dpdk -- common/autobuild_common.sh@112 -- $ intel_ipsec_mb_ver=v1.0
00:00:32.865 02:06:23 build_native_dpdk -- common/autobuild_common.sh@113 -- $ intel_ipsec_mb_drv=crypto/ipsec_mb
00:00:32.865 02:06:23 build_native_dpdk -- common/autobuild_common.sh@114 -- $ intel_ipsec_lib=lib
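cmp_versions splits both version strings on '.', '-' and ':' and compares them field by field; above, 22 > 21 settles 22.11.4 >= 21.11.0 on the first field, so intel-ipsec-mb v1.0 and the crypto/ipsec_mb driver are selected. A condensed sketch of that comparison, reduced to the >= case exercised here (not the full common.sh routine):

    ge() {                                    # ge <v1> <v2>: is v1 >= v2 ?
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 0   # first higher field wins
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 1
        done
        return 0                              # equal versions satisfy >=
    }

    ge 22.11.4 21.11.0 && intel_ipsec_mb_ver=v1.0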
00:00:32.865 02:06:23 build_native_dpdk -- common/autobuild_common.sh@116 -- $ git clone --branch v1.0 --depth 1 https://github.com/intel/intel-ipsec-mb.git /var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb
00:00:32.865 Cloning into '/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb'...
00:00:33.823 Note: switching to 'a1a289dabb23be78d6531de481ba6a417c67b0a5'.
00:00:33.823
00:00:33.824 You are in 'detached HEAD' state. You can look around, make experimental
00:00:33.824 changes and commit them, and you can discard any commits you make in this
00:00:33.824 state without impacting any branches by switching back to a branch.
00:00:33.824
00:00:33.824 If you want to create a new branch to retain commits you create, you may
00:00:33.824 do so (now or later) by using -c with the switch command. Example:
00:00:33.824
00:00:33.824 git switch -c <new-branch-name>
00:00:33.824
00:00:33.824 Or undo this operation with:
00:00:33.824
00:00:33.824 git switch -
00:00:33.824
00:00:33.824 Turn off this advice by setting config variable advice.detachedHead to false
00:00:33.824
00:00:33.824 02:06:24 build_native_dpdk -- common/autobuild_common.sh@117 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb
00:00:33.824 02:06:24 build_native_dpdk -- common/autobuild_common.sh@118 -- $ make -j72 all SHARED=y EXTRA_CFLAGS=-fPIC
00:00:33.824 make -C lib
00:00:33.824 make[1]: Entering directory '/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib'
00:00:34.766 mkdir obj
00:00:34.766 nasm -MD obj/aes_keyexp_128.d -MT obj/aes_keyexp_128.o -o obj/aes_keyexp_128.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/aes_keyexp_128.asm
00:00:34.766 nasm -MD obj/aes_keyexp_192.d -MT obj/aes_keyexp_192.o -o obj/aes_keyexp_192.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/aes_keyexp_192.asm
00:00:34.766 nasm -MD obj/aes_keyexp_256.d -MT obj/aes_keyexp_256.o -o obj/aes_keyexp_256.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/aes_keyexp_256.asm
00:00:34.766 nasm -MD obj/aes_cmac_subkey_gen.d -MT obj/aes_cmac_subkey_gen.o -o obj/aes_cmac_subkey_gen.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/aes_cmac_subkey_gen.asm
00:00:34.766 nasm -MD obj/save_xmms.d -MT obj/save_xmms.o -o obj/save_xmms.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/save_xmms.asm
00:00:34.766 nasm -MD obj/clear_regs_mem_fns.d -MT obj/clear_regs_mem_fns.o -o obj/clear_regs_mem_fns.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/clear_regs_mem_fns.asm
00:00:34.766 nasm -MD obj/const.d -MT obj/const.o -o obj/const.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/const.asm
00:00:34.766 nasm -MD obj/aes128_ecbenc_x3.d -MT obj/aes128_ecbenc_x3.o -o obj/aes128_ecbenc_x3.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/aes128_ecbenc_x3.asm
00:00:34.766 nasm -MD obj/zuc_common.d -MT obj/zuc_common.o -o obj/zuc_common.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/zuc_common.asm
00:00:34.766 nasm -MD obj/wireless_common.d -MT obj/wireless_common.o -o obj/wireless_common.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/wireless_common.asm
00:00:34.766 nasm -MD obj/constant_lookup.d -MT obj/constant_lookup.o -o obj/constant_lookup.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/constant_lookup.asm
00:00:34.766 nasm -MD obj/crc32_refl_const.d -MT obj/crc32_refl_const.o -o obj/crc32_refl_const.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/crc32_refl_const.asm
00:00:34.766 nasm -MD obj/crc32_const.d -MT obj/crc32_const.o -o obj/crc32_const.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/crc32_const.asm
00:00:34.766 nasm -MD obj/poly1305.d -MT obj/poly1305.o -o obj/poly1305.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/poly1305.asm
00:00:34.766 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/chacha20_poly1305.c -o obj/chacha20_poly1305.o
00:00:34.766 ld -r -z ibt -z shstk -o obj/save_xmms.o.tmp obj/save_xmms.o
00:00:34.766 ld -r -z ibt -z shstk -o obj/const.o.tmp obj/const.o
00:00:34.766 nasm -MD obj/aes128_cbc_dec_by4_sse_no_aesni.d -MT obj/aes128_cbc_dec_by4_sse_no_aesni.o -o obj/aes128_cbc_dec_by4_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes128_cbc_dec_by4_sse_no_aesni.asm
00:00:34.766 ld -r -z ibt -z shstk -o obj/clear_regs_mem_fns.o.tmp obj/clear_regs_mem_fns.o
00:00:34.766 nasm -MD obj/aes192_cbc_dec_by4_sse_no_aesni.d -MT obj/aes192_cbc_dec_by4_sse_no_aesni.o -o obj/aes192_cbc_dec_by4_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes192_cbc_dec_by4_sse_no_aesni.asm
00:00:34.766 ld -r -z ibt -z shstk -o obj/wireless_common.o.tmp obj/wireless_common.o
00:00:34.766 ld -r -z ibt -z shstk -o obj/crc32_refl_const.o.tmp obj/crc32_refl_const.o
00:00:34.766 nasm -MD obj/aes256_cbc_dec_by4_sse_no_aesni.d -MT obj/aes256_cbc_dec_by4_sse_no_aesni.o -o obj/aes256_cbc_dec_by4_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes256_cbc_dec_by4_sse_no_aesni.asm
00:00:34.766 mv obj/save_xmms.o.tmp obj/save_xmms.o
00:00:34.766 mv obj/const.o.tmp obj/const.o
00:00:34.766 mv obj/clear_regs_mem_fns.o.tmp obj/clear_regs_mem_fns.o
00:00:34.766 ld -r -z ibt -z shstk -o obj/crc32_const.o.tmp obj/crc32_const.o
00:00:34.766 mv obj/wireless_common.o.tmp obj/wireless_common.o
00:00:34.766 nasm -MD obj/aes_cbc_enc_128_x4_no_aesni.d -MT obj/aes_cbc_enc_128_x4_no_aesni.o -o obj/aes_cbc_enc_128_x4_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes_cbc_enc_128_x4_no_aesni.asm
00:00:34.766 mv obj/crc32_refl_const.o.tmp obj/crc32_refl_const.o
00:00:34.766 nasm -MD obj/aes_cbc_enc_192_x4_no_aesni.d -MT obj/aes_cbc_enc_192_x4_no_aesni.o -o obj/aes_cbc_enc_192_x4_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes_cbc_enc_192_x4_no_aesni.asm
00:00:34.766 mv obj/crc32_const.o.tmp obj/crc32_const.o
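Every object in this make run goes through the same three steps, which is why the nasm, ld and mv lines interleave freely under -j72: assemble to obj/X.o (with -MD emitting a dependency file for incremental rebuilds), relink with `ld -r -z ibt -z shstk` into obj/X.o.tmp to attach the IBT/SHSTK (CET) GNU property notes, then rename the result back over the original. One representative object, flags copied from the log:

    nasm -MD obj/save_xmms.d -MT obj/save_xmms.o -o obj/save_xmms.o \
        -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ \
        -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/save_xmms.asm
    ld -r -z ibt -z shstk -o obj/save_xmms.o.tmp obj/save_xmms.o  # mark CET support
    mv obj/save_xmms.o.tmp obj/save_xmms.o                        # replace in place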
00:00:34.766 nasm -MD obj/aes_cbc_enc_256_x4_no_aesni.d -MT obj/aes_cbc_enc_256_x4_no_aesni.o -o obj/aes_cbc_enc_256_x4_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes_cbc_enc_256_x4_no_aesni.asm
00:00:34.766 nasm -MD obj/aes128_cntr_by8_sse_no_aesni.d -MT obj/aes128_cntr_by8_sse_no_aesni.o -o obj/aes128_cntr_by8_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes128_cntr_by8_sse_no_aesni.asm
00:00:34.766 nasm -MD obj/aes192_cntr_by8_sse_no_aesni.d -MT obj/aes192_cntr_by8_sse_no_aesni.o -o obj/aes192_cntr_by8_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes192_cntr_by8_sse_no_aesni.asm
00:00:34.766 ld -r -z ibt -z shstk -o obj/constant_lookup.o.tmp obj/constant_lookup.o
00:00:34.766 nasm -MD obj/aes256_cntr_by8_sse_no_aesni.d -MT obj/aes256_cntr_by8_sse_no_aesni.o -o obj/aes256_cntr_by8_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes256_cntr_by8_sse_no_aesni.asm
00:00:34.766 nasm -MD obj/aes_ecb_by4_sse_no_aesni.d -MT obj/aes_ecb_by4_sse_no_aesni.o -o obj/aes_ecb_by4_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes_ecb_by4_sse_no_aesni.asm
00:00:34.766 nasm -MD obj/aes128_cntr_ccm_by8_sse_no_aesni.d -MT obj/aes128_cntr_ccm_by8_sse_no_aesni.o -o obj/aes128_cntr_ccm_by8_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes128_cntr_ccm_by8_sse_no_aesni.asm
00:00:34.766 nasm -MD obj/aes256_cntr_ccm_by8_sse_no_aesni.d -MT obj/aes256_cntr_ccm_by8_sse_no_aesni.o -o obj/aes256_cntr_ccm_by8_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes256_cntr_ccm_by8_sse_no_aesni.asm
00:00:34.766 mv obj/constant_lookup.o.tmp obj/constant_lookup.o
00:00:34.766 nasm -MD obj/pon_sse_no_aesni.d -MT obj/pon_sse_no_aesni.o -o obj/pon_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/pon_sse_no_aesni.asm
00:00:34.766 nasm -MD obj/zuc_sse_no_aesni.d -MT obj/zuc_sse_no_aesni.o -o obj/zuc_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/zuc_sse_no_aesni.asm
00:00:34.766 nasm -MD obj/aes_cfb_sse_no_aesni.d -MT obj/aes_cfb_sse_no_aesni.o -o obj/aes_cfb_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes_cfb_sse_no_aesni.asm
00:00:34.766 nasm -MD obj/aes128_cbc_mac_x4_no_aesni.d -MT obj/aes128_cbc_mac_x4_no_aesni.o -o obj/aes128_cbc_mac_x4_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes128_cbc_mac_x4_no_aesni.asm
00:00:34.766 nasm -MD obj/aes256_cbc_mac_x4_no_aesni.d -MT obj/aes256_cbc_mac_x4_no_aesni.o -o obj/aes256_cbc_mac_x4_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes256_cbc_mac_x4_no_aesni.asm
00:00:34.766 nasm -MD obj/aes_xcbc_mac_128_x4_no_aesni.d -MT obj/aes_xcbc_mac_128_x4_no_aesni.o -o obj/aes_xcbc_mac_128_x4_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes_xcbc_mac_128_x4_no_aesni.asm
00:00:34.766 nasm -MD obj/mb_mgr_aes_flush_sse_no_aesni.d -MT obj/mb_mgr_aes_flush_sse_no_aesni.o -o obj/mb_mgr_aes_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes_flush_sse_no_aesni.asm
00:00:34.766 ld -r -z ibt -z shstk -o obj/poly1305.o.tmp obj/poly1305.o
00:00:34.766 nasm -MD obj/mb_mgr_aes_submit_sse_no_aesni.d -MT obj/mb_mgr_aes_submit_sse_no_aesni.o -o obj/mb_mgr_aes_submit_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes_submit_sse_no_aesni.asm
00:00:34.766 nasm -MD obj/mb_mgr_aes192_flush_sse_no_aesni.d -MT obj/mb_mgr_aes192_flush_sse_no_aesni.o -o obj/mb_mgr_aes192_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes192_flush_sse_no_aesni.asm
00:00:34.766 nasm -MD obj/mb_mgr_aes192_submit_sse_no_aesni.d -MT obj/mb_mgr_aes192_submit_sse_no_aesni.o -o obj/mb_mgr_aes192_submit_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes192_submit_sse_no_aesni.asm
00:00:34.766 nasm -MD obj/mb_mgr_aes256_flush_sse_no_aesni.d -MT obj/mb_mgr_aes256_flush_sse_no_aesni.o -o obj/mb_mgr_aes256_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes256_flush_sse_no_aesni.asm
00:00:34.766 mv obj/poly1305.o.tmp obj/poly1305.o
00:00:34.766 nasm -MD obj/mb_mgr_aes256_submit_sse_no_aesni.d -MT obj/mb_mgr_aes256_submit_sse_no_aesni.o -o obj/mb_mgr_aes256_submit_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes256_submit_sse_no_aesni.asm
00:00:34.766 nasm -MD obj/mb_mgr_aes_cmac_submit_flush_sse_no_aesni.d -MT obj/mb_mgr_aes_cmac_submit_flush_sse_no_aesni.o -o obj/mb_mgr_aes_cmac_submit_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes_cmac_submit_flush_sse_no_aesni.asm
00:00:34.766 nasm -MD obj/mb_mgr_aes256_cmac_submit_flush_sse_no_aesni.d -MT obj/mb_mgr_aes256_cmac_submit_flush_sse_no_aesni.o -o obj/mb_mgr_aes256_cmac_submit_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes256_cmac_submit_flush_sse_no_aesni.asm
00:00:34.766 nasm -MD obj/mb_mgr_aes_ccm_auth_submit_flush_sse_no_aesni.d -MT obj/mb_mgr_aes_ccm_auth_submit_flush_sse_no_aesni.o -o obj/mb_mgr_aes_ccm_auth_submit_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes_ccm_auth_submit_flush_sse_no_aesni.asm
00:00:34.766 nasm -MD obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_no_aesni.d -MT obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_no_aesni.o -o obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes256_ccm_auth_submit_flush_sse_no_aesni.asm
00:00:34.766 nasm -MD obj/mb_mgr_aes_xcbc_flush_sse_no_aesni.d -MT obj/mb_mgr_aes_xcbc_flush_sse_no_aesni.o -o obj/mb_mgr_aes_xcbc_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes_xcbc_flush_sse_no_aesni.asm
00:00:34.766 nasm -MD obj/mb_mgr_aes_xcbc_submit_sse_no_aesni.d -MT obj/mb_mgr_aes_xcbc_submit_sse_no_aesni.o -o obj/mb_mgr_aes_xcbc_submit_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes_xcbc_submit_sse_no_aesni.asm
00:00:34.766 nasm -MD obj/mb_mgr_zuc_submit_flush_sse_no_aesni.d -MT obj/mb_mgr_zuc_submit_flush_sse_no_aesni.o -o obj/mb_mgr_zuc_submit_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_zuc_submit_flush_sse_no_aesni.asm
00:00:34.766 nasm -MD obj/ethernet_fcs_sse_no_aesni.d -MT obj/ethernet_fcs_sse_no_aesni.o -o obj/ethernet_fcs_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/ethernet_fcs_sse_no_aesni.asm
00:00:34.766 nasm -MD obj/crc16_x25_sse_no_aesni.d -MT obj/crc16_x25_sse_no_aesni.o -o obj/crc16_x25_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/crc16_x25_sse_no_aesni.asm
00:00:34.766 nasm -MD obj/aes_cbcs_1_9_enc_128_x4_no_aesni.d -MT obj/aes_cbcs_1_9_enc_128_x4_no_aesni.o -o obj/aes_cbcs_1_9_enc_128_x4_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes_cbcs_1_9_enc_128_x4_no_aesni.asm
00:00:34.766 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes_submit_sse_no_aesni.o
00:00:34.766 nasm -MD obj/aes128_cbcs_1_9_dec_by4_sse_no_aesni.d -MT obj/aes128_cbcs_1_9_dec_by4_sse_no_aesni.o -o obj/aes128_cbcs_1_9_dec_by4_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes128_cbcs_1_9_dec_by4_sse_no_aesni.asm
00:00:34.766 mv obj/mb_mgr_aes_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes_submit_sse_no_aesni.o
00:00:34.766 nasm -MD obj/mb_mgr_aes128_cbcs_1_9_submit_sse.d -MT obj/mb_mgr_aes128_cbcs_1_9_submit_sse.o -o obj/mb_mgr_aes128_cbcs_1_9_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes128_cbcs_1_9_submit_sse.asm
00:00:34.766 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes_flush_sse_no_aesni.o
00:00:34.766 nasm -MD obj/mb_mgr_aes128_cbcs_1_9_flush_sse.d -MT obj/mb_mgr_aes128_cbcs_1_9_flush_sse.o -o obj/mb_mgr_aes128_cbcs_1_9_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes128_cbcs_1_9_flush_sse.asm
00:00:34.766 ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes192_flush_sse_no_aesni.o
00:00:34.766 ld -r -z ibt -z shstk -o obj/ethernet_fcs_sse_no_aesni.o.tmp obj/ethernet_fcs_sse_no_aesni.o
00:00:34.766 ld -r -z ibt -z shstk -o obj/crc16_x25_sse_no_aesni.o.tmp obj/crc16_x25_sse_no_aesni.o
00:00:34.766 mv obj/mb_mgr_aes_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes_flush_sse_no_aesni.o
00:00:34.766 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes256_flush_sse_no_aesni.o
00:00:34.766 nasm -MD obj/mb_mgr_aes128_cbcs_1_9_submit_sse_no_aesni.d -MT obj/mb_mgr_aes128_cbcs_1_9_submit_sse_no_aesni.o -o obj/mb_mgr_aes128_cbcs_1_9_submit_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes128_cbcs_1_9_submit_sse_no_aesni.asm
00:00:34.766 mv obj/mb_mgr_aes192_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes192_flush_sse_no_aesni.o
00:00:34.766 mv obj/ethernet_fcs_sse_no_aesni.o.tmp obj/ethernet_fcs_sse_no_aesni.o
00:00:34.766 mv obj/crc16_x25_sse_no_aesni.o.tmp obj/crc16_x25_sse_no_aesni.o
00:00:34.766 nasm -MD obj/mb_mgr_aes128_cbcs_1_9_flush_sse_no_aesni.d -MT obj/mb_mgr_aes128_cbcs_1_9_flush_sse_no_aesni.o -o obj/mb_mgr_aes128_cbcs_1_9_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes128_cbcs_1_9_flush_sse_no_aesni.asm
00:00:34.766 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes256_submit_sse_no_aesni.o
00:00:34.766 ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes192_submit_sse_no_aesni.o
00:00:34.766 mv obj/mb_mgr_aes256_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes256_flush_sse_no_aesni.o
00:00:34.766 nasm -MD obj/crc32_refl_by8_sse_no_aesni.d -MT obj/crc32_refl_by8_sse_no_aesni.o -o obj/crc32_refl_by8_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/crc32_refl_by8_sse_no_aesni.asm
00:00:34.766 mv obj/mb_mgr_aes256_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes256_submit_sse_no_aesni.o
00:00:34.766 mv obj/mb_mgr_aes192_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes192_submit_sse_no_aesni.o
00:00:34.766 nasm -MD obj/crc32_by8_sse_no_aesni.d -MT obj/crc32_by8_sse_no_aesni.o -o obj/crc32_by8_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/crc32_by8_sse_no_aesni.asm
00:00:34.766 nasm -MD obj/crc32_sctp_sse_no_aesni.d -MT obj/crc32_sctp_sse_no_aesni.o -o obj/crc32_sctp_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/crc32_sctp_sse_no_aesni.asm
00:00:34.766 nasm -MD obj/crc32_lte_sse_no_aesni.d -MT obj/crc32_lte_sse_no_aesni.o -o obj/crc32_lte_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/crc32_lte_sse_no_aesni.asm
00:00:34.766 nasm -MD obj/crc32_fp_sse_no_aesni.d -MT obj/crc32_fp_sse_no_aesni.o -o obj/crc32_fp_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/crc32_fp_sse_no_aesni.asm
00:00:34.766 nasm -MD obj/crc32_iuup_sse_no_aesni.d -MT obj/crc32_iuup_sse_no_aesni.o -o obj/crc32_iuup_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/crc32_iuup_sse_no_aesni.asm
00:00:34.766 nasm -MD obj/crc32_wimax_sse_no_aesni.d -MT obj/crc32_wimax_sse_no_aesni.o -o obj/crc32_wimax_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/crc32_wimax_sse_no_aesni.asm
00:00:34.766 nasm -MD obj/gcm128_sse_no_aesni.d -MT obj/gcm128_sse_no_aesni.o -o obj/gcm128_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/gcm128_sse_no_aesni.asm
00:00:34.766 ld -r -z ibt -z shstk -o obj/crc32_sctp_sse_no_aesni.o.tmp obj/crc32_sctp_sse_no_aesni.o
00:00:34.766 nasm -MD obj/gcm192_sse_no_aesni.d -MT obj/gcm192_sse_no_aesni.o -o obj/gcm192_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/gcm192_sse_no_aesni.asm
00:00:34.766 ld -r -z ibt -z shstk -o obj/crc32_lte_sse_no_aesni.o.tmp obj/crc32_lte_sse_no_aesni.o
00:00:34.766 mv obj/crc32_sctp_sse_no_aesni.o.tmp obj/crc32_sctp_sse_no_aesni.o
00:00:34.766 nasm -MD obj/gcm256_sse_no_aesni.d -MT obj/gcm256_sse_no_aesni.o -o obj/gcm256_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/gcm256_sse_no_aesni.asm
00:00:34.766 ld -r -z ibt -z shstk -o obj/aes_cmac_subkey_gen.o.tmp obj/aes_cmac_subkey_gen.o
00:00:34.766 nasm -MD obj/aes128_cbc_dec_by4_sse.d -MT obj/aes128_cbc_dec_by4_sse.o -o obj/aes128_cbc_dec_by4_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes128_cbc_dec_by4_sse.asm
00:00:34.766 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_xcbc_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes_xcbc_flush_sse_no_aesni.o
00:00:34.766 nasm -MD obj/aes128_cbc_dec_by8_sse.d -MT obj/aes128_cbc_dec_by8_sse.o -o obj/aes128_cbc_dec_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes128_cbc_dec_by8_sse.asm
00:00:34.766 ld -r -z ibt -z shstk -o obj/crc32_fp_sse_no_aesni.o.tmp obj/crc32_fp_sse_no_aesni.o
00:00:34.766 ld -r -z ibt -z shstk -o obj/crc32_iuup_sse_no_aesni.o.tmp obj/crc32_iuup_sse_no_aesni.o
00:00:34.766 ld -r -z ibt -z shstk -o obj/crc32_wimax_sse_no_aesni.o.tmp obj/crc32_wimax_sse_no_aesni.o
00:00:34.766 mv obj/crc32_lte_sse_no_aesni.o.tmp obj/crc32_lte_sse_no_aesni.o
00:00:34.766 ld -r -z ibt -z shstk -o obj/aes_keyexp_192.o.tmp obj/aes_keyexp_192.o
00:00:34.766 ld -r -z ibt -z shstk -o obj/mb_mgr_aes128_cbcs_1_9_flush_sse.o.tmp obj/mb_mgr_aes128_cbcs_1_9_flush_sse.o
00:00:34.766 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_xcbc_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes_xcbc_submit_sse_no_aesni.o
00:00:34.766 mv obj/aes_cmac_subkey_gen.o.tmp obj/aes_cmac_subkey_gen.o
00:00:34.766 mv obj/mb_mgr_aes_xcbc_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes_xcbc_flush_sse_no_aesni.o
00:00:34.766 mv obj/crc32_fp_sse_no_aesni.o.tmp obj/crc32_fp_sse_no_aesni.o
00:00:34.766 mv obj/crc32_wimax_sse_no_aesni.o.tmp obj/crc32_wimax_sse_no_aesni.o
00:00:34.766 ld -r -z ibt -z shstk -o obj/aes128_cbc_dec_by4_sse.o.tmp obj/aes128_cbc_dec_by4_sse.o
00:00:34.766 mv obj/crc32_iuup_sse_no_aesni.o.tmp obj/crc32_iuup_sse_no_aesni.o
00:00:34.766 mv obj/mb_mgr_aes_xcbc_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes_xcbc_submit_sse_no_aesni.o
00:00:34.766 mv obj/aes_keyexp_192.o.tmp obj/aes_keyexp_192.o
00:00:34.766 mv obj/mb_mgr_aes128_cbcs_1_9_flush_sse.o.tmp obj/mb_mgr_aes128_cbcs_1_9_flush_sse.o
00:00:34.766 nasm -MD obj/aes192_cbc_dec_by4_sse.d -MT obj/aes192_cbc_dec_by4_sse.o -o obj/aes192_cbc_dec_by4_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes192_cbc_dec_by4_sse.asm
00:00:34.766 mv obj/aes128_cbc_dec_by4_sse.o.tmp obj/aes128_cbc_dec_by4_sse.o
00:00:34.766 nasm -MD obj/aes192_cbc_dec_by8_sse.d -MT obj/aes192_cbc_dec_by8_sse.o -o obj/aes192_cbc_dec_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes192_cbc_dec_by8_sse.asm
00:00:34.766 nasm -MD obj/aes256_cbc_dec_by4_sse.d -MT obj/aes256_cbc_dec_by4_sse.o -o obj/aes256_cbc_dec_by4_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes256_cbc_dec_by4_sse.asm
00:00:34.766 ld -r -z ibt -z shstk -o obj/mb_mgr_aes128_cbcs_1_9_submit_sse.o.tmp obj/mb_mgr_aes128_cbcs_1_9_submit_sse.o
00:00:34.766 ld -r -z ibt -z shstk -o obj/aes_keyexp_128.o.tmp obj/aes_keyexp_128.o
00:00:34.766 ld -r -z ibt -z shstk -o obj/mb_mgr_aes128_cbcs_1_9_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes128_cbcs_1_9_flush_sse_no_aesni.o
00:00:34.766 nasm -MD obj/aes256_cbc_dec_by8_sse.d -MT obj/aes256_cbc_dec_by8_sse.o -o obj/aes256_cbc_dec_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes256_cbc_dec_by8_sse.asm
00:00:34.766 mv obj/mb_mgr_aes128_cbcs_1_9_submit_sse.o.tmp obj/mb_mgr_aes128_cbcs_1_9_submit_sse.o
00:00:34.766 nasm -MD obj/aes_cbc_enc_128_x4.d -MT obj/aes_cbc_enc_128_x4.o -o obj/aes_cbc_enc_128_x4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_cbc_enc_128_x4.asm
00:00:34.766 ld -r -z ibt -z shstk -o obj/aes128_cbc_dec_by8_sse.o.tmp obj/aes128_cbc_dec_by8_sse.o
00:00:34.766 mv obj/aes_keyexp_128.o.tmp obj/aes_keyexp_128.o
00:00:34.766 mv obj/mb_mgr_aes128_cbcs_1_9_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes128_cbcs_1_9_flush_sse_no_aesni.o
00:00:34.766 nasm -MD obj/aes_cbc_enc_192_x4.d -MT obj/aes_cbc_enc_192_x4.o -o obj/aes_cbc_enc_192_x4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_cbc_enc_192_x4.asm
00:00:35.026 ld -r -z ibt -z shstk -o obj/aes_keyexp_256.o.tmp obj/aes_keyexp_256.o
00:00:35.026 ld -r -z ibt -z shstk -o obj/aes128_ecbenc_x3.o.tmp obj/aes128_ecbenc_x3.o
00:00:35.026 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_cmac_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_sse_no_aesni.o
00:00:35.026 ld -r -z ibt -z shstk -o obj/aes192_cbc_dec_by4_sse.o.tmp obj/aes192_cbc_dec_by4_sse.o
00:00:35.026 ld -r -z ibt -z shstk -o obj/aes192_cbc_dec_by8_sse.o.tmp obj/aes192_cbc_dec_by8_sse.o
00:00:35.026 ld -r -z ibt -z shstk -o obj/aes256_cbc_dec_by4_sse.o.tmp obj/aes256_cbc_dec_by4_sse.o
00:00:35.026 ld -r -z ibt -z shstk -o obj/aes256_cbc_dec_by8_sse.o.tmp obj/aes256_cbc_dec_by8_sse.o
00:00:35.026 mv obj/aes128_cbc_dec_by8_sse.o.tmp obj/aes128_cbc_dec_by8_sse.o
00:00:35.026 nasm -MD obj/aes_cbc_enc_256_x4.d -MT obj/aes_cbc_enc_256_x4.o -o obj/aes_cbc_enc_256_x4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_cbc_enc_256_x4.asm
00:00:35.026 ld -r -z ibt -z shstk -o obj/aes_cbc_enc_192_x4.o.tmp obj/aes_cbc_enc_192_x4.o
00:00:35.026 mv obj/aes_keyexp_256.o.tmp obj/aes_keyexp_256.o
00:00:35.026 mv obj/aes128_ecbenc_x3.o.tmp obj/aes128_ecbenc_x3.o
00:00:35.026 mv obj/aes192_cbc_dec_by4_sse.o.tmp obj/aes192_cbc_dec_by4_sse.o
00:00:35.026 mv obj/mb_mgr_aes256_cmac_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_sse_no_aesni.o
00:00:35.026 mv obj/aes192_cbc_dec_by8_sse.o.tmp obj/aes192_cbc_dec_by8_sse.o
00:00:35.026 ld -r -z ibt -z shstk -o obj/mb_mgr_zuc_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_zuc_submit_flush_sse_no_aesni.o
00:00:35.026 mv obj/aes256_cbc_dec_by4_sse.o.tmp obj/aes256_cbc_dec_by4_sse.o
00:00:35.026 ld -r -z ibt -z shstk -o obj/aes_cbc_enc_128_x4.o.tmp obj/aes_cbc_enc_128_x4.o
00:00:35.026 mv obj/aes256_cbc_dec_by8_sse.o.tmp obj/aes256_cbc_dec_by8_sse.o
00:00:35.026 nasm -MD obj/aes_cbc_enc_128_x8_sse.d -MT obj/aes_cbc_enc_128_x8_sse.o -o obj/aes_cbc_enc_128_x8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_cbc_enc_128_x8_sse.asm
00:00:35.026 mv obj/aes_cbc_enc_192_x4.o.tmp obj/aes_cbc_enc_192_x4.o
00:00:35.026 mv obj/mb_mgr_zuc_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_zuc_submit_flush_sse_no_aesni.o
00:00:35.026 nasm -MD obj/aes_cbc_enc_192_x8_sse.d -MT obj/aes_cbc_enc_192_x8_sse.o -o obj/aes_cbc_enc_192_x8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_cbc_enc_192_x8_sse.asm
00:00:35.026 mv obj/aes_cbc_enc_128_x4.o.tmp obj/aes_cbc_enc_128_x4.o
00:00:35.026 nasm -MD obj/aes_cbc_enc_256_x8_sse.d -MT obj/aes_cbc_enc_256_x8_sse.o -o obj/aes_cbc_enc_256_x8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_cbc_enc_256_x8_sse.asm
00:00:35.026 nasm -MD obj/pon_sse.d -MT obj/pon_sse.o -o obj/pon_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/pon_sse.asm
00:00:35.026 nasm -MD obj/aes128_cntr_by8_sse.d -MT obj/aes128_cntr_by8_sse.o -o obj/aes128_cntr_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes128_cntr_by8_sse.asm
00:00:35.026 nasm -MD obj/aes192_cntr_by8_sse.d -MT obj/aes192_cntr_by8_sse.o -o obj/aes192_cntr_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes192_cntr_by8_sse.asm
00:00:35.027 nasm -MD obj/aes256_cntr_by8_sse.d -MT obj/aes256_cntr_by8_sse.o -o obj/aes256_cntr_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes256_cntr_by8_sse.asm
00:00:35.027 nasm -MD obj/aes_ecb_by4_sse.d -MT obj/aes_ecb_by4_sse.o -o obj/aes_ecb_by4_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_ecb_by4_sse.asm
00:00:35.027 nasm -MD obj/aes128_cntr_ccm_by8_sse.d -MT obj/aes128_cntr_ccm_by8_sse.o -o obj/aes128_cntr_ccm_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes128_cntr_ccm_by8_sse.asm
00:00:35.027 nasm -MD obj/aes256_cntr_ccm_by8_sse.d -MT obj/aes256_cntr_ccm_by8_sse.o -o obj/aes256_cntr_ccm_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes256_cntr_ccm_by8_sse.asm
00:00:35.027 nasm -MD obj/aes_cfb_sse.d -MT obj/aes_cfb_sse.o -o obj/aes_cfb_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_cfb_sse.asm
00:00:35.027 nasm -MD obj/aes128_cbc_mac_x4.d -MT obj/aes128_cbc_mac_x4.o -o obj/aes128_cbc_mac_x4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes128_cbc_mac_x4.asm
00:00:35.027 nasm -MD obj/aes256_cbc_mac_x4.d -MT obj/aes256_cbc_mac_x4.o -o obj/aes256_cbc_mac_x4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes256_cbc_mac_x4.asm
00:00:35.027 nasm -MD obj/aes128_cbc_mac_x8_sse.d -MT obj/aes128_cbc_mac_x8_sse.o -o obj/aes128_cbc_mac_x8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes128_cbc_mac_x8_sse.asm
00:00:35.027 nasm -MD obj/aes256_cbc_mac_x8_sse.d -MT obj/aes256_cbc_mac_x8_sse.o -o obj/aes256_cbc_mac_x8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes256_cbc_mac_x8_sse.asm
00:00:35.027 nasm -MD obj/aes_xcbc_mac_128_x4.d -MT obj/aes_xcbc_mac_128_x4.o -o obj/aes_xcbc_mac_128_x4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_xcbc_mac_128_x4.asm
00:00:35.027 nasm -MD obj/md5_x4x2_sse.d -MT obj/md5_x4x2_sse.o -o obj/md5_x4x2_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/md5_x4x2_sse.asm
00:00:35.027 ld -r -z ibt -z shstk -o obj/aes_cfb_sse_no_aesni.o.tmp obj/aes_cfb_sse_no_aesni.o
00:00:35.027 nasm -MD obj/sha1_mult_sse.d -MT obj/sha1_mult_sse.o -o obj/sha1_mult_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha1_mult_sse.asm
00:00:35.027 nasm -MD obj/sha1_one_block_sse.d -MT obj/sha1_one_block_sse.o -o obj/sha1_one_block_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha1_one_block_sse.asm
00:00:35.027 nasm -MD obj/sha224_one_block_sse.d -MT obj/sha224_one_block_sse.o -o obj/sha224_one_block_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha224_one_block_sse.asm
00:00:35.027 nasm -MD obj/sha256_one_block_sse.d -MT obj/sha256_one_block_sse.o -o obj/sha256_one_block_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha256_one_block_sse.asm
00:00:35.027 mv obj/aes_cfb_sse_no_aesni.o.tmp obj/aes_cfb_sse_no_aesni.o
00:00:35.027 nasm -MD obj/sha384_one_block_sse.d -MT obj/sha384_one_block_sse.o -o obj/sha384_one_block_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha384_one_block_sse.asm
00:00:35.027 nasm -MD obj/sha512_one_block_sse.d -MT obj/sha512_one_block_sse.o -o obj/sha512_one_block_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha512_one_block_sse.asm
00:00:35.027 ld -r -z ibt -z shstk -o obj/mb_mgr_aes128_cbcs_1_9_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes128_cbcs_1_9_submit_sse_no_aesni.o
00:00:35.027 nasm -MD obj/sha512_x2_sse.d -MT obj/sha512_x2_sse.o -o obj/sha512_x2_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha512_x2_sse.asm
00:00:35.027 nasm -MD obj/sha_256_mult_sse.d -MT obj/sha_256_mult_sse.o -o obj/sha_256_mult_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha_256_mult_sse.asm
00:00:35.027 ld -r -z ibt -z shstk -o obj/aes_cbc_enc_256_x4.o.tmp obj/aes_cbc_enc_256_x4.o
00:00:35.027 nasm -MD obj/sha1_ni_x2_sse.d -MT obj/sha1_ni_x2_sse.o -o obj/sha1_ni_x2_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha1_ni_x2_sse.asm
00:00:35.027 mv obj/mb_mgr_aes128_cbcs_1_9_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes128_cbcs_1_9_submit_sse_no_aesni.o
00:00:35.027 nasm -MD obj/sha256_ni_x2_sse.d -MT obj/sha256_ni_x2_sse.o -o obj/sha256_ni_x2_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha256_ni_x2_sse.asm
00:00:35.027 nasm -MD obj/zuc_sse.d -MT obj/zuc_sse.o -o obj/zuc_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/zuc_sse.asm
00:00:35.027 mv obj/aes_cbc_enc_256_x4.o.tmp obj/aes_cbc_enc_256_x4.o
00:00:35.027 nasm -MD obj/zuc_sse_gfni.d -MT obj/zuc_sse_gfni.o -o obj/zuc_sse_gfni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/zuc_sse_gfni.asm
00:00:35.027 nasm -MD obj/mb_mgr_aes_flush_sse.d -MT obj/mb_mgr_aes_flush_sse.o -o obj/mb_mgr_aes_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_flush_sse.asm
00:00:35.027 nasm -MD obj/mb_mgr_aes_submit_sse.d -MT obj/mb_mgr_aes_submit_sse.o -o obj/mb_mgr_aes_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_submit_sse.asm
00:00:35.027 nasm -MD obj/mb_mgr_aes192_flush_sse.d -MT obj/mb_mgr_aes192_flush_sse.o -o obj/mb_mgr_aes192_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes192_flush_sse.asm
00:00:35.027 ld -r -z ibt -z shstk -o obj/aes_cbc_enc_192_x8_sse.o.tmp obj/aes_cbc_enc_192_x8_sse.o
00:00:35.027 nasm -MD obj/mb_mgr_aes192_submit_sse.d -MT obj/mb_mgr_aes192_submit_sse.o -o obj/mb_mgr_aes192_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes192_submit_sse.asm
00:00:35.027 nasm -MD obj/mb_mgr_aes256_flush_sse.d -MT obj/mb_mgr_aes256_flush_sse.o -o obj/mb_mgr_aes256_flush_sse.o -Werror -felf64 -Xgnu -gdwarf
-DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes256_flush_sse.asm 00:00:35.027 nasm -MD obj/mb_mgr_aes256_submit_sse.d -MT obj/mb_mgr_aes256_submit_sse.o -o obj/mb_mgr_aes256_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes256_submit_sse.asm 00:00:35.027 nasm -MD obj/mb_mgr_aes_flush_sse_x8.d -MT obj/mb_mgr_aes_flush_sse_x8.o -o obj/mb_mgr_aes_flush_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_flush_sse_x8.asm 00:00:35.027 mv obj/aes_cbc_enc_192_x8_sse.o.tmp obj/aes_cbc_enc_192_x8_sse.o 00:00:35.027 nasm -MD obj/mb_mgr_aes_submit_sse_x8.d -MT obj/mb_mgr_aes_submit_sse_x8.o -o obj/mb_mgr_aes_submit_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_submit_sse_x8.asm 00:00:35.027 ld -r -z ibt -z shstk -o obj/aes_cfb_sse.o.tmp obj/aes_cfb_sse.o 00:00:35.027 nasm -MD obj/mb_mgr_aes192_flush_sse_x8.d -MT obj/mb_mgr_aes192_flush_sse_x8.o -o obj/mb_mgr_aes192_flush_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes192_flush_sse_x8.asm 00:00:35.027 nasm -MD obj/mb_mgr_aes192_submit_sse_x8.d -MT obj/mb_mgr_aes192_submit_sse_x8.o -o obj/mb_mgr_aes192_submit_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes192_submit_sse_x8.asm 00:00:35.027 ld -r -z ibt -z shstk -o obj/aes_cbc_enc_128_x8_sse.o.tmp obj/aes_cbc_enc_128_x8_sse.o 00:00:35.027 ld -r -z ibt -z shstk -o obj/aes_cbc_enc_256_x8_sse.o.tmp obj/aes_cbc_enc_256_x8_sse.o 00:00:35.027 ld -r -z ibt -z shstk -o obj/sha256_one_block_sse.o.tmp obj/sha256_one_block_sse.o 00:00:35.027 mv obj/aes_cfb_sse.o.tmp obj/aes_cfb_sse.o 00:00:35.027 nasm -MD obj/mb_mgr_aes256_flush_sse_x8.d -MT obj/mb_mgr_aes256_flush_sse_x8.o -o obj/mb_mgr_aes256_flush_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes256_flush_sse_x8.asm 00:00:35.027 nasm -MD obj/mb_mgr_aes256_submit_sse_x8.d -MT obj/mb_mgr_aes256_submit_sse_x8.o -o obj/mb_mgr_aes256_submit_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes256_submit_sse_x8.asm 00:00:35.027 ld -r -z ibt -z shstk -o obj/sha384_one_block_sse.o.tmp obj/sha384_one_block_sse.o 00:00:35.027 mv obj/aes_cbc_enc_128_x8_sse.o.tmp obj/aes_cbc_enc_128_x8_sse.o 00:00:35.027 ld -r -z ibt -z shstk -o obj/aes256_cbc_mac_x4.o.tmp obj/aes256_cbc_mac_x4.o 00:00:35.027 ld -r -z ibt -z shstk -o obj/sha224_one_block_sse.o.tmp obj/sha224_one_block_sse.o 00:00:35.027 ld -r -z ibt -z shstk -o obj/sha512_one_block_sse.o.tmp obj/sha512_one_block_sse.o 00:00:35.027 mv obj/aes_cbc_enc_256_x8_sse.o.tmp obj/aes_cbc_enc_256_x8_sse.o 00:00:35.027 mv obj/sha256_one_block_sse.o.tmp obj/sha256_one_block_sse.o 00:00:35.027 nasm -MD obj/mb_mgr_aes_cmac_submit_flush_sse.d -MT obj/mb_mgr_aes_cmac_submit_flush_sse.o -o obj/mb_mgr_aes_cmac_submit_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_cmac_submit_flush_sse.asm 00:00:35.027 ld -r -z ibt -z shstk -o obj/aes128_cbc_mac_x4.o.tmp obj/aes128_cbc_mac_x4.o 00:00:35.027 ld -r -z ibt -z shstk -o obj/sha1_one_block_sse.o.tmp obj/sha1_one_block_sse.o 00:00:35.027 mv obj/sha384_one_block_sse.o.tmp obj/sha384_one_block_sse.o 00:00:35.027 mv 
obj/aes256_cbc_mac_x4.o.tmp obj/aes256_cbc_mac_x4.o 00:00:35.027 mv obj/sha224_one_block_sse.o.tmp obj/sha224_one_block_sse.o 00:00:35.027 ld -r -z ibt -z shstk -o obj/aes128_cbc_mac_x8_sse.o.tmp obj/aes128_cbc_mac_x8_sse.o 00:00:35.027 ld -r -z ibt -z shstk -o obj/aes256_cbc_mac_x8_sse.o.tmp obj/aes256_cbc_mac_x8_sse.o 00:00:35.027 mv obj/sha512_one_block_sse.o.tmp obj/sha512_one_block_sse.o 00:00:35.027 mv obj/aes128_cbc_mac_x4.o.tmp obj/aes128_cbc_mac_x4.o 00:00:35.027 nasm -MD obj/mb_mgr_aes256_cmac_submit_flush_sse.d -MT obj/mb_mgr_aes256_cmac_submit_flush_sse.o -o obj/mb_mgr_aes256_cmac_submit_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes256_cmac_submit_flush_sse.asm 00:00:35.027 mv obj/sha1_one_block_sse.o.tmp obj/sha1_one_block_sse.o 00:00:35.027 ld -r -z ibt -z shstk -o obj/sha1_ni_x2_sse.o.tmp obj/sha1_ni_x2_sse.o 00:00:35.027 mv obj/aes128_cbc_mac_x8_sse.o.tmp obj/aes128_cbc_mac_x8_sse.o 00:00:35.027 mv obj/aes256_cbc_mac_x8_sse.o.tmp obj/aes256_cbc_mac_x8_sse.o 00:00:35.027 nasm -MD obj/mb_mgr_aes_cmac_submit_flush_sse_x8.d -MT obj/mb_mgr_aes_cmac_submit_flush_sse_x8.o -o obj/mb_mgr_aes_cmac_submit_flush_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_cmac_submit_flush_sse_x8.asm 00:00:35.027 nasm -MD obj/mb_mgr_aes256_cmac_submit_flush_sse_x8.d -MT obj/mb_mgr_aes256_cmac_submit_flush_sse_x8.o -o obj/mb_mgr_aes256_cmac_submit_flush_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes256_cmac_submit_flush_sse_x8.asm 00:00:35.027 nasm -MD obj/mb_mgr_aes_ccm_auth_submit_flush_sse.d -MT obj/mb_mgr_aes_ccm_auth_submit_flush_sse.o -o obj/mb_mgr_aes_ccm_auth_submit_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_ccm_auth_submit_flush_sse.asm 00:00:35.027 mv obj/sha1_ni_x2_sse.o.tmp obj/sha1_ni_x2_sse.o 00:00:35.027 nasm -MD obj/mb_mgr_aes_ccm_auth_submit_flush_sse_x8.d -MT obj/mb_mgr_aes_ccm_auth_submit_flush_sse_x8.o -o obj/mb_mgr_aes_ccm_auth_submit_flush_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_ccm_auth_submit_flush_sse_x8.asm 00:00:35.027 nasm -MD obj/mb_mgr_aes256_ccm_auth_submit_flush_sse.d -MT obj/mb_mgr_aes256_ccm_auth_submit_flush_sse.o -o obj/mb_mgr_aes256_ccm_auth_submit_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes256_ccm_auth_submit_flush_sse.asm 00:00:35.027 nasm -MD obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_x8.d -MT obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_x8.o -o obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes256_ccm_auth_submit_flush_sse_x8.asm 00:00:35.027 nasm -MD obj/mb_mgr_aes_xcbc_flush_sse.d -MT obj/mb_mgr_aes_xcbc_flush_sse.o -o obj/mb_mgr_aes_xcbc_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_xcbc_flush_sse.asm 00:00:35.027 nasm -MD obj/mb_mgr_aes_xcbc_submit_sse.d -MT obj/mb_mgr_aes_xcbc_submit_sse.o -o obj/mb_mgr_aes_xcbc_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_xcbc_submit_sse.asm 00:00:35.027 nasm -MD obj/mb_mgr_hmac_md5_flush_sse.d -MT 
obj/mb_mgr_hmac_md5_flush_sse.o -o obj/mb_mgr_hmac_md5_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_md5_flush_sse.asm 00:00:35.027 nasm -MD obj/mb_mgr_hmac_md5_submit_sse.d -MT obj/mb_mgr_hmac_md5_submit_sse.o -o obj/mb_mgr_hmac_md5_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_md5_submit_sse.asm 00:00:35.027 nasm -MD obj/mb_mgr_hmac_flush_sse.d -MT obj/mb_mgr_hmac_flush_sse.o -o obj/mb_mgr_hmac_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_flush_sse.asm 00:00:35.027 ld -r -z ibt -z shstk -o obj/sha256_ni_x2_sse.o.tmp obj/sha256_ni_x2_sse.o 00:00:35.027 nasm -MD obj/mb_mgr_hmac_submit_sse.d -MT obj/mb_mgr_hmac_submit_sse.o -o obj/mb_mgr_hmac_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_submit_sse.asm 00:00:35.027 ld -r -z ibt -z shstk -o obj/aes_ecb_by4_sse.o.tmp obj/aes_ecb_by4_sse.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_flush_sse.o.tmp obj/mb_mgr_aes256_flush_sse.o 00:00:35.028 nasm -MD obj/mb_mgr_hmac_sha_224_flush_sse.d -MT obj/mb_mgr_hmac_sha_224_flush_sse.o -o obj/mb_mgr_hmac_sha_224_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_224_flush_sse.asm 00:00:35.028 mv obj/sha256_ni_x2_sse.o.tmp obj/sha256_ni_x2_sse.o 00:00:35.028 nasm -MD obj/mb_mgr_hmac_sha_224_submit_sse.d -MT obj/mb_mgr_hmac_sha_224_submit_sse.o -o obj/mb_mgr_hmac_sha_224_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_224_submit_sse.asm 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_flush_sse.o.tmp obj/mb_mgr_aes192_flush_sse.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_flush_sse_x8.o.tmp obj/mb_mgr_aes_flush_sse_x8.o 00:00:35.028 mv obj/aes_ecb_by4_sse.o.tmp obj/aes_ecb_by4_sse.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_flush_sse.o.tmp obj/mb_mgr_aes_flush_sse.o 00:00:35.028 mv obj/mb_mgr_aes256_flush_sse.o.tmp obj/mb_mgr_aes256_flush_sse.o 00:00:35.028 nasm -MD obj/mb_mgr_hmac_sha_256_flush_sse.d -MT obj/mb_mgr_hmac_sha_256_flush_sse.o -o obj/mb_mgr_hmac_sha_256_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_256_flush_sse.asm 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_submit_sse.o.tmp obj/mb_mgr_aes_submit_sse.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_submit_sse.o.tmp obj/mb_mgr_aes192_submit_sse.o 00:00:35.028 mv obj/mb_mgr_aes192_flush_sse.o.tmp obj/mb_mgr_aes192_flush_sse.o 00:00:35.028 mv obj/mb_mgr_aes_flush_sse_x8.o.tmp obj/mb_mgr_aes_flush_sse_x8.o 00:00:35.028 mv obj/mb_mgr_aes_flush_sse.o.tmp obj/mb_mgr_aes_flush_sse.o 00:00:35.028 nasm -MD obj/mb_mgr_hmac_sha_256_submit_sse.d -MT obj/mb_mgr_hmac_sha_256_submit_sse.o -o obj/mb_mgr_hmac_sha_256_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_256_submit_sse.asm 00:00:35.028 nasm -MD obj/mb_mgr_hmac_sha_384_flush_sse.d -MT obj/mb_mgr_hmac_sha_384_flush_sse.o -o obj/mb_mgr_hmac_sha_384_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_384_flush_sse.asm 00:00:35.028 ld -r -z ibt -z shstk -o 
obj/mb_mgr_aes256_submit_sse.o.tmp obj/mb_mgr_aes256_submit_sse.o 00:00:35.028 mv obj/mb_mgr_aes_submit_sse.o.tmp obj/mb_mgr_aes_submit_sse.o 00:00:35.028 mv obj/mb_mgr_aes192_submit_sse.o.tmp obj/mb_mgr_aes192_submit_sse.o 00:00:35.028 nasm -MD obj/mb_mgr_hmac_sha_384_submit_sse.d -MT obj/mb_mgr_hmac_sha_384_submit_sse.o -o obj/mb_mgr_hmac_sha_384_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_384_submit_sse.asm 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_submit_sse_x8.o.tmp obj/mb_mgr_aes_submit_sse_x8.o 00:00:35.028 nasm -MD obj/mb_mgr_hmac_sha_512_flush_sse.d -MT obj/mb_mgr_hmac_sha_512_flush_sse.o -o obj/mb_mgr_hmac_sha_512_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_512_flush_sse.asm 00:00:35.028 mv obj/mb_mgr_aes256_submit_sse.o.tmp obj/mb_mgr_aes256_submit_sse.o 00:00:35.028 nasm -MD obj/mb_mgr_hmac_sha_512_submit_sse.d -MT obj/mb_mgr_hmac_sha_512_submit_sse.o -o obj/mb_mgr_hmac_sha_512_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_512_submit_sse.asm 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_flush_sse_x8.o.tmp obj/mb_mgr_aes192_flush_sse_x8.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_submit_sse_x8.o.tmp obj/mb_mgr_aes192_submit_sse_x8.o 00:00:35.028 mv obj/mb_mgr_aes_submit_sse_x8.o.tmp obj/mb_mgr_aes_submit_sse_x8.o 00:00:35.028 nasm -MD obj/mb_mgr_hmac_flush_ni_sse.d -MT obj/mb_mgr_hmac_flush_ni_sse.o -o obj/mb_mgr_hmac_flush_ni_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_flush_ni_sse.asm 00:00:35.028 nasm -MD obj/mb_mgr_hmac_submit_ni_sse.d -MT obj/mb_mgr_hmac_submit_ni_sse.o -o obj/mb_mgr_hmac_submit_ni_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_submit_ni_sse.asm 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_flush_sse_x8.o.tmp obj/mb_mgr_aes256_flush_sse_x8.o 00:00:35.028 nasm -MD obj/mb_mgr_hmac_sha_224_flush_ni_sse.d -MT obj/mb_mgr_hmac_sha_224_flush_ni_sse.o -o obj/mb_mgr_hmac_sha_224_flush_ni_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_224_flush_ni_sse.asm 00:00:35.028 mv obj/mb_mgr_aes192_flush_sse_x8.o.tmp obj/mb_mgr_aes192_flush_sse_x8.o 00:00:35.028 mv obj/mb_mgr_aes192_submit_sse_x8.o.tmp obj/mb_mgr_aes192_submit_sse_x8.o 00:00:35.028 nasm -MD obj/mb_mgr_hmac_sha_224_submit_ni_sse.d -MT obj/mb_mgr_hmac_sha_224_submit_ni_sse.o -o obj/mb_mgr_hmac_sha_224_submit_ni_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_224_submit_ni_sse.asm 00:00:35.028 ld -r -z ibt -z shstk -o obj/aes128_cntr_ccm_by8_sse.o.tmp obj/aes128_cntr_ccm_by8_sse.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_submit_sse_x8.o.tmp obj/mb_mgr_aes256_submit_sse_x8.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/aes_xcbc_mac_128_x4.o.tmp obj/aes_xcbc_mac_128_x4.o 00:00:35.028 mv obj/mb_mgr_aes256_flush_sse_x8.o.tmp obj/mb_mgr_aes256_flush_sse_x8.o 00:00:35.028 mv obj/aes128_cntr_ccm_by8_sse.o.tmp obj/aes128_cntr_ccm_by8_sse.o 00:00:35.028 nasm -MD obj/mb_mgr_hmac_sha_256_flush_ni_sse.d -MT obj/mb_mgr_hmac_sha_256_flush_ni_sse.o -o obj/mb_mgr_hmac_sha_256_flush_ni_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ 
-DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_256_flush_ni_sse.asm 00:00:35.028 nasm -MD obj/mb_mgr_hmac_sha_256_submit_ni_sse.d -MT obj/mb_mgr_hmac_sha_256_submit_ni_sse.o -o obj/mb_mgr_hmac_sha_256_submit_ni_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_256_submit_ni_sse.asm 00:00:35.028 mv obj/mb_mgr_aes256_submit_sse_x8.o.tmp obj/mb_mgr_aes256_submit_sse_x8.o 00:00:35.028 mv obj/aes_xcbc_mac_128_x4.o.tmp obj/aes_xcbc_mac_128_x4.o 00:00:35.028 nasm -MD obj/mb_mgr_zuc_submit_flush_sse.d -MT obj/mb_mgr_zuc_submit_flush_sse.o -o obj/mb_mgr_zuc_submit_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_zuc_submit_flush_sse.asm 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_cmac_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes_cmac_submit_flush_sse_no_aesni.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/aes256_cntr_ccm_by8_sse.o.tmp obj/aes256_cntr_ccm_by8_sse.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_md5_flush_sse.o.tmp obj/mb_mgr_hmac_md5_flush_sse.o 00:00:35.028 nasm -MD obj/mb_mgr_zuc_submit_flush_gfni_sse.d -MT obj/mb_mgr_zuc_submit_flush_gfni_sse.o -o obj/mb_mgr_zuc_submit_flush_gfni_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_zuc_submit_flush_gfni_sse.asm 00:00:35.028 ld -r -z ibt -z shstk -o obj/sha512_x2_sse.o.tmp obj/sha512_x2_sse.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_xcbc_flush_sse.o.tmp obj/mb_mgr_aes_xcbc_flush_sse.o 00:00:35.028 mv obj/mb_mgr_aes_cmac_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes_cmac_submit_flush_sse_no_aesni.o 00:00:35.028 mv obj/aes256_cntr_ccm_by8_sse.o.tmp obj/aes256_cntr_ccm_by8_sse.o 00:00:35.028 nasm -MD obj/ethernet_fcs_sse.d -MT obj/ethernet_fcs_sse.o -o obj/ethernet_fcs_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/ethernet_fcs_sse.asm 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_flush_sse.o.tmp obj/mb_mgr_hmac_sha_256_flush_sse.o 00:00:35.028 mv obj/mb_mgr_hmac_md5_flush_sse.o.tmp obj/mb_mgr_hmac_md5_flush_sse.o 00:00:35.028 mv obj/sha512_x2_sse.o.tmp obj/sha512_x2_sse.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_flush_sse.o.tmp obj/mb_mgr_hmac_flush_sse.o 00:00:35.028 mv obj/mb_mgr_aes_xcbc_flush_sse.o.tmp obj/mb_mgr_aes_xcbc_flush_sse.o 00:00:35.028 nasm -MD obj/crc16_x25_sse.d -MT obj/crc16_x25_sse.o -o obj/crc16_x25_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/crc16_x25_sse.asm 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_xcbc_submit_sse.o.tmp obj/mb_mgr_aes_xcbc_submit_sse.o 00:00:35.028 mv obj/mb_mgr_hmac_flush_sse.o.tmp obj/mb_mgr_hmac_flush_sse.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_flush_sse.o.tmp obj/mb_mgr_hmac_sha_224_flush_sse.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/sha1_mult_sse.o.tmp obj/sha1_mult_sse.o 00:00:35.028 nasm -MD obj/crc32_sctp_sse.d -MT obj/crc32_sctp_sse.o -o obj/crc32_sctp_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/crc32_sctp_sse.asm 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_md5_submit_sse.o.tmp obj/mb_mgr_hmac_md5_submit_sse.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/ethernet_fcs_sse.o.tmp obj/ethernet_fcs_sse.o 00:00:35.028 mv obj/mb_mgr_hmac_sha_256_flush_sse.o.tmp obj/mb_mgr_hmac_sha_256_flush_sse.o 00:00:35.028 mv 
obj/mb_mgr_aes_xcbc_submit_sse.o.tmp obj/mb_mgr_aes_xcbc_submit_sse.o 00:00:35.028 mv obj/mb_mgr_hmac_sha_224_flush_sse.o.tmp obj/mb_mgr_hmac_sha_224_flush_sse.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/crc16_x25_sse.o.tmp obj/crc16_x25_sse.o 00:00:35.028 mv obj/sha1_mult_sse.o.tmp obj/sha1_mult_sse.o 00:00:35.028 mv obj/mb_mgr_hmac_md5_submit_sse.o.tmp obj/mb_mgr_hmac_md5_submit_sse.o 00:00:35.028 mv obj/ethernet_fcs_sse.o.tmp obj/ethernet_fcs_sse.o 00:00:35.028 nasm -MD obj/aes_cbcs_1_9_enc_128_x4.d -MT obj/aes_cbcs_1_9_enc_128_x4.o -o obj/aes_cbcs_1_9_enc_128_x4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_cbcs_1_9_enc_128_x4.asm 00:00:35.028 ld -r -z ibt -z shstk -o obj/crc32_sctp_sse.o.tmp obj/crc32_sctp_sse.o 00:00:35.028 mv obj/crc16_x25_sse.o.tmp obj/crc16_x25_sse.o 00:00:35.028 nasm -MD obj/aes128_cbcs_1_9_dec_by4_sse.d -MT obj/aes128_cbcs_1_9_dec_by4_sse.o -o obj/aes128_cbcs_1_9_dec_by4_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes128_cbcs_1_9_dec_by4_sse.asm 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_cmac_submit_flush_sse_x8.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_sse_x8.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_submit_sse.o.tmp obj/mb_mgr_hmac_sha_256_submit_sse.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_flush_ni_sse.o.tmp obj/mb_mgr_hmac_flush_ni_sse.o 00:00:35.028 mv obj/crc32_sctp_sse.o.tmp obj/crc32_sctp_sse.o 00:00:35.028 nasm -MD obj/crc32_refl_by8_sse.d -MT obj/crc32_refl_by8_sse.o -o obj/crc32_refl_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/crc32_refl_by8_sse.asm 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_flush_ni_sse.o.tmp obj/mb_mgr_hmac_sha_224_flush_ni_sse.o 00:00:35.028 mv obj/mb_mgr_aes256_cmac_submit_flush_sse_x8.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_sse_x8.o 00:00:35.028 mv obj/mb_mgr_hmac_sha_256_submit_sse.o.tmp obj/mb_mgr_hmac_sha_256_submit_sse.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_cmac_submit_flush_sse_x8.o.tmp obj/mb_mgr_aes_cmac_submit_flush_sse_x8.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_submit_sse.o.tmp obj/mb_mgr_hmac_submit_sse.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_512_flush_sse.o.tmp obj/mb_mgr_hmac_sha_512_flush_sse.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/aes128_cntr_by8_sse.o.tmp obj/aes128_cntr_by8_sse.o 00:00:35.028 mv obj/mb_mgr_hmac_flush_ni_sse.o.tmp obj/mb_mgr_hmac_flush_ni_sse.o 00:00:35.028 mv obj/mb_mgr_hmac_sha_224_flush_ni_sse.o.tmp obj/mb_mgr_hmac_sha_224_flush_ni_sse.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/aes256_cntr_by8_sse.o.tmp obj/aes256_cntr_by8_sse.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_submit_sse.o.tmp obj/mb_mgr_hmac_sha_224_submit_sse.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_384_submit_sse.o.tmp obj/mb_mgr_hmac_sha_384_submit_sse.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_submit_ni_sse.o.tmp obj/mb_mgr_hmac_submit_ni_sse.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_384_flush_sse.o.tmp obj/mb_mgr_hmac_sha_384_flush_sse.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_flush_ni_sse.o.tmp obj/mb_mgr_hmac_sha_256_flush_ni_sse.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_submit_ni_sse.o.tmp obj/mb_mgr_hmac_sha_256_submit_ni_sse.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_submit_ni_sse.o.tmp 
obj/mb_mgr_hmac_sha_224_submit_ni_sse.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_cmac_submit_flush_sse.o.tmp obj/mb_mgr_aes_cmac_submit_flush_sse.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/aes128_cbcs_1_9_dec_by4_sse.o.tmp obj/aes128_cbcs_1_9_dec_by4_sse.o 00:00:35.028 mv obj/mb_mgr_aes_cmac_submit_flush_sse_x8.o.tmp obj/mb_mgr_aes_cmac_submit_flush_sse_x8.o 00:00:35.028 mv obj/mb_mgr_hmac_submit_sse.o.tmp obj/mb_mgr_hmac_submit_sse.o 00:00:35.028 mv obj/mb_mgr_hmac_sha_512_flush_sse.o.tmp obj/mb_mgr_hmac_sha_512_flush_sse.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_cmac_submit_flush_sse.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_sse.o 00:00:35.028 mv obj/aes128_cntr_by8_sse.o.tmp obj/aes128_cntr_by8_sse.o 00:00:35.028 ld -r -z ibt -z shstk -o obj/pon_sse.o.tmp obj/pon_sse.o 00:00:35.028 mv obj/aes256_cntr_by8_sse.o.tmp obj/aes256_cntr_by8_sse.o 00:00:35.028 mv obj/mb_mgr_hmac_sha_224_submit_sse.o.tmp obj/mb_mgr_hmac_sha_224_submit_sse.o 00:00:35.028 mv obj/mb_mgr_hmac_sha_384_submit_sse.o.tmp obj/mb_mgr_hmac_sha_384_submit_sse.o 00:00:35.028 mv obj/mb_mgr_hmac_submit_ni_sse.o.tmp obj/mb_mgr_hmac_submit_ni_sse.o 00:00:35.029 ld -r -z ibt -z shstk -o obj/aes192_cntr_by8_sse.o.tmp obj/aes192_cntr_by8_sse.o 00:00:35.029 ld -r -z ibt -z shstk -o obj/aes_cbcs_1_9_enc_128_x4.o.tmp obj/aes_cbcs_1_9_enc_128_x4.o 00:00:35.029 ld -r -z ibt -z shstk -o obj/crc32_refl_by8_sse_no_aesni.o.tmp obj/crc32_refl_by8_sse_no_aesni.o 00:00:35.029 mv obj/mb_mgr_hmac_sha_384_flush_sse.o.tmp obj/mb_mgr_hmac_sha_384_flush_sse.o 00:00:35.029 mv obj/mb_mgr_hmac_sha_256_flush_ni_sse.o.tmp obj/mb_mgr_hmac_sha_256_flush_ni_sse.o 00:00:35.029 mv obj/mb_mgr_hmac_sha_256_submit_ni_sse.o.tmp obj/mb_mgr_hmac_sha_256_submit_ni_sse.o 00:00:35.029 mv obj/mb_mgr_hmac_sha_224_submit_ni_sse.o.tmp obj/mb_mgr_hmac_sha_224_submit_ni_sse.o 00:00:35.029 mv obj/mb_mgr_aes_cmac_submit_flush_sse.o.tmp obj/mb_mgr_aes_cmac_submit_flush_sse.o 00:00:35.029 mv obj/aes128_cbcs_1_9_dec_by4_sse.o.tmp obj/aes128_cbcs_1_9_dec_by4_sse.o 00:00:35.029 ld -r -z ibt -z shstk -o obj/crc32_refl_by8_sse.o.tmp obj/crc32_refl_by8_sse.o 00:00:35.029 mv obj/mb_mgr_aes256_cmac_submit_flush_sse.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_sse.o 00:00:35.029 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_ccm_auth_submit_flush_sse.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_sse.o 00:00:35.029 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_512_submit_sse.o.tmp obj/mb_mgr_hmac_sha_512_submit_sse.o 00:00:35.029 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_ccm_auth_submit_flush_sse.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_sse.o 00:00:35.029 mv obj/pon_sse.o.tmp obj/pon_sse.o 00:00:35.029 mv obj/aes192_cntr_by8_sse.o.tmp obj/aes192_cntr_by8_sse.o 00:00:35.029 mv obj/aes_cbcs_1_9_enc_128_x4.o.tmp obj/aes_cbcs_1_9_enc_128_x4.o 00:00:35.029 mv obj/crc32_refl_by8_sse_no_aesni.o.tmp obj/crc32_refl_by8_sse_no_aesni.o 00:00:35.029 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_no_aesni.o 00:00:35.029 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_ccm_auth_submit_flush_sse_x8.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_sse_x8.o 00:00:35.029 mv obj/crc32_refl_by8_sse.o.tmp obj/crc32_refl_by8_sse.o 00:00:35.029 mv obj/mb_mgr_aes_ccm_auth_submit_flush_sse.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_sse.o 00:00:35.029 mv obj/mb_mgr_hmac_sha_512_submit_sse.o.tmp obj/mb_mgr_hmac_sha_512_submit_sse.o 00:00:35.029 mv obj/mb_mgr_aes256_ccm_auth_submit_flush_sse.o.tmp 
obj/mb_mgr_aes256_ccm_auth_submit_flush_sse.o 00:00:35.029 mv obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_no_aesni.o 00:00:35.029 mv obj/mb_mgr_aes_ccm_auth_submit_flush_sse_x8.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_sse_x8.o 00:00:35.029 nasm -MD obj/crc32_by8_sse.d -MT obj/crc32_by8_sse.o -o obj/crc32_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/crc32_by8_sse.asm 00:00:35.029 nasm -MD obj/crc32_lte_sse.d -MT obj/crc32_lte_sse.o -o obj/crc32_lte_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/crc32_lte_sse.asm 00:00:35.029 nasm -MD obj/crc32_fp_sse.d -MT obj/crc32_fp_sse.o -o obj/crc32_fp_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/crc32_fp_sse.asm 00:00:35.029 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_ccm_auth_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_sse_no_aesni.o 00:00:35.029 nasm -MD obj/crc32_iuup_sse.d -MT obj/crc32_iuup_sse.o -o obj/crc32_iuup_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/crc32_iuup_sse.asm 00:00:35.029 nasm -MD obj/crc32_wimax_sse.d -MT obj/crc32_wimax_sse.o -o obj/crc32_wimax_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/crc32_wimax_sse.asm 00:00:35.029 nasm -MD obj/chacha20_sse.d -MT obj/chacha20_sse.o -o obj/chacha20_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/chacha20_sse.asm 00:00:35.029 ld -r -z ibt -z shstk -o obj/crc32_lte_sse.o.tmp obj/crc32_lte_sse.o 00:00:35.029 mv obj/mb_mgr_aes_ccm_auth_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_sse_no_aesni.o 00:00:35.029 ld -r -z ibt -z shstk -o obj/crc32_fp_sse.o.tmp obj/crc32_fp_sse.o 00:00:35.029 nasm -MD obj/memcpy_sse.d -MT obj/memcpy_sse.o -o obj/memcpy_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/memcpy_sse.asm 00:00:35.029 ld -r -z ibt -z shstk -o obj/crc32_by8_sse.o.tmp obj/crc32_by8_sse.o 00:00:35.029 mv obj/crc32_lte_sse.o.tmp obj/crc32_lte_sse.o 00:00:35.029 nasm -MD obj/gcm128_sse.d -MT obj/gcm128_sse.o -o obj/gcm128_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/gcm128_sse.asm 00:00:35.029 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_x8.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_x8.o 00:00:35.029 ld -r -z ibt -z shstk -o obj/crc32_iuup_sse.o.tmp obj/crc32_iuup_sse.o 00:00:35.029 ld -r -z ibt -z shstk -o obj/crc32_wimax_sse.o.tmp obj/crc32_wimax_sse.o 00:00:35.029 mv obj/crc32_fp_sse.o.tmp obj/crc32_fp_sse.o 00:00:35.029 ld -r -z ibt -z shstk -o obj/memcpy_sse.o.tmp obj/memcpy_sse.o 00:00:35.029 mv obj/crc32_by8_sse.o.tmp obj/crc32_by8_sse.o 00:00:35.029 mv obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_x8.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_x8.o 00:00:35.029 mv obj/crc32_iuup_sse.o.tmp obj/crc32_iuup_sse.o 00:00:35.029 nasm -MD obj/gcm192_sse.d -MT obj/gcm192_sse.o -o obj/gcm192_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/gcm192_sse.asm 00:00:35.029 mv obj/crc32_wimax_sse.o.tmp obj/crc32_wimax_sse.o 00:00:35.029 mv obj/memcpy_sse.o.tmp obj/memcpy_sse.o 00:00:35.029 nasm -MD obj/gcm256_sse.d -MT obj/gcm256_sse.o -o 
obj/gcm256_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/gcm256_sse.asm 00:00:35.029 nasm -MD obj/aes_cbc_enc_128_x8.d -MT obj/aes_cbc_enc_128_x8.o -o obj/aes_cbc_enc_128_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes_cbc_enc_128_x8.asm 00:00:35.029 nasm -MD obj/aes_cbc_enc_192_x8.d -MT obj/aes_cbc_enc_192_x8.o -o obj/aes_cbc_enc_192_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes_cbc_enc_192_x8.asm 00:00:35.029 nasm -MD obj/aes_cbc_enc_256_x8.d -MT obj/aes_cbc_enc_256_x8.o -o obj/aes_cbc_enc_256_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes_cbc_enc_256_x8.asm 00:00:35.029 nasm -MD obj/aes128_cbc_dec_by8_avx.d -MT obj/aes128_cbc_dec_by8_avx.o -o obj/aes128_cbc_dec_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes128_cbc_dec_by8_avx.asm 00:00:35.029 nasm -MD obj/aes192_cbc_dec_by8_avx.d -MT obj/aes192_cbc_dec_by8_avx.o -o obj/aes192_cbc_dec_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes192_cbc_dec_by8_avx.asm 00:00:35.029 nasm -MD obj/aes256_cbc_dec_by8_avx.d -MT obj/aes256_cbc_dec_by8_avx.o -o obj/aes256_cbc_dec_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes256_cbc_dec_by8_avx.asm 00:00:35.029 nasm -MD obj/pon_avx.d -MT obj/pon_avx.o -o obj/pon_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/pon_avx.asm 00:00:35.029 nasm -MD obj/aes128_cntr_by8_avx.d -MT obj/aes128_cntr_by8_avx.o -o obj/aes128_cntr_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes128_cntr_by8_avx.asm 00:00:35.296 ld -r -z ibt -z shstk -o obj/md5_x4x2_sse.o.tmp obj/md5_x4x2_sse.o 00:00:35.296 nasm -MD obj/aes192_cntr_by8_avx.d -MT obj/aes192_cntr_by8_avx.o -o obj/aes192_cntr_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes192_cntr_by8_avx.asm 00:00:35.296 nasm -MD obj/aes256_cntr_by8_avx.d -MT obj/aes256_cntr_by8_avx.o -o obj/aes256_cntr_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes256_cntr_by8_avx.asm 00:00:35.296 mv obj/md5_x4x2_sse.o.tmp obj/md5_x4x2_sse.o 00:00:35.296 ld -r -z ibt -z shstk -o obj/aes_cbc_enc_128_x4_no_aesni.o.tmp obj/aes_cbc_enc_128_x4_no_aesni.o 00:00:35.296 nasm -MD obj/aes128_cntr_ccm_by8_avx.d -MT obj/aes128_cntr_ccm_by8_avx.o -o obj/aes128_cntr_ccm_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes128_cntr_ccm_by8_avx.asm 00:00:35.296 ld -r -z ibt -z shstk -o obj/sha_256_mult_sse.o.tmp obj/sha_256_mult_sse.o 00:00:35.296 nasm -MD obj/aes256_cntr_ccm_by8_avx.d -MT obj/aes256_cntr_ccm_by8_avx.o -o obj/aes256_cntr_ccm_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes256_cntr_ccm_by8_avx.asm 00:00:35.296 mv obj/aes_cbc_enc_128_x4_no_aesni.o.tmp obj/aes_cbc_enc_128_x4_no_aesni.o 00:00:35.296 nasm -MD obj/aes_ecb_by4_avx.d -MT obj/aes_ecb_by4_avx.o -o obj/aes_ecb_by4_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes_ecb_by4_avx.asm 00:00:35.296 mv 
obj/sha_256_mult_sse.o.tmp obj/sha_256_mult_sse.o 00:00:35.296 nasm -MD obj/aes_cfb_avx.d -MT obj/aes_cfb_avx.o -o obj/aes_cfb_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes_cfb_avx.asm 00:00:35.296 ld -r -z ibt -z shstk -o obj/aes_cbc_enc_192_x8.o.tmp obj/aes_cbc_enc_192_x8.o 00:00:35.296 ld -r -z ibt -z shstk -o obj/aes128_cbc_dec_by8_avx.o.tmp obj/aes128_cbc_dec_by8_avx.o 00:00:35.296 ld -r -z ibt -z shstk -o obj/aes_cbc_enc_128_x8.o.tmp obj/aes_cbc_enc_128_x8.o 00:00:35.296 nasm -MD obj/aes128_cbc_mac_x8.d -MT obj/aes128_cbc_mac_x8.o -o obj/aes128_cbc_mac_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes128_cbc_mac_x8.asm 00:00:35.296 ld -r -z ibt -z shstk -o obj/mb_mgr_zuc_submit_flush_sse.o.tmp obj/mb_mgr_zuc_submit_flush_sse.o 00:00:35.296 ld -r -z ibt -z shstk -o obj/aes_cbc_enc_256_x4_no_aesni.o.tmp obj/aes_cbc_enc_256_x4_no_aesni.o 00:00:35.296 mv obj/aes_cbc_enc_192_x8.o.tmp obj/aes_cbc_enc_192_x8.o 00:00:35.296 mv obj/aes128_cbc_dec_by8_avx.o.tmp obj/aes128_cbc_dec_by8_avx.o 00:00:35.296 ld -r -z ibt -z shstk -o obj/aes_cbc_enc_256_x8.o.tmp obj/aes_cbc_enc_256_x8.o 00:00:35.296 ld -r -z ibt -z shstk -o obj/crc32_by8_sse_no_aesni.o.tmp obj/crc32_by8_sse_no_aesni.o 00:00:35.296 mv obj/aes_cbc_enc_128_x8.o.tmp obj/aes_cbc_enc_128_x8.o 00:00:35.296 mv obj/mb_mgr_zuc_submit_flush_sse.o.tmp obj/mb_mgr_zuc_submit_flush_sse.o 00:00:35.296 mv obj/aes_cbc_enc_256_x4_no_aesni.o.tmp obj/aes_cbc_enc_256_x4_no_aesni.o 00:00:35.296 nasm -MD obj/aes256_cbc_mac_x8.d -MT obj/aes256_cbc_mac_x8.o -o obj/aes256_cbc_mac_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes256_cbc_mac_x8.asm 00:00:35.296 mv obj/aes_cbc_enc_256_x8.o.tmp obj/aes_cbc_enc_256_x8.o 00:00:35.296 ld -r -z ibt -z shstk -o obj/mb_mgr_zuc_submit_flush_gfni_sse.o.tmp obj/mb_mgr_zuc_submit_flush_gfni_sse.o 00:00:35.296 mv obj/crc32_by8_sse_no_aesni.o.tmp obj/crc32_by8_sse_no_aesni.o 00:00:35.296 nasm -MD obj/aes_xcbc_mac_128_x8.d -MT obj/aes_xcbc_mac_128_x8.o -o obj/aes_xcbc_mac_128_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes_xcbc_mac_128_x8.asm 00:00:35.296 nasm -MD obj/md5_x4x2_avx.d -MT obj/md5_x4x2_avx.o -o obj/md5_x4x2_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/md5_x4x2_avx.asm 00:00:35.296 ld -r -z ibt -z shstk -o obj/aes_cfb_avx.o.tmp obj/aes_cfb_avx.o 00:00:35.296 mv obj/mb_mgr_zuc_submit_flush_gfni_sse.o.tmp obj/mb_mgr_zuc_submit_flush_gfni_sse.o 00:00:35.296 nasm -MD obj/sha1_mult_avx.d -MT obj/sha1_mult_avx.o -o obj/sha1_mult_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/sha1_mult_avx.asm 00:00:35.296 mv obj/aes_cfb_avx.o.tmp obj/aes_cfb_avx.o 00:00:35.296 nasm -MD obj/sha1_one_block_avx.d -MT obj/sha1_one_block_avx.o -o obj/sha1_one_block_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/sha1_one_block_avx.asm 00:00:35.297 ld -r -z ibt -z shstk -o obj/aes192_cbc_dec_by8_avx.o.tmp obj/aes192_cbc_dec_by8_avx.o 00:00:35.297 nasm -MD obj/sha224_one_block_avx.d -MT obj/sha224_one_block_avx.o -o obj/sha224_one_block_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/sha224_one_block_avx.asm 00:00:35.297 nasm -MD obj/sha256_one_block_avx.d -MT obj/sha256_one_block_avx.o -o 
obj/sha256_one_block_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/sha256_one_block_avx.asm 00:00:35.297 mv obj/aes192_cbc_dec_by8_avx.o.tmp obj/aes192_cbc_dec_by8_avx.o 00:00:35.297 nasm -MD obj/sha_256_mult_avx.d -MT obj/sha_256_mult_avx.o -o obj/sha_256_mult_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/sha_256_mult_avx.asm 00:00:35.297 nasm -MD obj/sha384_one_block_avx.d -MT obj/sha384_one_block_avx.o -o obj/sha384_one_block_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/sha384_one_block_avx.asm 00:00:35.297 nasm -MD obj/sha512_one_block_avx.d -MT obj/sha512_one_block_avx.o -o obj/sha512_one_block_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/sha512_one_block_avx.asm 00:00:35.297 ld -r -z ibt -z shstk -o obj/aes256_cbc_mac_x8.o.tmp obj/aes256_cbc_mac_x8.o 00:00:35.297 nasm -MD obj/sha512_x2_avx.d -MT obj/sha512_x2_avx.o -o obj/sha512_x2_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/sha512_x2_avx.asm 00:00:35.297 ld -r -z ibt -z shstk -o obj/aes_cbcs_1_9_enc_128_x4_no_aesni.o.tmp obj/aes_cbcs_1_9_enc_128_x4_no_aesni.o 00:00:35.297 mv obj/aes256_cbc_mac_x8.o.tmp obj/aes256_cbc_mac_x8.o 00:00:35.297 nasm -MD obj/zuc_avx.d -MT obj/zuc_avx.o -o obj/zuc_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/zuc_avx.asm 00:00:35.297 ld -r -z ibt -z shstk -o obj/aes256_cbc_dec_by8_avx.o.tmp obj/aes256_cbc_dec_by8_avx.o 00:00:35.297 mv obj/aes_cbcs_1_9_enc_128_x4_no_aesni.o.tmp obj/aes_cbcs_1_9_enc_128_x4_no_aesni.o 00:00:35.297 nasm -MD obj/mb_mgr_aes_flush_avx.d -MT obj/mb_mgr_aes_flush_avx.o -o obj/mb_mgr_aes_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes_flush_avx.asm 00:00:35.297 ld -r -z ibt -z shstk -o obj/aes_xcbc_mac_128_x4_no_aesni.o.tmp obj/aes_xcbc_mac_128_x4_no_aesni.o 00:00:35.297 ld -r -z ibt -z shstk -o obj/aes128_cbc_mac_x8.o.tmp obj/aes128_cbc_mac_x8.o 00:00:35.297 mv obj/aes256_cbc_dec_by8_avx.o.tmp obj/aes256_cbc_dec_by8_avx.o 00:00:35.297 nasm -MD obj/mb_mgr_aes_submit_avx.d -MT obj/mb_mgr_aes_submit_avx.o -o obj/mb_mgr_aes_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes_submit_avx.asm 00:00:35.297 ld -r -z ibt -z shstk -o obj/aes128_cntr_ccm_by8_avx.o.tmp obj/aes128_cntr_ccm_by8_avx.o 00:00:35.297 ld -r -z ibt -z shstk -o obj/sha1_one_block_avx.o.tmp obj/sha1_one_block_avx.o 00:00:35.297 ld -r -z ibt -z shstk -o obj/sha224_one_block_avx.o.tmp obj/sha224_one_block_avx.o 00:00:35.297 mv obj/aes_xcbc_mac_128_x4_no_aesni.o.tmp obj/aes_xcbc_mac_128_x4_no_aesni.o 00:00:35.297 mv obj/aes128_cbc_mac_x8.o.tmp obj/aes128_cbc_mac_x8.o 00:00:35.297 nasm -MD obj/mb_mgr_aes192_flush_avx.d -MT obj/mb_mgr_aes192_flush_avx.o -o obj/mb_mgr_aes192_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes192_flush_avx.asm 00:00:35.297 ld -r -z ibt -z shstk -o obj/sha256_one_block_avx.o.tmp obj/sha256_one_block_avx.o 00:00:35.297 mv obj/aes128_cntr_ccm_by8_avx.o.tmp obj/aes128_cntr_ccm_by8_avx.o 00:00:35.297 mv obj/sha1_one_block_avx.o.tmp obj/sha1_one_block_avx.o 00:00:35.297 mv obj/sha224_one_block_avx.o.tmp obj/sha224_one_block_avx.o 00:00:35.297 ld -r 
-z ibt -z shstk -o obj/sha512_one_block_avx.o.tmp obj/sha512_one_block_avx.o 00:00:35.297 nasm -MD obj/mb_mgr_aes192_submit_avx.d -MT obj/mb_mgr_aes192_submit_avx.o -o obj/mb_mgr_aes192_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes192_submit_avx.asm 00:00:35.297 ld -r -z ibt -z shstk -o obj/aes256_cntr_ccm_by8_avx.o.tmp obj/aes256_cntr_ccm_by8_avx.o 00:00:35.297 ld -r -z ibt -z shstk -o obj/aes_xcbc_mac_128_x8.o.tmp obj/aes_xcbc_mac_128_x8.o 00:00:35.297 ld -r -z ibt -z shstk -o obj/sha384_one_block_avx.o.tmp obj/sha384_one_block_avx.o 00:00:35.297 mv obj/sha256_one_block_avx.o.tmp obj/sha256_one_block_avx.o 00:00:35.297 mv obj/sha512_one_block_avx.o.tmp obj/sha512_one_block_avx.o 00:00:35.297 nasm -MD obj/mb_mgr_aes256_flush_avx.d -MT obj/mb_mgr_aes256_flush_avx.o -o obj/mb_mgr_aes256_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes256_flush_avx.asm 00:00:35.297 mv obj/aes256_cntr_ccm_by8_avx.o.tmp obj/aes256_cntr_ccm_by8_avx.o 00:00:35.297 mv obj/aes_xcbc_mac_128_x8.o.tmp obj/aes_xcbc_mac_128_x8.o 00:00:35.297 mv obj/sha384_one_block_avx.o.tmp obj/sha384_one_block_avx.o 00:00:35.297 nasm -MD obj/mb_mgr_aes256_submit_avx.d -MT obj/mb_mgr_aes256_submit_avx.o -o obj/mb_mgr_aes256_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes256_submit_avx.asm 00:00:35.297 nasm -MD obj/mb_mgr_aes_cmac_submit_flush_avx.d -MT obj/mb_mgr_aes_cmac_submit_flush_avx.o -o obj/mb_mgr_aes_cmac_submit_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes_cmac_submit_flush_avx.asm 00:00:35.297 nasm -MD obj/mb_mgr_aes256_cmac_submit_flush_avx.d -MT obj/mb_mgr_aes256_cmac_submit_flush_avx.o -o obj/mb_mgr_aes256_cmac_submit_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes256_cmac_submit_flush_avx.asm 00:00:35.297 ld -r -z ibt -z shstk -o obj/sha1_mult_avx.o.tmp obj/sha1_mult_avx.o 00:00:35.297 nasm -MD obj/mb_mgr_aes_ccm_auth_submit_flush_avx.d -MT obj/mb_mgr_aes_ccm_auth_submit_flush_avx.o -o obj/mb_mgr_aes_ccm_auth_submit_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes_ccm_auth_submit_flush_avx.asm 00:00:35.297 mv obj/sha1_mult_avx.o.tmp obj/sha1_mult_avx.o 00:00:35.297 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_flush_avx.o.tmp obj/mb_mgr_aes_flush_avx.o 00:00:35.297 nasm -MD obj/mb_mgr_aes256_ccm_auth_submit_flush_avx.d -MT obj/mb_mgr_aes256_ccm_auth_submit_flush_avx.o -o obj/mb_mgr_aes256_ccm_auth_submit_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes256_ccm_auth_submit_flush_avx.asm 00:00:35.297 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_submit_avx.o.tmp obj/mb_mgr_aes_submit_avx.o 00:00:35.297 mv obj/mb_mgr_aes_flush_avx.o.tmp obj/mb_mgr_aes_flush_avx.o 00:00:35.297 nasm -MD obj/mb_mgr_aes_xcbc_flush_avx.d -MT obj/mb_mgr_aes_xcbc_flush_avx.o -o obj/mb_mgr_aes_xcbc_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes_xcbc_flush_avx.asm 00:00:35.297 mv obj/mb_mgr_aes_submit_avx.o.tmp obj/mb_mgr_aes_submit_avx.o 00:00:35.297 ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_flush_avx.o.tmp obj/mb_mgr_aes192_flush_avx.o 00:00:35.297 ld -r -z ibt -z shstk -o 
obj/mb_mgr_aes192_submit_avx.o.tmp obj/mb_mgr_aes192_submit_avx.o 00:00:35.297 nasm -MD obj/mb_mgr_aes_xcbc_submit_avx.d -MT obj/mb_mgr_aes_xcbc_submit_avx.o -o obj/mb_mgr_aes_xcbc_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes_xcbc_submit_avx.asm 00:00:35.297 mv obj/mb_mgr_aes192_flush_avx.o.tmp obj/mb_mgr_aes192_flush_avx.o 00:00:35.297 mv obj/mb_mgr_aes192_submit_avx.o.tmp obj/mb_mgr_aes192_submit_avx.o 00:00:35.297 nasm -MD obj/mb_mgr_hmac_md5_flush_avx.d -MT obj/mb_mgr_hmac_md5_flush_avx.o -o obj/mb_mgr_hmac_md5_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_md5_flush_avx.asm 00:00:35.297 nasm -MD obj/mb_mgr_hmac_md5_submit_avx.d -MT obj/mb_mgr_hmac_md5_submit_avx.o -o obj/mb_mgr_hmac_md5_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_md5_submit_avx.asm 00:00:35.297 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_flush_avx.o.tmp obj/mb_mgr_aes256_flush_avx.o 00:00:35.297 nasm -MD obj/mb_mgr_hmac_flush_avx.d -MT obj/mb_mgr_hmac_flush_avx.o -o obj/mb_mgr_hmac_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_flush_avx.asm 00:00:35.297 ld -r -z ibt -z shstk -o obj/aes_cbc_enc_192_x4_no_aesni.o.tmp obj/aes_cbc_enc_192_x4_no_aesni.o 00:00:35.297 mv obj/mb_mgr_aes256_flush_avx.o.tmp obj/mb_mgr_aes256_flush_avx.o 00:00:35.297 nasm -MD obj/mb_mgr_hmac_submit_avx.d -MT obj/mb_mgr_hmac_submit_avx.o -o obj/mb_mgr_hmac_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_submit_avx.asm 00:00:35.297 nasm -MD obj/mb_mgr_hmac_sha_224_flush_avx.d -MT obj/mb_mgr_hmac_sha_224_flush_avx.o -o obj/mb_mgr_hmac_sha_224_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_sha_224_flush_avx.asm 00:00:35.297 mv obj/aes_cbc_enc_192_x4_no_aesni.o.tmp obj/aes_cbc_enc_192_x4_no_aesni.o 00:00:35.297 nasm -MD obj/mb_mgr_hmac_sha_224_submit_avx.d -MT obj/mb_mgr_hmac_sha_224_submit_avx.o -o obj/mb_mgr_hmac_sha_224_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_sha_224_submit_avx.asm 00:00:35.297 nasm -MD obj/mb_mgr_hmac_sha_256_flush_avx.d -MT obj/mb_mgr_hmac_sha_256_flush_avx.o -o obj/mb_mgr_hmac_sha_256_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_sha_256_flush_avx.asm 00:00:35.297 nasm -MD obj/mb_mgr_hmac_sha_256_submit_avx.d -MT obj/mb_mgr_hmac_sha_256_submit_avx.o -o obj/mb_mgr_hmac_sha_256_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_sha_256_submit_avx.asm 00:00:35.297 nasm -MD obj/mb_mgr_hmac_sha_384_flush_avx.d -MT obj/mb_mgr_hmac_sha_384_flush_avx.o -o obj/mb_mgr_hmac_sha_384_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_sha_384_flush_avx.asm 00:00:35.297 nasm -MD obj/mb_mgr_hmac_sha_384_submit_avx.d -MT obj/mb_mgr_hmac_sha_384_submit_avx.o -o obj/mb_mgr_hmac_sha_384_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_sha_384_submit_avx.asm 00:00:35.297 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_xcbc_flush_avx.o.tmp 
obj/mb_mgr_aes_xcbc_flush_avx.o 00:00:35.297 nasm -MD obj/mb_mgr_hmac_sha_512_flush_avx.d -MT obj/mb_mgr_hmac_sha_512_flush_avx.o -o obj/mb_mgr_hmac_sha_512_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_sha_512_flush_avx.asm 00:00:35.297 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_submit_avx.o.tmp obj/mb_mgr_aes256_submit_avx.o 00:00:35.297 nasm -MD obj/mb_mgr_hmac_sha_512_submit_avx.d -MT obj/mb_mgr_hmac_sha_512_submit_avx.o -o obj/mb_mgr_hmac_sha_512_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_sha_512_submit_avx.asm 00:00:35.297 mv obj/mb_mgr_aes_xcbc_flush_avx.o.tmp obj/mb_mgr_aes_xcbc_flush_avx.o 00:00:35.297 nasm -MD obj/mb_mgr_zuc_submit_flush_avx.d -MT obj/mb_mgr_zuc_submit_flush_avx.o -o obj/mb_mgr_zuc_submit_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_zuc_submit_flush_avx.asm 00:00:35.297 nasm -MD obj/ethernet_fcs_avx.d -MT obj/ethernet_fcs_avx.o -o obj/ethernet_fcs_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/ethernet_fcs_avx.asm 00:00:35.297 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_md5_submit_avx.o.tmp obj/mb_mgr_hmac_md5_submit_avx.o 00:00:35.297 mv obj/mb_mgr_aes256_submit_avx.o.tmp obj/mb_mgr_aes256_submit_avx.o 00:00:35.297 nasm -MD obj/crc16_x25_avx.d -MT obj/crc16_x25_avx.o -o obj/crc16_x25_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/crc16_x25_avx.asm 00:00:35.297 nasm -MD obj/aes_cbcs_1_9_enc_128_x8.d -MT obj/aes_cbcs_1_9_enc_128_x8.o -o obj/aes_cbcs_1_9_enc_128_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes_cbcs_1_9_enc_128_x8.asm 00:00:35.297 mv obj/mb_mgr_hmac_md5_submit_avx.o.tmp obj/mb_mgr_hmac_md5_submit_avx.o 00:00:35.297 nasm -MD obj/aes128_cbcs_1_9_dec_by8_avx.d -MT obj/aes128_cbcs_1_9_dec_by8_avx.o -o obj/aes128_cbcs_1_9_dec_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes128_cbcs_1_9_dec_by8_avx.asm 00:00:35.297 nasm -MD obj/mb_mgr_aes128_cbcs_1_9_submit_avx.d -MT obj/mb_mgr_aes128_cbcs_1_9_submit_avx.o -o obj/mb_mgr_aes128_cbcs_1_9_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes128_cbcs_1_9_submit_avx.asm 00:00:35.297 nasm -MD obj/mb_mgr_aes128_cbcs_1_9_flush_avx.d -MT obj/mb_mgr_aes128_cbcs_1_9_flush_avx.o -o obj/mb_mgr_aes128_cbcs_1_9_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes128_cbcs_1_9_flush_avx.asm 00:00:35.297 ld -r -z ibt -z shstk -o obj/sha_256_mult_avx.o.tmp obj/sha_256_mult_avx.o 00:00:35.297 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_md5_flush_avx.o.tmp obj/mb_mgr_hmac_md5_flush_avx.o 00:00:35.297 nasm -MD obj/crc32_refl_by8_avx.d -MT obj/crc32_refl_by8_avx.o -o obj/crc32_refl_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/crc32_refl_by8_avx.asm 00:00:35.297 nasm -MD obj/crc32_by8_avx.d -MT obj/crc32_by8_avx.o -o obj/crc32_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/crc32_by8_avx.asm 00:00:35.297 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_ccm_auth_submit_flush_avx.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_avx.o 
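By this point the log has moved from sse/ sources to avx/ ones (avx2/ follows just below): each primitive is built once per ISA tier, and many also appear in capability-suffixed variants — judging by the object names, *_x4/*_x8 for 4- versus 8-lane multi-buffer width, *_ni_sse for SHA-NI, *_gfni for GFNI (the ZUC kernels), and *_no_aesni as the fallback for CPUs without AES-NI — so the multi-buffer manager can dispatch to the widest implementation the host CPU supports at run time. A hypothetical spot check that the re-link step described earlier actually tagged an object for CET:

    # Hypothetical check; the path assumes the build tree seen in this log.
    readelf -n obj/memcpy_sse.o
    # expected to show, among the notes:
    #   Properties: x86 feature: IBT, SHSTK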
00:00:35.297 ld -r -z ibt -z shstk -o obj/ethernet_fcs_avx.o.tmp obj/ethernet_fcs_avx.o 00:00:35.298 ld -r -z ibt -z shstk -o obj/crc16_x25_avx.o.tmp obj/crc16_x25_avx.o 00:00:35.298 mv obj/sha_256_mult_avx.o.tmp obj/sha_256_mult_avx.o 00:00:35.298 ld -r -z ibt -z shstk -o obj/sha512_x2_avx.o.tmp obj/sha512_x2_avx.o 00:00:35.298 mv obj/mb_mgr_hmac_md5_flush_avx.o.tmp obj/mb_mgr_hmac_md5_flush_avx.o 00:00:35.298 nasm -MD obj/crc32_sctp_avx.d -MT obj/crc32_sctp_avx.o -o obj/crc32_sctp_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/crc32_sctp_avx.asm 00:00:35.298 mv obj/ethernet_fcs_avx.o.tmp obj/ethernet_fcs_avx.o 00:00:35.298 mv obj/mb_mgr_aes256_ccm_auth_submit_flush_avx.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_avx.o 00:00:35.298 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_xcbc_submit_avx.o.tmp obj/mb_mgr_aes_xcbc_submit_avx.o 00:00:35.298 mv obj/crc16_x25_avx.o.tmp obj/crc16_x25_avx.o 00:00:35.298 nasm -MD obj/crc32_lte_avx.d -MT obj/crc32_lte_avx.o -o obj/crc32_lte_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/crc32_lte_avx.asm 00:00:35.298 mv obj/sha512_x2_avx.o.tmp obj/sha512_x2_avx.o 00:00:35.298 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_flush_avx.o.tmp obj/mb_mgr_hmac_flush_avx.o 00:00:35.298 nasm -MD obj/crc32_fp_avx.d -MT obj/crc32_fp_avx.o -o obj/crc32_fp_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/crc32_fp_avx.asm 00:00:35.298 mv obj/mb_mgr_aes_xcbc_submit_avx.o.tmp obj/mb_mgr_aes_xcbc_submit_avx.o 00:00:35.298 nasm -MD obj/crc32_iuup_avx.d -MT obj/crc32_iuup_avx.o -o obj/crc32_iuup_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/crc32_iuup_avx.asm 00:00:35.298 nasm -MD obj/crc32_wimax_avx.d -MT obj/crc32_wimax_avx.o -o obj/crc32_wimax_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/crc32_wimax_avx.asm 00:00:35.298 ld -r -z ibt -z shstk -o obj/aes_ecb_by4_avx.o.tmp obj/aes_ecb_by4_avx.o 00:00:35.298 mv obj/mb_mgr_hmac_flush_avx.o.tmp obj/mb_mgr_hmac_flush_avx.o 00:00:35.298 ld -r -z ibt -z shstk -o obj/crc32_sctp_avx.o.tmp obj/crc32_sctp_avx.o 00:00:35.298 nasm -MD obj/chacha20_avx.d -MT obj/chacha20_avx.o -o obj/chacha20_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/chacha20_avx.asm 00:00:35.298 ld -r -z ibt -z shstk -o obj/aes256_cbc_mac_x4_no_aesni.o.tmp obj/aes256_cbc_mac_x4_no_aesni.o 00:00:35.298 nasm -MD obj/memcpy_avx.d -MT obj/memcpy_avx.o -o obj/memcpy_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/memcpy_avx.asm 00:00:35.298 ld -r -z ibt -z shstk -o obj/aes_cbcs_1_9_enc_128_x8.o.tmp obj/aes_cbcs_1_9_enc_128_x8.o 00:00:35.298 mv obj/aes_ecb_by4_avx.o.tmp obj/aes_ecb_by4_avx.o 00:00:35.298 ld -r -z ibt -z shstk -o obj/crc32_lte_avx.o.tmp obj/crc32_lte_avx.o 00:00:35.298 mv obj/crc32_sctp_avx.o.tmp obj/crc32_sctp_avx.o 00:00:35.298 nasm -MD obj/gcm128_avx_gen2.d -MT obj/gcm128_avx_gen2.o -o obj/gcm128_avx_gen2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/gcm128_avx_gen2.asm 00:00:35.298 mv obj/aes256_cbc_mac_x4_no_aesni.o.tmp obj/aes256_cbc_mac_x4_no_aesni.o 00:00:35.298 nasm -MD obj/gcm192_avx_gen2.d -MT obj/gcm192_avx_gen2.o -o obj/gcm192_avx_gen2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA 
-DSAFE_PARAM -DSAFE_LOOKUP avx/gcm192_avx_gen2.asm 00:00:35.298 ld -r -z ibt -z shstk -o obj/aes192_cntr_by8_avx.o.tmp obj/aes192_cntr_by8_avx.o 00:00:35.298 ld -r -z ibt -z shstk -o obj/crc32_fp_avx.o.tmp obj/crc32_fp_avx.o 00:00:35.298 ld -r -z ibt -z shstk -o obj/crc32_by8_avx.o.tmp obj/crc32_by8_avx.o 00:00:35.298 ld -r -z ibt -z shstk -o obj/mb_mgr_aes128_cbcs_1_9_submit_avx.o.tmp obj/mb_mgr_aes128_cbcs_1_9_submit_avx.o 00:00:35.298 ld -r -z ibt -z shstk -o obj/crc32_iuup_avx.o.tmp obj/crc32_iuup_avx.o 00:00:35.298 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_flush_avx.o.tmp obj/mb_mgr_hmac_sha_256_flush_avx.o 00:00:35.298 ld -r -z ibt -z shstk -o obj/crc32_wimax_avx.o.tmp obj/crc32_wimax_avx.o 00:00:35.298 mv obj/aes_cbcs_1_9_enc_128_x8.o.tmp obj/aes_cbcs_1_9_enc_128_x8.o 00:00:35.298 mv obj/crc32_lte_avx.o.tmp obj/crc32_lte_avx.o 00:00:35.298 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_flush_avx.o.tmp obj/mb_mgr_hmac_sha_224_flush_avx.o 00:00:35.298 mv obj/aes192_cntr_by8_avx.o.tmp obj/aes192_cntr_by8_avx.o 00:00:35.298 mv obj/crc32_fp_avx.o.tmp obj/crc32_fp_avx.o 00:00:35.298 mv obj/crc32_by8_avx.o.tmp obj/crc32_by8_avx.o 00:00:35.298 mv obj/mb_mgr_aes128_cbcs_1_9_submit_avx.o.tmp obj/mb_mgr_aes128_cbcs_1_9_submit_avx.o 00:00:35.298 mv obj/crc32_iuup_avx.o.tmp obj/crc32_iuup_avx.o 00:00:35.298 mv obj/mb_mgr_hmac_sha_256_flush_avx.o.tmp obj/mb_mgr_hmac_sha_256_flush_avx.o 00:00:35.298 mv obj/crc32_wimax_avx.o.tmp obj/crc32_wimax_avx.o 00:00:35.298 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_submit_avx.o.tmp obj/mb_mgr_hmac_sha_256_submit_avx.o 00:00:35.298 ld -r -z ibt -z shstk -o obj/memcpy_avx.o.tmp obj/memcpy_avx.o 00:00:35.298 ld -r -z ibt -z shstk -o obj/aes128_cntr_by8_avx.o.tmp obj/aes128_cntr_by8_avx.o 00:00:35.298 mv obj/mb_mgr_hmac_sha_224_flush_avx.o.tmp obj/mb_mgr_hmac_sha_224_flush_avx.o 00:00:35.298 ld -r -z ibt -z shstk -o obj/crc32_refl_by8_avx.o.tmp obj/crc32_refl_by8_avx.o 00:00:35.298 nasm -MD obj/gcm256_avx_gen2.d -MT obj/gcm256_avx_gen2.o -o obj/gcm256_avx_gen2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/gcm256_avx_gen2.asm 00:00:35.298 mv obj/mb_mgr_hmac_sha_256_submit_avx.o.tmp obj/mb_mgr_hmac_sha_256_submit_avx.o 00:00:35.298 mv obj/memcpy_avx.o.tmp obj/memcpy_avx.o 00:00:35.298 mv obj/aes128_cntr_by8_avx.o.tmp obj/aes128_cntr_by8_avx.o 00:00:35.298 mv obj/crc32_refl_by8_avx.o.tmp obj/crc32_refl_by8_avx.o 00:00:35.298 nasm -MD obj/md5_x8x2_avx2.d -MT obj/md5_x8x2_avx2.o -o obj/md5_x8x2_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/md5_x8x2_avx2.asm 00:00:35.298 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_submit_avx.o.tmp obj/mb_mgr_hmac_sha_224_submit_avx.o 00:00:35.298 nasm -MD obj/sha1_x8_avx2.d -MT obj/sha1_x8_avx2.o -o obj/sha1_x8_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/sha1_x8_avx2.asm 00:00:35.298 nasm -MD obj/sha256_oct_avx2.d -MT obj/sha256_oct_avx2.o -o obj/sha256_oct_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/sha256_oct_avx2.asm 00:00:35.298 ld -r -z ibt -z shstk -o obj/aes128_cbcs_1_9_dec_by8_avx.o.tmp obj/aes128_cbcs_1_9_dec_by8_avx.o 00:00:35.298 mv obj/mb_mgr_hmac_sha_224_submit_avx.o.tmp obj/mb_mgr_hmac_sha_224_submit_avx.o 00:00:35.298 nasm -MD obj/sha512_x4_avx2.d -MT obj/sha512_x4_avx2.o -o obj/sha512_x4_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA 
-DSAFE_PARAM -DSAFE_LOOKUP avx2/sha512_x4_avx2.asm 00:00:35.298 nasm -MD obj/zuc_avx2.d -MT obj/zuc_avx2.o -o obj/zuc_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/zuc_avx2.asm 00:00:35.298 mv obj/aes128_cbcs_1_9_dec_by8_avx.o.tmp obj/aes128_cbcs_1_9_dec_by8_avx.o 00:00:35.298 nasm -MD obj/mb_mgr_hmac_md5_flush_avx2.d -MT obj/mb_mgr_hmac_md5_flush_avx2.o -o obj/mb_mgr_hmac_md5_flush_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_md5_flush_avx2.asm 00:00:35.298 nasm -MD obj/mb_mgr_hmac_md5_submit_avx2.d -MT obj/mb_mgr_hmac_md5_submit_avx2.o -o obj/mb_mgr_hmac_md5_submit_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_md5_submit_avx2.asm 00:00:35.298 nasm -MD obj/mb_mgr_hmac_flush_avx2.d -MT obj/mb_mgr_hmac_flush_avx2.o -o obj/mb_mgr_hmac_flush_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_flush_avx2.asm 00:00:35.298 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_submit_avx.o.tmp obj/mb_mgr_hmac_submit_avx.o 00:00:35.298 nasm -MD obj/mb_mgr_hmac_submit_avx2.d -MT obj/mb_mgr_hmac_submit_avx2.o -o obj/mb_mgr_hmac_submit_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_submit_avx2.asm 00:00:35.298 nasm -MD obj/mb_mgr_hmac_sha_224_flush_avx2.d -MT obj/mb_mgr_hmac_sha_224_flush_avx2.o -o obj/mb_mgr_hmac_sha_224_flush_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_sha_224_flush_avx2.asm 00:00:35.298 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_512_flush_avx.o.tmp obj/mb_mgr_hmac_sha_512_flush_avx.o 00:00:35.298 nasm -MD obj/mb_mgr_hmac_sha_224_submit_avx2.d -MT obj/mb_mgr_hmac_sha_224_submit_avx2.o -o obj/mb_mgr_hmac_sha_224_submit_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_sha_224_submit_avx2.asm 00:00:35.298 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_ccm_auth_submit_flush_avx.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_avx.o 00:00:35.298 ld -r -z ibt -z shstk -o obj/mb_mgr_aes128_cbcs_1_9_flush_avx.o.tmp obj/mb_mgr_aes128_cbcs_1_9_flush_avx.o 00:00:35.298 mv obj/mb_mgr_hmac_submit_avx.o.tmp obj/mb_mgr_hmac_submit_avx.o 00:00:35.298 nasm -MD obj/mb_mgr_hmac_sha_256_flush_avx2.d -MT obj/mb_mgr_hmac_sha_256_flush_avx2.o -o obj/mb_mgr_hmac_sha_256_flush_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_sha_256_flush_avx2.asm 00:00:35.298 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_cmac_submit_flush_avx.o.tmp obj/mb_mgr_aes_cmac_submit_flush_avx.o 00:00:35.298 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_384_flush_avx.o.tmp obj/mb_mgr_hmac_sha_384_flush_avx.o 00:00:35.298 mv obj/mb_mgr_hmac_sha_512_flush_avx.o.tmp obj/mb_mgr_hmac_sha_512_flush_avx.o 00:00:35.298 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_cmac_submit_flush_avx.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_avx.o 00:00:35.298 mv obj/mb_mgr_aes_ccm_auth_submit_flush_avx.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_avx.o 00:00:35.298 mv obj/mb_mgr_aes128_cbcs_1_9_flush_avx.o.tmp obj/mb_mgr_aes128_cbcs_1_9_flush_avx.o 00:00:35.298 nasm -MD obj/mb_mgr_hmac_sha_256_submit_avx2.d -MT obj/mb_mgr_hmac_sha_256_submit_avx2.o -o obj/mb_mgr_hmac_sha_256_submit_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ 
-DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_sha_256_submit_avx2.asm 00:00:35.298 mv obj/mb_mgr_aes_cmac_submit_flush_avx.o.tmp obj/mb_mgr_aes_cmac_submit_flush_avx.o 00:00:35.298 nasm -MD obj/mb_mgr_hmac_sha_384_flush_avx2.d -MT obj/mb_mgr_hmac_sha_384_flush_avx2.o -o obj/mb_mgr_hmac_sha_384_flush_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_sha_384_flush_avx2.asm 00:00:35.298 mv obj/mb_mgr_hmac_sha_384_flush_avx.o.tmp obj/mb_mgr_hmac_sha_384_flush_avx.o 00:00:35.298 mv obj/mb_mgr_aes256_cmac_submit_flush_avx.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_avx.o 00:00:35.298 nasm -MD obj/mb_mgr_hmac_sha_384_submit_avx2.d -MT obj/mb_mgr_hmac_sha_384_submit_avx2.o -o obj/mb_mgr_hmac_sha_384_submit_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_sha_384_submit_avx2.asm 00:00:35.298 nasm -MD obj/mb_mgr_hmac_sha_512_flush_avx2.d -MT obj/mb_mgr_hmac_sha_512_flush_avx2.o -o obj/mb_mgr_hmac_sha_512_flush_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_sha_512_flush_avx2.asm 00:00:35.298 nasm -MD obj/mb_mgr_hmac_sha_512_submit_avx2.d -MT obj/mb_mgr_hmac_sha_512_submit_avx2.o -o obj/mb_mgr_hmac_sha_512_submit_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_sha_512_submit_avx2.asm 00:00:35.298 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_384_submit_avx.o.tmp obj/mb_mgr_hmac_sha_384_submit_avx.o 00:00:35.298 nasm -MD obj/mb_mgr_zuc_submit_flush_avx2.d -MT obj/mb_mgr_zuc_submit_flush_avx2.o -o obj/mb_mgr_zuc_submit_flush_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_zuc_submit_flush_avx2.asm 00:00:35.298 nasm -MD obj/chacha20_avx2.d -MT obj/chacha20_avx2.o -o obj/chacha20_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/chacha20_avx2.asm 00:00:35.298 nasm -MD obj/gcm128_avx_gen4.d -MT obj/gcm128_avx_gen4.o -o obj/gcm128_avx_gen4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/gcm128_avx_gen4.asm 00:00:35.298 nasm -MD obj/gcm192_avx_gen4.d -MT obj/gcm192_avx_gen4.o -o obj/gcm192_avx_gen4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/gcm192_avx_gen4.asm 00:00:35.298 mv obj/mb_mgr_hmac_sha_384_submit_avx.o.tmp obj/mb_mgr_hmac_sha_384_submit_avx.o 00:00:35.298 nasm -MD obj/gcm256_avx_gen4.d -MT obj/gcm256_avx_gen4.o -o obj/gcm256_avx_gen4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/gcm256_avx_gen4.asm 00:00:35.298 nasm -MD obj/sha1_x16_avx512.d -MT obj/sha1_x16_avx512.o -o obj/sha1_x16_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/sha1_x16_avx512.asm 00:00:35.298 nasm -MD obj/sha256_x16_avx512.d -MT obj/sha256_x16_avx512.o -o obj/sha256_x16_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/sha256_x16_avx512.asm 00:00:35.298 ld -r -z ibt -z shstk -o obj/md5_x4x2_avx.o.tmp obj/md5_x4x2_avx.o 00:00:35.298 ld -r -z ibt -z shstk -o obj/aes256_cntr_by8_avx.o.tmp obj/aes256_cntr_by8_avx.o 00:00:35.298 nasm -MD obj/sha512_x8_avx512.d -MT obj/sha512_x8_avx512.o -o obj/sha512_x8_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX 
-D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/sha512_x8_avx512.asm 00:00:35.298 nasm -MD obj/des_x16_avx512.d -MT obj/des_x16_avx512.o -o obj/des_x16_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/des_x16_avx512.asm 00:00:35.299 nasm -MD obj/cntr_vaes_avx512.d -MT obj/cntr_vaes_avx512.o -o obj/cntr_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/cntr_vaes_avx512.asm 00:00:35.299 nasm -MD obj/cntr_ccm_vaes_avx512.d -MT obj/cntr_ccm_vaes_avx512.o -o obj/cntr_ccm_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/cntr_ccm_vaes_avx512.asm 00:00:35.299 mv obj/md5_x4x2_avx.o.tmp obj/md5_x4x2_avx.o 00:00:35.299 mv obj/aes256_cntr_by8_avx.o.tmp obj/aes256_cntr_by8_avx.o 00:00:35.299 nasm -MD obj/aes_cbc_dec_vaes_avx512.d -MT obj/aes_cbc_dec_vaes_avx512.o -o obj/aes_cbc_dec_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/aes_cbc_dec_vaes_avx512.asm 00:00:35.299 nasm -MD obj/aes_cbc_enc_vaes_avx512.d -MT obj/aes_cbc_enc_vaes_avx512.o -o obj/aes_cbc_enc_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/aes_cbc_enc_vaes_avx512.asm 00:00:35.299 nasm -MD obj/aes_cbcs_enc_vaes_avx512.d -MT obj/aes_cbcs_enc_vaes_avx512.o -o obj/aes_cbcs_enc_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/aes_cbcs_enc_vaes_avx512.asm 00:00:35.299 nasm -MD obj/aes_cbcs_dec_vaes_avx512.d -MT obj/aes_cbcs_dec_vaes_avx512.o -o obj/aes_cbcs_dec_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/aes_cbcs_dec_vaes_avx512.asm 00:00:35.299 nasm -MD obj/aes_docsis_dec_avx512.d -MT obj/aes_docsis_dec_avx512.o -o obj/aes_docsis_dec_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/aes_docsis_dec_avx512.asm 00:00:35.299 nasm -MD obj/aes_docsis_enc_avx512.d -MT obj/aes_docsis_enc_avx512.o -o obj/aes_docsis_enc_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/aes_docsis_enc_avx512.asm 00:00:35.299 nasm -MD obj/aes_docsis_dec_vaes_avx512.d -MT obj/aes_docsis_dec_vaes_avx512.o -o obj/aes_docsis_dec_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/aes_docsis_dec_vaes_avx512.asm 00:00:35.299 nasm -MD obj/aes_docsis_enc_vaes_avx512.d -MT obj/aes_docsis_enc_vaes_avx512.o -o obj/aes_docsis_enc_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/aes_docsis_enc_vaes_avx512.asm 00:00:35.299 nasm -MD obj/zuc_avx512.d -MT obj/zuc_avx512.o -o obj/zuc_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/zuc_avx512.asm 00:00:35.299 nasm -MD obj/mb_mgr_aes_submit_avx512.d -MT obj/mb_mgr_aes_submit_avx512.o -o obj/mb_mgr_aes_submit_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes_submit_avx512.asm 00:00:35.299 nasm -MD obj/mb_mgr_aes_flush_avx512.d -MT obj/mb_mgr_aes_flush_avx512.o -o obj/mb_mgr_aes_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP 
avx512/mb_mgr_aes_flush_avx512.asm 00:00:35.299 nasm -MD obj/mb_mgr_aes192_submit_avx512.d -MT obj/mb_mgr_aes192_submit_avx512.o -o obj/mb_mgr_aes192_submit_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes192_submit_avx512.asm 00:00:35.299 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_md5_flush_avx2.o.tmp obj/mb_mgr_hmac_md5_flush_avx2.o 00:00:35.299 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_md5_submit_avx2.o.tmp obj/mb_mgr_hmac_md5_submit_avx2.o 00:00:35.299 nasm -MD obj/mb_mgr_aes192_flush_avx512.d -MT obj/mb_mgr_aes192_flush_avx512.o -o obj/mb_mgr_aes192_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes192_flush_avx512.asm 00:00:35.299 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_flush_avx2.o.tmp obj/mb_mgr_hmac_sha_256_flush_avx2.o 00:00:35.299 mv obj/mb_mgr_hmac_md5_flush_avx2.o.tmp obj/mb_mgr_hmac_md5_flush_avx2.o 00:00:35.299 mv obj/mb_mgr_hmac_md5_submit_avx2.o.tmp obj/mb_mgr_hmac_md5_submit_avx2.o 00:00:35.299 nasm -MD obj/mb_mgr_aes256_submit_avx512.d -MT obj/mb_mgr_aes256_submit_avx512.o -o obj/mb_mgr_aes256_submit_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes256_submit_avx512.asm 00:00:35.299 mv obj/mb_mgr_hmac_sha_256_flush_avx2.o.tmp obj/mb_mgr_hmac_sha_256_flush_avx2.o 00:00:35.299 nasm -MD obj/mb_mgr_aes256_flush_avx512.d -MT obj/mb_mgr_aes256_flush_avx512.o -o obj/mb_mgr_aes256_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes256_flush_avx512.asm 00:00:35.299 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_flush_avx2.o.tmp obj/mb_mgr_hmac_flush_avx2.o 00:00:35.299 nasm -MD obj/mb_mgr_hmac_flush_avx512.d -MT obj/mb_mgr_hmac_flush_avx512.o -o obj/mb_mgr_hmac_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_flush_avx512.asm 00:00:35.299 ld -r -z ibt -z shstk -o obj/aes128_cbc_dec_by4_sse_no_aesni.o.tmp obj/aes128_cbc_dec_by4_sse_no_aesni.o 00:00:35.299 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_flush_avx2.o.tmp obj/mb_mgr_hmac_sha_224_flush_avx2.o 00:00:35.299 mv obj/mb_mgr_hmac_flush_avx2.o.tmp obj/mb_mgr_hmac_flush_avx2.o 00:00:35.299 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_384_flush_avx2.o.tmp obj/mb_mgr_hmac_sha_384_flush_avx2.o 00:00:35.299 mv obj/aes128_cbc_dec_by4_sse_no_aesni.o.tmp obj/aes128_cbc_dec_by4_sse_no_aesni.o 00:00:35.299 mv obj/mb_mgr_hmac_sha_224_flush_avx2.o.tmp obj/mb_mgr_hmac_sha_224_flush_avx2.o 00:00:35.299 nasm -MD obj/mb_mgr_hmac_submit_avx512.d -MT obj/mb_mgr_hmac_submit_avx512.o -o obj/mb_mgr_hmac_submit_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_submit_avx512.asm 00:00:35.560 ld -r -z ibt -z shstk -o obj/sha1_x8_avx2.o.tmp obj/sha1_x8_avx2.o 00:00:35.560 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_512_flush_avx2.o.tmp obj/mb_mgr_hmac_sha_512_flush_avx2.o 00:00:35.560 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_384_submit_avx2.o.tmp obj/mb_mgr_hmac_sha_384_submit_avx2.o 00:00:35.560 mv obj/mb_mgr_hmac_sha_384_flush_avx2.o.tmp obj/mb_mgr_hmac_sha_384_flush_avx2.o 00:00:35.560 nasm -MD obj/mb_mgr_hmac_sha_224_flush_avx512.d -MT obj/mb_mgr_hmac_sha_224_flush_avx512.o -o obj/mb_mgr_hmac_sha_224_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM 
-DSAFE_LOOKUP avx512/mb_mgr_hmac_sha_224_flush_avx512.asm 00:00:35.560 nasm -MD obj/mb_mgr_hmac_sha_224_submit_avx512.d -MT obj/mb_mgr_hmac_sha_224_submit_avx512.o -o obj/mb_mgr_hmac_sha_224_submit_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_sha_224_submit_avx512.asm 00:00:35.560 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_submit_avx2.o.tmp obj/mb_mgr_hmac_submit_avx2.o 00:00:35.560 mv obj/sha1_x8_avx2.o.tmp obj/sha1_x8_avx2.o 00:00:35.560 mv obj/mb_mgr_hmac_sha_512_flush_avx2.o.tmp obj/mb_mgr_hmac_sha_512_flush_avx2.o 00:00:35.560 mv obj/mb_mgr_hmac_sha_384_submit_avx2.o.tmp obj/mb_mgr_hmac_sha_384_submit_avx2.o 00:00:35.560 ld -r -z ibt -z shstk -o obj/mb_mgr_zuc_submit_flush_avx.o.tmp obj/mb_mgr_zuc_submit_flush_avx.o 00:00:35.560 nasm -MD obj/mb_mgr_hmac_sha_256_flush_avx512.d -MT obj/mb_mgr_hmac_sha_256_flush_avx512.o -o obj/mb_mgr_hmac_sha_256_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_sha_256_flush_avx512.asm 00:00:35.560 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_submit_avx2.o.tmp obj/mb_mgr_hmac_sha_224_submit_avx2.o 00:00:35.560 mv obj/mb_mgr_hmac_submit_avx2.o.tmp obj/mb_mgr_hmac_submit_avx2.o 00:00:35.560 mv obj/mb_mgr_zuc_submit_flush_avx.o.tmp obj/mb_mgr_zuc_submit_flush_avx.o 00:00:35.560 nasm -MD obj/mb_mgr_hmac_sha_256_submit_avx512.d -MT obj/mb_mgr_hmac_sha_256_submit_avx512.o -o obj/mb_mgr_hmac_sha_256_submit_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_sha_256_submit_avx512.asm 00:00:35.560 nasm -MD obj/mb_mgr_hmac_sha_384_flush_avx512.d -MT obj/mb_mgr_hmac_sha_384_flush_avx512.o -o obj/mb_mgr_hmac_sha_384_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_sha_384_flush_avx512.asm 00:00:35.560 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_submit_avx2.o.tmp obj/mb_mgr_hmac_sha_256_submit_avx2.o 00:00:35.560 mv obj/mb_mgr_hmac_sha_224_submit_avx2.o.tmp obj/mb_mgr_hmac_sha_224_submit_avx2.o 00:00:35.561 nasm -MD obj/mb_mgr_hmac_sha_384_submit_avx512.d -MT obj/mb_mgr_hmac_sha_384_submit_avx512.o -o obj/mb_mgr_hmac_sha_384_submit_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_sha_384_submit_avx512.asm 00:00:35.561 nasm -MD obj/mb_mgr_hmac_sha_512_flush_avx512.d -MT obj/mb_mgr_hmac_sha_512_flush_avx512.o -o obj/mb_mgr_hmac_sha_512_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_sha_512_flush_avx512.asm 00:00:35.561 nasm -MD obj/mb_mgr_hmac_sha_512_submit_avx512.d -MT obj/mb_mgr_hmac_sha_512_submit_avx512.o -o obj/mb_mgr_hmac_sha_512_submit_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_sha_512_submit_avx512.asm 00:00:35.561 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_submit_avx512.o.tmp obj/mb_mgr_aes256_submit_avx512.o 00:00:35.561 mv obj/mb_mgr_hmac_sha_256_submit_avx2.o.tmp obj/mb_mgr_hmac_sha_256_submit_avx2.o 00:00:35.561 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_submit_avx512.o.tmp obj/mb_mgr_aes_submit_avx512.o 00:00:35.561 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_512_submit_avx2.o.tmp obj/mb_mgr_hmac_sha_512_submit_avx2.o 00:00:35.561 nasm -MD obj/mb_mgr_des_avx512.d -MT obj/mb_mgr_des_avx512.o -o obj/mb_mgr_des_avx512.o 
-Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_des_avx512.asm 00:00:35.561 ld -r -z ibt -z shstk -o obj/aes_cbcs_enc_vaes_avx512.o.tmp obj/aes_cbcs_enc_vaes_avx512.o 00:00:35.561 ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_submit_avx512.o.tmp obj/mb_mgr_aes192_submit_avx512.o 00:00:35.561 mv obj/mb_mgr_aes256_submit_avx512.o.tmp obj/mb_mgr_aes256_submit_avx512.o 00:00:35.561 mv obj/mb_mgr_aes_submit_avx512.o.tmp obj/mb_mgr_aes_submit_avx512.o 00:00:35.561 mv obj/mb_mgr_hmac_sha_512_submit_avx2.o.tmp obj/mb_mgr_hmac_sha_512_submit_avx2.o 00:00:35.561 nasm -MD obj/mb_mgr_aes_cmac_submit_flush_vaes_avx512.d -MT obj/mb_mgr_aes_cmac_submit_flush_vaes_avx512.o -o obj/mb_mgr_aes_cmac_submit_flush_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes_cmac_submit_flush_vaes_avx512.asm 00:00:35.561 ld -r -z ibt -z shstk -o obj/sha256_oct_avx2.o.tmp obj/sha256_oct_avx2.o 00:00:35.561 ld -r -z ibt -z shstk -o obj/sha512_x4_avx2.o.tmp obj/sha512_x4_avx2.o 00:00:35.561 mv obj/aes_cbcs_enc_vaes_avx512.o.tmp obj/aes_cbcs_enc_vaes_avx512.o 00:00:35.561 mv obj/mb_mgr_aes192_submit_avx512.o.tmp obj/mb_mgr_aes192_submit_avx512.o 00:00:35.561 nasm -MD obj/mb_mgr_aes256_cmac_submit_flush_vaes_avx512.d -MT obj/mb_mgr_aes256_cmac_submit_flush_vaes_avx512.o -o obj/mb_mgr_aes256_cmac_submit_flush_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes256_cmac_submit_flush_vaes_avx512.asm 00:00:35.561 mv obj/sha256_oct_avx2.o.tmp obj/sha256_oct_avx2.o 00:00:35.561 nasm -MD obj/mb_mgr_aes_ccm_auth_submit_flush_vaes_avx512.d -MT obj/mb_mgr_aes_ccm_auth_submit_flush_vaes_avx512.o -o obj/mb_mgr_aes_ccm_auth_submit_flush_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes_ccm_auth_submit_flush_vaes_avx512.asm 00:00:35.561 mv obj/sha512_x4_avx2.o.tmp obj/sha512_x4_avx2.o 00:00:35.561 nasm -MD obj/mb_mgr_aes256_ccm_auth_submit_flush_vaes_avx512.d -MT obj/mb_mgr_aes256_ccm_auth_submit_flush_vaes_avx512.o -o obj/mb_mgr_aes256_ccm_auth_submit_flush_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes256_ccm_auth_submit_flush_vaes_avx512.asm 00:00:35.561 nasm -MD obj/mb_mgr_aes_xcbc_submit_flush_vaes_avx512.d -MT obj/mb_mgr_aes_xcbc_submit_flush_vaes_avx512.o -o obj/mb_mgr_aes_xcbc_submit_flush_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes_xcbc_submit_flush_vaes_avx512.asm 00:00:35.561 nasm -MD obj/mb_mgr_zuc_submit_flush_avx512.d -MT obj/mb_mgr_zuc_submit_flush_avx512.o -o obj/mb_mgr_zuc_submit_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_zuc_submit_flush_avx512.asm 00:00:35.561 nasm -MD obj/mb_mgr_zuc_submit_flush_gfni_avx512.d -MT obj/mb_mgr_zuc_submit_flush_gfni_avx512.o -o obj/mb_mgr_zuc_submit_flush_gfni_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_zuc_submit_flush_gfni_avx512.asm 00:00:35.561 nasm -MD obj/chacha20_avx512.d -MT obj/chacha20_avx512.o -o obj/chacha20_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/chacha20_avx512.asm 00:00:35.561 ld -r -z ibt -z shstk -o 
obj/mb_mgr_hmac_flush_avx512.o.tmp obj/mb_mgr_hmac_flush_avx512.o 00:00:35.561 ld -r -z ibt -z shstk -o obj/pon_avx.o.tmp obj/pon_avx.o 00:00:35.561 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_512_submit_avx.o.tmp obj/mb_mgr_hmac_sha_512_submit_avx.o 00:00:35.561 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_flush_avx512.o.tmp obj/mb_mgr_hmac_sha_224_flush_avx512.o 00:00:35.561 mv obj/mb_mgr_hmac_flush_avx512.o.tmp obj/mb_mgr_hmac_flush_avx512.o 00:00:35.561 nasm -MD obj/poly_avx512.d -MT obj/poly_avx512.o -o obj/poly_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/poly_avx512.asm 00:00:35.561 ld -r -z ibt -z shstk -o obj/sha1_x16_avx512.o.tmp obj/sha1_x16_avx512.o 00:00:35.561 mv obj/pon_avx.o.tmp obj/pon_avx.o 00:00:35.561 mv obj/mb_mgr_hmac_sha_224_flush_avx512.o.tmp obj/mb_mgr_hmac_sha_224_flush_avx512.o 00:00:35.561 nasm -MD obj/poly_fma_avx512.d -MT obj/poly_fma_avx512.o -o obj/poly_fma_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/poly_fma_avx512.asm 00:00:35.561 ld -r -z ibt -z shstk -o obj/aes192_cbc_dec_by4_sse_no_aesni.o.tmp obj/aes192_cbc_dec_by4_sse_no_aesni.o 00:00:35.561 mv obj/mb_mgr_hmac_sha_512_submit_avx.o.tmp obj/mb_mgr_hmac_sha_512_submit_avx.o 00:00:35.561 mv obj/sha1_x16_avx512.o.tmp obj/sha1_x16_avx512.o 00:00:35.561 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_submit_avx512.o.tmp obj/mb_mgr_hmac_submit_avx512.o 00:00:35.561 nasm -MD obj/ethernet_fcs_avx512.d -MT obj/ethernet_fcs_avx512.o -o obj/ethernet_fcs_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/ethernet_fcs_avx512.asm 00:00:35.561 nasm -MD obj/crc16_x25_avx512.d -MT obj/crc16_x25_avx512.o -o obj/crc16_x25_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/crc16_x25_avx512.asm 00:00:35.561 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_flush_avx512.o.tmp obj/mb_mgr_aes_flush_avx512.o 00:00:35.561 mv obj/aes192_cbc_dec_by4_sse_no_aesni.o.tmp obj/aes192_cbc_dec_by4_sse_no_aesni.o 00:00:35.561 nasm -MD obj/crc32_refl_by16_vclmul_avx512.d -MT obj/crc32_refl_by16_vclmul_avx512.o -o obj/crc32_refl_by16_vclmul_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/crc32_refl_by16_vclmul_avx512.asm 00:00:35.561 ld -r -z ibt -z shstk -o obj/sha512_x8_avx512.o.tmp obj/sha512_x8_avx512.o 00:00:35.561 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_384_flush_avx512.o.tmp obj/mb_mgr_hmac_sha_384_flush_avx512.o 00:00:35.561 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_512_submit_avx512.o.tmp obj/mb_mgr_hmac_sha_512_submit_avx512.o 00:00:35.561 mv obj/mb_mgr_hmac_submit_avx512.o.tmp obj/mb_mgr_hmac_submit_avx512.o 00:00:35.561 mv obj/mb_mgr_aes_flush_avx512.o.tmp obj/mb_mgr_aes_flush_avx512.o 00:00:35.561 ld -r -z ibt -z shstk -o obj/ethernet_fcs_avx512.o.tmp obj/ethernet_fcs_avx512.o 00:00:35.561 ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_flush_avx512.o.tmp obj/mb_mgr_aes192_flush_avx512.o 00:00:35.561 nasm -MD obj/crc32_by16_vclmul_avx512.d -MT obj/crc32_by16_vclmul_avx512.o -o obj/crc32_by16_vclmul_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/crc32_by16_vclmul_avx512.asm 00:00:35.561 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_512_flush_avx512.o.tmp obj/mb_mgr_hmac_sha_512_flush_avx512.o 00:00:35.561 ld -r -z ibt -z shstk -o obj/crc16_x25_avx512.o.tmp 
obj/crc16_x25_avx512.o 00:00:35.561 mv obj/sha512_x8_avx512.o.tmp obj/sha512_x8_avx512.o 00:00:35.561 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_flush_avx512.o.tmp obj/mb_mgr_hmac_sha_256_flush_avx512.o 00:00:35.561 mv obj/mb_mgr_hmac_sha_384_flush_avx512.o.tmp obj/mb_mgr_hmac_sha_384_flush_avx512.o 00:00:35.561 mv obj/mb_mgr_hmac_sha_512_submit_avx512.o.tmp obj/mb_mgr_hmac_sha_512_submit_avx512.o 00:00:35.561 mv obj/ethernet_fcs_avx512.o.tmp obj/ethernet_fcs_avx512.o 00:00:35.561 nasm -MD obj/mb_mgr_aes_cbcs_1_9_submit_avx512.d -MT obj/mb_mgr_aes_cbcs_1_9_submit_avx512.o -o obj/mb_mgr_aes_cbcs_1_9_submit_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes_cbcs_1_9_submit_avx512.asm 00:00:35.561 mv obj/mb_mgr_hmac_sha_512_flush_avx512.o.tmp obj/mb_mgr_hmac_sha_512_flush_avx512.o 00:00:35.561 mv obj/crc16_x25_avx512.o.tmp obj/crc16_x25_avx512.o 00:00:35.561 mv obj/mb_mgr_hmac_sha_256_flush_avx512.o.tmp obj/mb_mgr_hmac_sha_256_flush_avx512.o 00:00:35.561 nasm -MD obj/mb_mgr_aes_cbcs_1_9_flush_avx512.d -MT obj/mb_mgr_aes_cbcs_1_9_flush_avx512.o -o obj/mb_mgr_aes_cbcs_1_9_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes_cbcs_1_9_flush_avx512.asm 00:00:35.561 mv obj/mb_mgr_aes192_flush_avx512.o.tmp obj/mb_mgr_aes192_flush_avx512.o 00:00:35.561 ld -r -z ibt -z shstk -o obj/crc32_refl_by16_vclmul_avx512.o.tmp obj/crc32_refl_by16_vclmul_avx512.o 00:00:35.561 nasm -MD obj/crc32_sctp_avx512.d -MT obj/crc32_sctp_avx512.o -o obj/crc32_sctp_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/crc32_sctp_avx512.asm 00:00:35.561 ld -r -z ibt -z shstk -o obj/sha256_x16_avx512.o.tmp obj/sha256_x16_avx512.o 00:00:35.561 mv obj/crc32_refl_by16_vclmul_avx512.o.tmp obj/crc32_refl_by16_vclmul_avx512.o 00:00:35.561 nasm -MD obj/crc32_lte_avx512.d -MT obj/crc32_lte_avx512.o -o obj/crc32_lte_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/crc32_lte_avx512.asm 00:00:35.561 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_submit_avx512.o.tmp obj/mb_mgr_hmac_sha_224_submit_avx512.o 00:00:35.561 nasm -MD obj/crc32_fp_avx512.d -MT obj/crc32_fp_avx512.o -o obj/crc32_fp_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/crc32_fp_avx512.asm 00:00:35.561 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_384_submit_avx512.o.tmp obj/mb_mgr_hmac_sha_384_submit_avx512.o 00:00:35.561 mv obj/sha256_x16_avx512.o.tmp obj/sha256_x16_avx512.o 00:00:35.561 ld -r -z ibt -z shstk -o obj/crc32_by16_vclmul_avx512.o.tmp obj/crc32_by16_vclmul_avx512.o 00:00:35.561 nasm -MD obj/crc32_iuup_avx512.d -MT obj/crc32_iuup_avx512.o -o obj/crc32_iuup_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/crc32_iuup_avx512.asm 00:00:35.561 ld -r -z ibt -z shstk -o obj/crc32_lte_avx512.o.tmp obj/crc32_lte_avx512.o 00:00:35.561 mv obj/mb_mgr_hmac_sha_224_submit_avx512.o.tmp obj/mb_mgr_hmac_sha_224_submit_avx512.o 00:00:35.561 nasm -MD obj/crc32_wimax_avx512.d -MT obj/crc32_wimax_avx512.o -o obj/crc32_wimax_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/crc32_wimax_avx512.asm 00:00:35.561 ld -r -z ibt -z shstk -o obj/crc32_sctp_avx512.o.tmp obj/crc32_sctp_avx512.o 00:00:35.561 mv 
obj/mb_mgr_hmac_sha_384_submit_avx512.o.tmp obj/mb_mgr_hmac_sha_384_submit_avx512.o 00:00:35.561 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_submit_avx512.o.tmp obj/mb_mgr_hmac_sha_256_submit_avx512.o 00:00:35.561 mv obj/crc32_by16_vclmul_avx512.o.tmp obj/crc32_by16_vclmul_avx512.o 00:00:35.561 mv obj/crc32_lte_avx512.o.tmp obj/crc32_lte_avx512.o 00:00:35.561 nasm -MD obj/gcm128_vaes_avx512.d -MT obj/gcm128_vaes_avx512.o -o obj/gcm128_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/gcm128_vaes_avx512.asm 00:00:35.561 nasm -MD obj/gcm192_vaes_avx512.d -MT obj/gcm192_vaes_avx512.o -o obj/gcm192_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/gcm192_vaes_avx512.asm 00:00:35.561 ld -r -z ibt -z shstk -o obj/crc32_wimax_avx512.o.tmp obj/crc32_wimax_avx512.o 00:00:35.561 mv obj/crc32_sctp_avx512.o.tmp obj/crc32_sctp_avx512.o 00:00:35.561 mv obj/mb_mgr_hmac_sha_256_submit_avx512.o.tmp obj/mb_mgr_hmac_sha_256_submit_avx512.o 00:00:35.561 nasm -MD obj/gcm256_vaes_avx512.d -MT obj/gcm256_vaes_avx512.o -o obj/gcm256_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/gcm256_vaes_avx512.asm 00:00:35.561 ld -r -z ibt -z shstk -o obj/crc32_iuup_avx512.o.tmp obj/crc32_iuup_avx512.o 00:00:35.561 nasm -MD obj/gcm128_avx512.d -MT obj/gcm128_avx512.o -o obj/gcm128_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/gcm128_avx512.asm 00:00:35.561 mv obj/crc32_wimax_avx512.o.tmp obj/crc32_wimax_avx512.o 00:00:35.561 nasm -MD obj/gcm192_avx512.d -MT obj/gcm192_avx512.o -o obj/gcm192_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/gcm192_avx512.asm 00:00:35.561 mv obj/crc32_iuup_avx512.o.tmp obj/crc32_iuup_avx512.o 00:00:35.561 nasm -MD obj/gcm256_avx512.d -MT obj/gcm256_avx512.o -o obj/gcm256_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/gcm256_avx512.asm 00:00:35.562 gcc -MMD -march=sandybridge -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx/mb_mgr_avx.c -o obj/mb_mgr_avx.o 00:00:35.562 gcc -MMD -march=haswell -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx2/mb_mgr_avx2.c -o obj/mb_mgr_avx2.o 00:00:35.562 gcc -MMD -march=broadwell -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . 
-I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx512/mb_mgr_avx512.c -o obj/mb_mgr_avx512.o 00:00:35.562 gcc -MMD -march=nehalem -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC sse/mb_mgr_sse.c -o obj/mb_mgr_sse.o 00:00:35.562 gcc -MMD -march=nehalem -mno-pclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC -O1 no-aesni/mb_mgr_sse_no_aesni.c -o obj/mb_mgr_sse_no_aesni.o 00:00:35.562 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/alloc.c -o obj/alloc.o 00:00:35.562 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/aes_xcbc_expand_key.c -o obj/aes_xcbc_expand_key.o 00:00:35.562 ld -r -z ibt -z shstk -o obj/crc32_fp_avx512.o.tmp obj/crc32_fp_avx512.o 00:00:35.562 ld -r -z ibt -z shstk -o obj/mb_mgr_des_avx512.o.tmp obj/mb_mgr_des_avx512.o 00:00:35.562 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_flush_avx512.o.tmp obj/mb_mgr_aes256_flush_avx512.o 00:00:35.562 mv obj/mb_mgr_des_avx512.o.tmp obj/mb_mgr_des_avx512.o 00:00:35.562 mv obj/mb_mgr_aes256_flush_avx512.o.tmp obj/mb_mgr_aes256_flush_avx512.o 00:00:35.562 mv obj/crc32_fp_avx512.o.tmp obj/crc32_fp_avx512.o 00:00:35.562 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . 
-I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/md5_one_block.c -o obj/md5_one_block.o 00:00:35.562 ld -r -z ibt -z shstk -o obj/mb_mgr_zuc_submit_flush_avx512.o.tmp obj/mb_mgr_zuc_submit_flush_avx512.o 00:00:35.562 ld -r -z ibt -z shstk -o obj/mb_mgr_zuc_submit_flush_gfni_avx512.o.tmp obj/mb_mgr_zuc_submit_flush_gfni_avx512.o 00:00:35.562 gcc -MMD -march=nehalem -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC sse/sha_sse.c -o obj/sha_sse.o 00:00:35.562 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_cbcs_1_9_submit_avx512.o.tmp obj/mb_mgr_aes_cbcs_1_9_submit_avx512.o 00:00:35.562 mv obj/mb_mgr_zuc_submit_flush_avx512.o.tmp obj/mb_mgr_zuc_submit_flush_avx512.o 00:00:35.562 mv obj/mb_mgr_zuc_submit_flush_gfni_avx512.o.tmp obj/mb_mgr_zuc_submit_flush_gfni_avx512.o 00:00:35.562 gcc -MMD -march=sandybridge -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx/sha_avx.c -o obj/sha_avx.o 00:00:35.562 mv obj/mb_mgr_aes_cbcs_1_9_submit_avx512.o.tmp obj/mb_mgr_aes_cbcs_1_9_submit_avx512.o 00:00:35.562 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_cmac_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes_cmac_submit_flush_vaes_avx512.o 00:00:35.562 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/des_key.c -o obj/des_key.o 00:00:35.562 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . 
-I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/des_basic.c -o obj/des_basic.o 00:00:35.562 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/version.c -o obj/version.o 00:00:35.562 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_xcbc_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes_xcbc_submit_flush_vaes_avx512.o 00:00:35.562 mv obj/mb_mgr_aes_cmac_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes_cmac_submit_flush_vaes_avx512.o 00:00:35.562 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/cpu_feature.c -o obj/cpu_feature.o 00:00:35.562 mv obj/mb_mgr_aes_xcbc_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes_xcbc_submit_flush_vaes_avx512.o 00:00:35.562 gcc -MMD -march=nehalem -mno-pclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC -O1 no-aesni/aesni_emu.c -o obj/aesni_emu.o 00:00:35.562 ld -r -z ibt -z shstk -o obj/md5_x8x2_avx2.o.tmp obj/md5_x8x2_avx2.o 00:00:35.562 mv obj/md5_x8x2_avx2.o.tmp obj/md5_x8x2_avx2.o 00:00:35.562 gcc -MMD -march=sandybridge -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . 
-I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx/kasumi_avx.c -o obj/kasumi_avx.o 00:00:35.562 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_cbcs_1_9_flush_avx512.o.tmp obj/mb_mgr_aes_cbcs_1_9_flush_avx512.o 00:00:35.562 ld -r -z ibt -z shstk -o obj/poly_avx512.o.tmp obj/poly_avx512.o 00:00:35.562 mv obj/mb_mgr_aes_cbcs_1_9_flush_avx512.o.tmp obj/mb_mgr_aes_cbcs_1_9_flush_avx512.o 00:00:35.562 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_cmac_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_vaes_avx512.o 00:00:35.562 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/kasumi_iv.c -o obj/kasumi_iv.o 00:00:35.562 mv obj/poly_avx512.o.tmp obj/poly_avx512.o 00:00:35.562 mv obj/mb_mgr_aes256_cmac_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_vaes_avx512.o 00:00:35.562 ld -r -z ibt -z shstk -o obj/poly_fma_avx512.o.tmp obj/poly_fma_avx512.o 00:00:35.562 gcc -MMD -march=nehalem -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC sse/kasumi_sse.c -o obj/kasumi_sse.o 00:00:35.562 gcc -MMD -march=nehalem -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC sse/zuc_sse_top.c -o obj/zuc_sse_top.o 00:00:35.562 mv obj/poly_fma_avx512.o.tmp obj/poly_fma_avx512.o 00:00:35.562 gcc -MMD -march=nehalem -mno-pclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . 
-I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC -O1 no-aesni/zuc_sse_no_aesni_top.c -o obj/zuc_sse_no_aesni_top.o 00:00:35.562 gcc -MMD -march=sandybridge -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx/zuc_avx_top.c -o obj/zuc_avx_top.o 00:00:35.562 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_ccm_auth_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_vaes_avx512.o 00:00:35.562 gcc -MMD -march=haswell -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx2/zuc_avx2_top.c -o obj/zuc_avx2_top.o 00:00:35.562 mv obj/mb_mgr_aes_ccm_auth_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_vaes_avx512.o 00:00:35.563 gcc -MMD -march=broadwell -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx512/zuc_avx512_top.c -o obj/zuc_avx512_top.o 00:00:35.821 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/zuc_iv.c -o obj/zuc_iv.o 00:00:35.821 gcc -MMD -march=nehalem -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . 
-I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC sse/snow3g_sse.c -o obj/snow3g_sse.o 00:00:35.821 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_ccm_auth_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_vaes_avx512.o 00:00:35.821 gcc -MMD -march=nehalem -mno-pclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC -O1 no-aesni/snow3g_sse_no_aesni.c -o obj/snow3g_sse_no_aesni.o 00:00:35.821 mv obj/mb_mgr_aes256_ccm_auth_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_vaes_avx512.o 00:00:35.821 gcc -MMD -march=sandybridge -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx/snow3g_avx.c -o obj/snow3g_avx.o 00:00:35.821 ld -r -z ibt -z shstk -o obj/aes256_cbc_dec_by4_sse_no_aesni.o.tmp obj/aes256_cbc_dec_by4_sse_no_aesni.o 00:00:35.821 ld -r -z ibt -z shstk -o obj/zuc_common.o.tmp obj/zuc_common.o 00:00:35.821 mv obj/aes256_cbc_dec_by4_sse_no_aesni.o.tmp obj/aes256_cbc_dec_by4_sse_no_aesni.o 00:00:35.821 mv obj/zuc_common.o.tmp obj/zuc_common.o 00:00:35.821 gcc -MMD -march=haswell -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx2/snow3g_avx2.c -o obj/snow3g_avx2.o 00:00:35.821 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/snow3g_tables.c -o obj/snow3g_tables.o 00:00:35.821 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . 
-I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/snow3g_iv.c -o obj/snow3g_iv.o 00:00:36.081 ld -r -z ibt -z shstk -o obj/aes_cbc_enc_vaes_avx512.o.tmp obj/aes_cbc_enc_vaes_avx512.o 00:00:36.081 ld -r -z ibt -z shstk -o obj/aes128_cbc_mac_x4_no_aesni.o.tmp obj/aes128_cbc_mac_x4_no_aesni.o 00:00:36.081 mv obj/aes_cbc_enc_vaes_avx512.o.tmp obj/aes_cbc_enc_vaes_avx512.o 00:00:36.081 mv obj/aes128_cbc_mac_x4_no_aesni.o.tmp obj/aes128_cbc_mac_x4_no_aesni.o 00:00:36.081 nasm -MD obj/snow_v_sse.d -MT obj/snow_v_sse.o -o obj/snow_v_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/snow_v_sse.asm 00:00:36.081 nasm -MD obj/snow_v_sse_noaesni.d -MT obj/snow_v_sse_noaesni.o -o obj/snow_v_sse_noaesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/snow_v_sse_noaesni.asm 00:00:36.081 ld -r -z ibt -z shstk -o obj/mb_mgr_zuc_submit_flush_avx2.o.tmp obj/mb_mgr_zuc_submit_flush_avx2.o 00:00:36.081 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/mb_mgr_auto.c -o obj/mb_mgr_auto.o 00:00:36.081 mv obj/mb_mgr_zuc_submit_flush_avx2.o.tmp obj/mb_mgr_zuc_submit_flush_avx2.o 00:00:36.081 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/error.c -o obj/error.o 00:00:36.081 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . 
-I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/gcm.c -o obj/gcm.o 00:00:36.081 ld -r -z ibt -z shstk -o obj/snow_v_sse_noaesni.o.tmp obj/snow_v_sse_noaesni.o 00:00:36.081 mv obj/snow_v_sse_noaesni.o.tmp obj/snow_v_sse_noaesni.o 00:00:36.081 ld -r -z ibt -z shstk -o obj/snow_v_sse.o.tmp obj/snow_v_sse.o 00:00:36.081 mv obj/snow_v_sse.o.tmp obj/snow_v_sse.o 00:00:36.081 ld -r -z ibt -z shstk -o obj/aes_cbcs_dec_vaes_avx512.o.tmp obj/aes_cbcs_dec_vaes_avx512.o 00:00:36.081 mv obj/aes_cbcs_dec_vaes_avx512.o.tmp obj/aes_cbcs_dec_vaes_avx512.o 00:00:36.343 ld -r -z ibt -z shstk -o obj/zuc_sse_gfni.o.tmp obj/zuc_sse_gfni.o 00:00:36.343 mv obj/zuc_sse_gfni.o.tmp obj/zuc_sse_gfni.o 00:00:36.343 ld -r -z ibt -z shstk -o obj/aes_docsis_enc_avx512.o.tmp obj/aes_docsis_enc_avx512.o 00:00:36.343 mv obj/aes_docsis_enc_avx512.o.tmp obj/aes_docsis_enc_avx512.o 00:00:36.602 ld -r -z ibt -z shstk -o obj/chacha20_avx.o.tmp obj/chacha20_avx.o 00:00:36.602 mv obj/chacha20_avx.o.tmp obj/chacha20_avx.o 00:00:36.602 ld -r -z ibt -z shstk -o obj/pon_sse_no_aesni.o.tmp obj/pon_sse_no_aesni.o 00:00:36.602 mv obj/pon_sse_no_aesni.o.tmp obj/pon_sse_no_aesni.o 00:00:36.602 ld -r -z ibt -z shstk -o obj/zuc_sse.o.tmp obj/zuc_sse.o 00:00:36.602 mv obj/zuc_sse.o.tmp obj/zuc_sse.o 00:00:36.861 ld -r -z ibt -z shstk -o obj/aes_docsis_enc_vaes_avx512.o.tmp obj/aes_docsis_enc_vaes_avx512.o 00:00:36.861 mv obj/aes_docsis_enc_vaes_avx512.o.tmp obj/aes_docsis_enc_vaes_avx512.o 00:00:36.861 ld -r -z ibt -z shstk -o obj/chacha20_avx2.o.tmp obj/chacha20_avx2.o 00:00:36.861 mv obj/chacha20_avx2.o.tmp obj/chacha20_avx2.o 00:00:37.120 ld -r -z ibt -z shstk -o obj/aes_docsis_dec_avx512.o.tmp obj/aes_docsis_dec_avx512.o 00:00:37.120 mv obj/aes_docsis_dec_avx512.o.tmp obj/aes_docsis_dec_avx512.o 00:00:37.120 ld -r -z ibt -z shstk -o obj/aes128_cbcs_1_9_dec_by4_sse_no_aesni.o.tmp obj/aes128_cbcs_1_9_dec_by4_sse_no_aesni.o 00:00:37.120 mv obj/aes128_cbcs_1_9_dec_by4_sse_no_aesni.o.tmp obj/aes128_cbcs_1_9_dec_by4_sse_no_aesni.o 00:00:37.379 ld -r -z ibt -z shstk -o obj/aes_cbc_dec_vaes_avx512.o.tmp obj/aes_cbc_dec_vaes_avx512.o 00:00:37.379 mv obj/aes_cbc_dec_vaes_avx512.o.tmp obj/aes_cbc_dec_vaes_avx512.o 00:00:37.379 ld -r -z ibt -z shstk -o obj/zuc_avx.o.tmp obj/zuc_avx.o 00:00:37.379 mv obj/zuc_avx.o.tmp obj/zuc_avx.o 00:00:37.946 ld -r -z ibt -z shstk -o obj/gcm128_avx_gen2.o.tmp obj/gcm128_avx_gen2.o 00:00:37.946 mv obj/gcm128_avx_gen2.o.tmp obj/gcm128_avx_gen2.o 00:00:37.946 ld -r -z ibt -z shstk -o obj/zuc_sse_no_aesni.o.tmp obj/zuc_sse_no_aesni.o 00:00:37.946 mv obj/zuc_sse_no_aesni.o.tmp obj/zuc_sse_no_aesni.o 00:00:37.946 ld -r -z ibt -z shstk -o obj/gcm192_avx512.o.tmp obj/gcm192_avx512.o 00:00:37.946 mv obj/gcm192_avx512.o.tmp obj/gcm192_avx512.o 00:00:37.946 ld -r -z ibt -z shstk -o obj/chacha20_avx512.o.tmp obj/chacha20_avx512.o 00:00:37.947 ld -r -z ibt -z shstk -o obj/gcm256_sse.o.tmp obj/gcm256_sse.o 00:00:37.947 mv obj/chacha20_avx512.o.tmp obj/chacha20_avx512.o 00:00:37.947 mv obj/gcm256_sse.o.tmp obj/gcm256_sse.o 00:00:37.947 ld -r -z ibt -z shstk -o obj/aes128_cntr_ccm_by8_sse_no_aesni.o.tmp obj/aes128_cntr_ccm_by8_sse_no_aesni.o 
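
The ld -r -z ibt -z shstk ... *.o.tmp followed by mv pattern that recurs above is a relocatable re-link: the nasm-built objects carry no .note.gnu.property section, so the build stamps the IBT/SHSTK feature bits onto each one (writing to a .tmp and renaming so an interrupted build never leaves a half-written .o). The C objects need no such step because -fcf-protection=full makes gcc emit the property note and the landing pads itself. A minimal sketch of what that flag instruments; the file and function names are illustrative, not from this build:

    /* cet_demo.c -- compile with: gcc -fcf-protection=full -c cet_demo.c
     * gcc plants an ENDBR64 landing pad at every indirect-call target and
     * records GNU_PROPERTY_X86_FEATURE_1_IBT|SHSTK in .note.gnu.property,
     * which is what a -Wl,-z,cet-report=error final link checks for.     */
    #include <stdio.h>

    static void hello(void) { puts("hello"); }  /* gets an ENDBR64 prologue */

    int main(void)
    {
        void (*fn)(void) = hello;  /* indirect call: IBT polices the target */
        fn();
        return 0;
    }

If any input object lacked the property note, the -Wl,-z,cet-report=error link of libIPSec_MB.so.1.0.0 further down would fail, which is why every assembly object passes through the ld -r step first.
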
00:00:37.947 ld -r -z ibt -z shstk -o obj/gcm192_avx_gen2.o.tmp obj/gcm192_avx_gen2.o 00:00:37.947 mv obj/aes128_cntr_ccm_by8_sse_no_aesni.o.tmp obj/aes128_cntr_ccm_by8_sse_no_aesni.o 00:00:37.947 mv obj/gcm192_avx_gen2.o.tmp obj/gcm192_avx_gen2.o 00:00:38.205 ld -r -z ibt -z shstk -o obj/gcm128_sse.o.tmp obj/gcm128_sse.o 00:00:38.205 mv obj/gcm128_sse.o.tmp obj/gcm128_sse.o 00:00:38.205 ld -r -z ibt -z shstk -o obj/gcm192_sse.o.tmp obj/gcm192_sse.o 00:00:38.205 mv obj/gcm192_sse.o.tmp obj/gcm192_sse.o 00:00:38.463 ld -r -z ibt -z shstk -o obj/gcm256_avx_gen2.o.tmp obj/gcm256_avx_gen2.o 00:00:38.464 mv obj/gcm256_avx_gen2.o.tmp obj/gcm256_avx_gen2.o 00:00:38.464 ld -r -z ibt -z shstk -o obj/gcm128_avx512.o.tmp obj/gcm128_avx512.o 00:00:38.464 mv obj/gcm128_avx512.o.tmp obj/gcm128_avx512.o 00:00:38.464 ld -r -z ibt -z shstk -o obj/aes256_cntr_ccm_by8_sse_no_aesni.o.tmp obj/aes256_cntr_ccm_by8_sse_no_aesni.o 00:00:38.464 mv obj/aes256_cntr_ccm_by8_sse_no_aesni.o.tmp obj/aes256_cntr_ccm_by8_sse_no_aesni.o 00:00:38.721 ld -r -z ibt -z shstk -o obj/zuc_avx512.o.tmp obj/zuc_avx512.o 00:00:38.721 mv obj/zuc_avx512.o.tmp obj/zuc_avx512.o 00:00:38.982 ld -r -z ibt -z shstk -o obj/aes_docsis_dec_vaes_avx512.o.tmp obj/aes_docsis_dec_vaes_avx512.o 00:00:38.982 mv obj/aes_docsis_dec_vaes_avx512.o.tmp obj/aes_docsis_dec_vaes_avx512.o 00:00:38.982 ld -r -z ibt -z shstk -o obj/cntr_ccm_vaes_avx512.o.tmp obj/cntr_ccm_vaes_avx512.o 00:00:38.982 mv obj/cntr_ccm_vaes_avx512.o.tmp obj/cntr_ccm_vaes_avx512.o 00:00:38.982 ld -r -z ibt -z shstk -o obj/gcm256_avx512.o.tmp obj/gcm256_avx512.o 00:00:38.982 mv obj/gcm256_avx512.o.tmp obj/gcm256_avx512.o 00:00:39.918 ld -r -z ibt -z shstk -o obj/gcm256_avx_gen4.o.tmp obj/gcm256_avx_gen4.o 00:00:39.918 mv obj/gcm256_avx_gen4.o.tmp obj/gcm256_avx_gen4.o 00:00:40.176 ld -r -z ibt -z shstk -o obj/aes128_cntr_by8_sse_no_aesni.o.tmp obj/aes128_cntr_by8_sse_no_aesni.o 00:00:40.176 mv obj/aes128_cntr_by8_sse_no_aesni.o.tmp obj/aes128_cntr_by8_sse_no_aesni.o 00:00:40.435 ld -r -z ibt -z shstk -o obj/gcm192_avx_gen4.o.tmp obj/gcm192_avx_gen4.o 00:00:40.435 mv obj/gcm192_avx_gen4.o.tmp obj/gcm192_avx_gen4.o 00:00:41.001 ld -r -z ibt -z shstk -o obj/des_x16_avx512.o.tmp obj/des_x16_avx512.o 00:00:41.001 mv obj/des_x16_avx512.o.tmp obj/des_x16_avx512.o 00:00:41.259 ld -r -z ibt -z shstk -o obj/gcm128_avx_gen4.o.tmp obj/gcm128_avx_gen4.o 00:00:41.259 mv obj/gcm128_avx_gen4.o.tmp obj/gcm128_avx_gen4.o 00:00:41.517 ld -r -z ibt -z shstk -o obj/zuc_avx2.o.tmp obj/zuc_avx2.o 00:00:41.517 mv obj/zuc_avx2.o.tmp obj/zuc_avx2.o 00:00:41.775 ld -r -z ibt -z shstk -o obj/aes192_cntr_by8_sse_no_aesni.o.tmp obj/aes192_cntr_by8_sse_no_aesni.o 00:00:41.775 mv obj/aes192_cntr_by8_sse_no_aesni.o.tmp obj/aes192_cntr_by8_sse_no_aesni.o 00:00:42.341 ld -r -z ibt -z shstk -o obj/aes256_cntr_by8_sse_no_aesni.o.tmp obj/aes256_cntr_by8_sse_no_aesni.o 00:00:42.341 mv obj/aes256_cntr_by8_sse_no_aesni.o.tmp obj/aes256_cntr_by8_sse_no_aesni.o 00:00:42.909 ld -r -z ibt -z shstk -o obj/chacha20_sse.o.tmp obj/chacha20_sse.o 00:00:42.909 mv obj/chacha20_sse.o.tmp obj/chacha20_sse.o 00:00:44.813 ld -r -z ibt -z shstk -o obj/aes_ecb_by4_sse_no_aesni.o.tmp obj/aes_ecb_by4_sse_no_aesni.o 00:00:44.813 mv obj/aes_ecb_by4_sse_no_aesni.o.tmp obj/aes_ecb_by4_sse_no_aesni.o 00:00:47.345 ld -r -z ibt -z shstk -o obj/gcm128_vaes_avx512.o.tmp obj/gcm128_vaes_avx512.o 00:00:47.345 mv obj/gcm128_vaes_avx512.o.tmp obj/gcm128_vaes_avx512.o 00:00:48.507 ld -r -z ibt -z shstk -o obj/gcm192_vaes_avx512.o.tmp 
obj/gcm192_vaes_avx512.o 00:00:48.507 mv obj/gcm192_vaes_avx512.o.tmp obj/gcm192_vaes_avx512.o 00:00:48.764 ld -r -z ibt -z shstk -o obj/gcm256_vaes_avx512.o.tmp obj/gcm256_vaes_avx512.o 00:00:48.764 mv obj/gcm256_vaes_avx512.o.tmp obj/gcm256_vaes_avx512.o 00:00:58.737 ld -r -z ibt -z shstk -o obj/cntr_vaes_avx512.o.tmp obj/cntr_vaes_avx512.o 00:00:58.737 mv obj/cntr_vaes_avx512.o.tmp obj/cntr_vaes_avx512.o 00:02:06.512 ld -r -z ibt -z shstk -o obj/gcm128_sse_no_aesni.o.tmp obj/gcm128_sse_no_aesni.o 00:02:06.512 mv obj/gcm128_sse_no_aesni.o.tmp obj/gcm128_sse_no_aesni.o 00:02:16.489 ld -r -z ibt -z shstk -o obj/gcm192_sse_no_aesni.o.tmp obj/gcm192_sse_no_aesni.o 00:02:16.489 mv obj/gcm192_sse_no_aesni.o.tmp obj/gcm192_sse_no_aesni.o 00:02:20.671 ld -r -z ibt -z shstk -o obj/gcm256_sse_no_aesni.o.tmp obj/gcm256_sse_no_aesni.o 00:02:20.671 mv obj/gcm256_sse_no_aesni.o.tmp obj/gcm256_sse_no_aesni.o 00:02:20.672 gcc -shared -Wl,-z,noexecstack -Wl,-z,relro -Wl,-z,now -fcf-protection=full -Wl,-z,ibt -Wl,-z,shstk -Wl,-z,cet-report=error -Wl,-soname,libIPSec_MB.so.1 -o libIPSec_MB.so.1.0.0 obj/aes_keyexp_128.o obj/aes_keyexp_192.o obj/aes_keyexp_256.o obj/aes_cmac_subkey_gen.o obj/save_xmms.o obj/clear_regs_mem_fns.o obj/const.o obj/aes128_ecbenc_x3.o obj/zuc_common.o obj/wireless_common.o obj/constant_lookup.o obj/crc32_refl_const.o obj/crc32_const.o obj/poly1305.o obj/chacha20_poly1305.o obj/aes128_cbc_dec_by4_sse_no_aesni.o obj/aes192_cbc_dec_by4_sse_no_aesni.o obj/aes256_cbc_dec_by4_sse_no_aesni.o obj/aes_cbc_enc_128_x4_no_aesni.o obj/aes_cbc_enc_192_x4_no_aesni.o obj/aes_cbc_enc_256_x4_no_aesni.o obj/aes128_cntr_by8_sse_no_aesni.o obj/aes192_cntr_by8_sse_no_aesni.o obj/aes256_cntr_by8_sse_no_aesni.o obj/aes_ecb_by4_sse_no_aesni.o obj/aes128_cntr_ccm_by8_sse_no_aesni.o obj/aes256_cntr_ccm_by8_sse_no_aesni.o obj/pon_sse_no_aesni.o obj/zuc_sse_no_aesni.o obj/aes_cfb_sse_no_aesni.o obj/aes128_cbc_mac_x4_no_aesni.o obj/aes256_cbc_mac_x4_no_aesni.o obj/aes_xcbc_mac_128_x4_no_aesni.o obj/mb_mgr_aes_flush_sse_no_aesni.o obj/mb_mgr_aes_submit_sse_no_aesni.o obj/mb_mgr_aes192_flush_sse_no_aesni.o obj/mb_mgr_aes192_submit_sse_no_aesni.o obj/mb_mgr_aes256_flush_sse_no_aesni.o obj/mb_mgr_aes256_submit_sse_no_aesni.o obj/mb_mgr_aes_cmac_submit_flush_sse_no_aesni.o obj/mb_mgr_aes256_cmac_submit_flush_sse_no_aesni.o obj/mb_mgr_aes_ccm_auth_submit_flush_sse_no_aesni.o obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_no_aesni.o obj/mb_mgr_aes_xcbc_flush_sse_no_aesni.o obj/mb_mgr_aes_xcbc_submit_sse_no_aesni.o obj/mb_mgr_zuc_submit_flush_sse_no_aesni.o obj/ethernet_fcs_sse_no_aesni.o obj/crc16_x25_sse_no_aesni.o obj/aes_cbcs_1_9_enc_128_x4_no_aesni.o obj/aes128_cbcs_1_9_dec_by4_sse_no_aesni.o obj/mb_mgr_aes128_cbcs_1_9_submit_sse.o obj/mb_mgr_aes128_cbcs_1_9_flush_sse.o obj/mb_mgr_aes128_cbcs_1_9_submit_sse_no_aesni.o obj/mb_mgr_aes128_cbcs_1_9_flush_sse_no_aesni.o obj/crc32_refl_by8_sse_no_aesni.o obj/crc32_by8_sse_no_aesni.o obj/crc32_sctp_sse_no_aesni.o obj/crc32_lte_sse_no_aesni.o obj/crc32_fp_sse_no_aesni.o obj/crc32_iuup_sse_no_aesni.o obj/crc32_wimax_sse_no_aesni.o obj/gcm128_sse_no_aesni.o obj/gcm192_sse_no_aesni.o obj/gcm256_sse_no_aesni.o obj/aes128_cbc_dec_by4_sse.o obj/aes128_cbc_dec_by8_sse.o obj/aes192_cbc_dec_by4_sse.o obj/aes192_cbc_dec_by8_sse.o obj/aes256_cbc_dec_by4_sse.o obj/aes256_cbc_dec_by8_sse.o obj/aes_cbc_enc_128_x4.o obj/aes_cbc_enc_192_x4.o obj/aes_cbc_enc_256_x4.o obj/aes_cbc_enc_128_x8_sse.o obj/aes_cbc_enc_192_x8_sse.o obj/aes_cbc_enc_256_x8_sse.o obj/pon_sse.o 
obj/aes128_cntr_by8_sse.o obj/aes192_cntr_by8_sse.o obj/aes256_cntr_by8_sse.o obj/aes_ecb_by4_sse.o obj/aes128_cntr_ccm_by8_sse.o obj/aes256_cntr_ccm_by8_sse.o obj/aes_cfb_sse.o obj/aes128_cbc_mac_x4.o obj/aes256_cbc_mac_x4.o obj/aes128_cbc_mac_x8_sse.o obj/aes256_cbc_mac_x8_sse.o obj/aes_xcbc_mac_128_x4.o obj/md5_x4x2_sse.o obj/sha1_mult_sse.o obj/sha1_one_block_sse.o obj/sha224_one_block_sse.o obj/sha256_one_block_sse.o obj/sha384_one_block_sse.o obj/sha512_one_block_sse.o obj/sha512_x2_sse.o obj/sha_256_mult_sse.o obj/sha1_ni_x2_sse.o obj/sha256_ni_x2_sse.o obj/zuc_sse.o obj/zuc_sse_gfni.o obj/mb_mgr_aes_flush_sse.o obj/mb_mgr_aes_submit_sse.o obj/mb_mgr_aes192_flush_sse.o obj/mb_mgr_aes192_submit_sse.o obj/mb_mgr_aes256_flush_sse.o obj/mb_mgr_aes256_submit_sse.o obj/mb_mgr_aes_flush_sse_x8.o obj/mb_mgr_aes_submit_sse_x8.o obj/mb_mgr_aes192_flush_sse_x8.o obj/mb_mgr_aes192_submit_sse_x8.o obj/mb_mgr_aes256_flush_sse_x8.o obj/mb_mgr_aes256_submit_sse_x8.o obj/mb_mgr_aes_cmac_submit_flush_sse.o obj/mb_mgr_aes256_cmac_submit_flush_sse.o obj/mb_mgr_aes_cmac_submit_flush_sse_x8.o obj/mb_mgr_aes256_cmac_submit_flush_sse_x8.o obj/mb_mgr_aes_ccm_auth_submit_flush_sse.o obj/mb_mgr_aes_ccm_auth_submit_flush_sse_x8.o obj/mb_mgr_aes256_ccm_auth_submit_flush_sse.o obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_x8.o obj/mb_mgr_aes_xcbc_flush_sse.o obj/mb_mgr_aes_xcbc_submit_sse.o obj/mb_mgr_hmac_md5_flush_sse.o obj/mb_mgr_hmac_md5_submit_sse.o obj/mb_mgr_hmac_flush_sse.o obj/mb_mgr_hmac_submit_sse.o obj/mb_mgr_hmac_sha_224_flush_sse.o obj/mb_mgr_hmac_sha_224_submit_sse.o obj/mb_mgr_hmac_sha_256_flush_sse.o obj/mb_mgr_hmac_sha_256_submit_sse.o obj/mb_mgr_hmac_sha_384_flush_sse.o obj/mb_mgr_hmac_sha_384_submit_sse.o obj/mb_mgr_hmac_sha_512_flush_sse.o obj/mb_mgr_hmac_sha_512_submit_sse.o obj/mb_mgr_hmac_flush_ni_sse.o obj/mb_mgr_hmac_submit_ni_sse.o obj/mb_mgr_hmac_sha_224_flush_ni_sse.o obj/mb_mgr_hmac_sha_224_submit_ni_sse.o obj/mb_mgr_hmac_sha_256_flush_ni_sse.o obj/mb_mgr_hmac_sha_256_submit_ni_sse.o obj/mb_mgr_zuc_submit_flush_sse.o obj/mb_mgr_zuc_submit_flush_gfni_sse.o obj/ethernet_fcs_sse.o obj/crc16_x25_sse.o obj/crc32_sctp_sse.o obj/aes_cbcs_1_9_enc_128_x4.o obj/aes128_cbcs_1_9_dec_by4_sse.o obj/crc32_refl_by8_sse.o obj/crc32_by8_sse.o obj/crc32_lte_sse.o obj/crc32_fp_sse.o obj/crc32_iuup_sse.o obj/crc32_wimax_sse.o obj/chacha20_sse.o obj/memcpy_sse.o obj/gcm128_sse.o obj/gcm192_sse.o obj/gcm256_sse.o obj/aes_cbc_enc_128_x8.o obj/aes_cbc_enc_192_x8.o obj/aes_cbc_enc_256_x8.o obj/aes128_cbc_dec_by8_avx.o obj/aes192_cbc_dec_by8_avx.o obj/aes256_cbc_dec_by8_avx.o obj/pon_avx.o obj/aes128_cntr_by8_avx.o obj/aes192_cntr_by8_avx.o obj/aes256_cntr_by8_avx.o obj/aes128_cntr_ccm_by8_avx.o obj/aes256_cntr_ccm_by8_avx.o obj/aes_ecb_by4_avx.o obj/aes_cfb_avx.o obj/aes128_cbc_mac_x8.o obj/aes256_cbc_mac_x8.o obj/aes_xcbc_mac_128_x8.o obj/md5_x4x2_avx.o obj/sha1_mult_avx.o obj/sha1_one_block_avx.o obj/sha224_one_block_avx.o obj/sha256_one_block_avx.o obj/sha_256_mult_avx.o obj/sha384_one_block_avx.o obj/sha512_one_block_avx.o obj/sha512_x2_avx.o obj/zuc_avx.o obj/mb_mgr_aes_flush_avx.o obj/mb_mgr_aes_submit_avx.o obj/mb_mgr_aes192_flush_avx.o obj/mb_mgr_aes192_submit_avx.o obj/mb_mgr_aes256_flush_avx.o obj/mb_mgr_aes256_submit_avx.o obj/mb_mgr_aes_cmac_submit_flush_avx.o obj/mb_mgr_aes256_cmac_submit_flush_avx.o obj/mb_mgr_aes_ccm_auth_submit_flush_avx.o obj/mb_mgr_aes256_ccm_auth_submit_flush_avx.o obj/mb_mgr_aes_xcbc_flush_avx.o obj/mb_mgr_aes_xcbc_submit_avx.o obj/mb_mgr_hmac_md5_flush_avx.o 
obj/mb_mgr_hmac_md5_submit_avx.o obj/mb_mgr_hmac_flush_avx.o obj/mb_mgr_hmac_submit_avx.o obj/mb_mgr_hmac_sha_224_flush_avx.o obj/mb_mgr_hmac_sha_224_submit_avx.o obj/mb_mgr_hmac_sha_256_flush_avx.o obj/mb_mgr_hmac_sha_256_submit_avx.o obj/mb_mgr_hmac_sha_384_flush_avx.o obj/mb_mgr_hmac_sha_384_submit_avx.o obj/mb_mgr_hmac_sha_512_flush_avx.o obj/mb_mgr_hmac_sha_512_submit_avx.o obj/mb_mgr_zuc_submit_flush_avx.o obj/ethernet_fcs_avx.o obj/crc16_x25_avx.o obj/aes_cbcs_1_9_enc_128_x8.o obj/aes128_cbcs_1_9_dec_by8_avx.o obj/mb_mgr_aes128_cbcs_1_9_submit_avx.o obj/mb_mgr_aes128_cbcs_1_9_flush_avx.o obj/crc32_refl_by8_avx.o obj/crc32_by8_avx.o obj/crc32_sctp_avx.o obj/crc32_lte_avx.o obj/crc32_fp_avx.o obj/crc32_iuup_avx.o obj/crc32_wimax_avx.o obj/chacha20_avx.o obj/memcpy_avx.o obj/gcm128_avx_gen2.o obj/gcm192_avx_gen2.o obj/gcm256_avx_gen2.o obj/md5_x8x2_avx2.o obj/sha1_x8_avx2.o obj/sha256_oct_avx2.o obj/sha512_x4_avx2.o obj/zuc_avx2.o obj/mb_mgr_hmac_md5_flush_avx2.o obj/mb_mgr_hmac_md5_submit_avx2.o obj/mb_mgr_hmac_flush_avx2.o obj/mb_mgr_hmac_submit_avx2.o obj/mb_mgr_hmac_sha_224_flush_avx2.o obj/mb_mgr_hmac_sha_224_submit_avx2.o obj/mb_mgr_hmac_sha_256_flush_avx2.o obj/mb_mgr_hmac_sha_256_submit_avx2.o obj/mb_mgr_hmac_sha_384_flush_avx2.o obj/mb_mgr_hmac_sha_384_submit_avx2.o obj/mb_mgr_hmac_sha_512_flush_avx2.o obj/mb_mgr_hmac_sha_512_submit_avx2.o obj/mb_mgr_zuc_submit_flush_avx2.o obj/chacha20_avx2.o obj/gcm128_avx_gen4.o obj/gcm192_avx_gen4.o obj/gcm256_avx_gen4.o obj/sha1_x16_avx512.o obj/sha256_x16_avx512.o obj/sha512_x8_avx512.o obj/des_x16_avx512.o obj/cntr_vaes_avx512.o obj/cntr_ccm_vaes_avx512.o obj/aes_cbc_dec_vaes_avx512.o obj/aes_cbc_enc_vaes_avx512.o obj/aes_cbcs_enc_vaes_avx512.o obj/aes_cbcs_dec_vaes_avx512.o obj/aes_docsis_dec_avx512.o obj/aes_docsis_enc_avx512.o obj/aes_docsis_dec_vaes_avx512.o obj/aes_docsis_enc_vaes_avx512.o obj/zuc_avx512.o obj/mb_mgr_aes_submit_avx512.o obj/mb_mgr_aes_flush_avx512.o obj/mb_mgr_aes192_submit_avx512.o obj/mb_mgr_aes192_flush_avx512.o obj/mb_mgr_aes256_submit_avx512.o obj/mb_mgr_aes256_flush_avx512.o obj/mb_mgr_hmac_flush_avx512.o obj/mb_mgr_hmac_submit_avx512.o obj/mb_mgr_hmac_sha_224_flush_avx512.o obj/mb_mgr_hmac_sha_224_submit_avx512.o obj/mb_mgr_hmac_sha_256_flush_avx512.o obj/mb_mgr_hmac_sha_256_submit_avx512.o obj/mb_mgr_hmac_sha_384_flush_avx512.o obj/mb_mgr_hmac_sha_384_submit_avx512.o obj/mb_mgr_hmac_sha_512_flush_avx512.o obj/mb_mgr_hmac_sha_512_submit_avx512.o obj/mb_mgr_des_avx512.o obj/mb_mgr_aes_cmac_submit_flush_vaes_avx512.o obj/mb_mgr_aes256_cmac_submit_flush_vaes_avx512.o obj/mb_mgr_aes_ccm_auth_submit_flush_vaes_avx512.o obj/mb_mgr_aes256_ccm_auth_submit_flush_vaes_avx512.o obj/mb_mgr_aes_xcbc_submit_flush_vaes_avx512.o obj/mb_mgr_zuc_submit_flush_avx512.o obj/mb_mgr_zuc_submit_flush_gfni_avx512.o obj/chacha20_avx512.o obj/poly_avx512.o obj/poly_fma_avx512.o obj/ethernet_fcs_avx512.o obj/crc16_x25_avx512.o obj/crc32_refl_by16_vclmul_avx512.o obj/crc32_by16_vclmul_avx512.o obj/mb_mgr_aes_cbcs_1_9_submit_avx512.o obj/mb_mgr_aes_cbcs_1_9_flush_avx512.o obj/crc32_sctp_avx512.o obj/crc32_lte_avx512.o obj/crc32_fp_avx512.o obj/crc32_iuup_avx512.o obj/crc32_wimax_avx512.o obj/gcm128_vaes_avx512.o obj/gcm192_vaes_avx512.o obj/gcm256_vaes_avx512.o obj/gcm128_avx512.o obj/gcm192_avx512.o obj/gcm256_avx512.o obj/mb_mgr_avx.o obj/mb_mgr_avx2.o obj/mb_mgr_avx512.o obj/mb_mgr_sse.o obj/mb_mgr_sse_no_aesni.o obj/alloc.o obj/aes_xcbc_expand_key.o obj/md5_one_block.o obj/sha_sse.o obj/sha_avx.o obj/des_key.o obj/des_basic.o 
obj/version.o obj/cpu_feature.o obj/aesni_emu.o obj/kasumi_avx.o obj/kasumi_iv.o obj/kasumi_sse.o obj/zuc_sse_top.o obj/zuc_sse_no_aesni_top.o obj/zuc_avx_top.o obj/zuc_avx2_top.o obj/zuc_avx512_top.o obj/zuc_iv.o obj/snow3g_sse.o obj/snow3g_sse_no_aesni.o obj/snow3g_avx.o obj/snow3g_avx2.o obj/snow3g_tables.o obj/snow3g_iv.o obj/snow_v_sse.o obj/snow_v_sse_noaesni.o obj/mb_mgr_auto.o obj/error.o obj/gcm.o -lc 00:02:20.672 ln -f -s libIPSec_MB.so.1.0.0 ./libIPSec_MB.so.1 00:02:20.672 ln -f -s libIPSec_MB.so.1 ./libIPSec_MB.so 00:02:20.672 make[1]: Leaving directory '/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib' 00:02:20.672 make -C test 00:02:20.672 make[1]: Entering directory '/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/test' 00:02:20.932 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o main.o main.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o gcm_test.o gcm_test.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o ctr_test.o ctr_test.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o customop_test.o customop_test.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o des_test.o des_test.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o ccm_test.o ccm_test.c 
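
The shared-library link above records -soname libIPSec_MB.so.1 inside the binary, and the two ln -f -s calls then build the conventional chain libIPSec_MB.so -> libIPSec_MB.so.1 -> libIPSec_MB.so.1.0.0: the bare .so exists only for the link editor, while programs resolve the SONAME at run time. A minimal sketch of the run-time side, loading by SONAME through dlopen instead of linking with -lIPSec_MB as the test binaries below do; imb_get_version_str is an exported intel-ipsec-mb symbol, but treat the lookup as illustrative:

    /* soname_demo.c -- compile with: gcc soname_demo.c -ldl
     * Loads the library by the SONAME that ld.so would resolve for any
     * binary linked against -lIPSec_MB.                                */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        void *h = dlopen("libIPSec_MB.so.1", RTLD_NOW);
        if (h == NULL) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }
        const char *(*ver)(void) =
            (const char *(*)(void))dlsym(h, "imb_get_version_str");
        if (ver != NULL)
            printf("loaded %s\n", ver());
        dlclose(h);
        return 0;
    }
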
00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o cmac_test.o cmac_test.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o utils.o utils.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o hmac_sha1_test.o hmac_sha1_test.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o hmac_sha256_sha512_test.o hmac_sha256_sha512_test.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o hmac_md5_test.o hmac_md5_test.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o aes_test.o aes_test.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o sha_test.o sha_test.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels 
-Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o chained_test.o chained_test.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o api_test.o api_test.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o pon_test.o pon_test.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o ecb_test.o ecb_test.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o zuc_test.o zuc_test.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o kasumi_test.o kasumi_test.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o snow3g_test.o snow3g_test.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o direct_api_test.o direct_api_test.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall 
-Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o clear_mem_test.o clear_mem_test.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o hec_test.o hec_test.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o xcbc_test.o xcbc_test.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o aes_cbcs_test.o aes_cbcs_test.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o crc_test.o crc_test.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o chacha_test.o chacha_test.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o poly1305_test.o poly1305_test.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow 
-fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o chacha20_poly1305_test.o chacha20_poly1305_test.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o null_test.o null_test.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o snow_v_test.o snow_v_test.c 00:02:20.933 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o ipsec_xvalid.o ipsec_xvalid.c 00:02:20.933 nasm -MD misc.d -MT misc.o -o misc.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ misc.asm 00:02:20.933 ld -r -z ibt -z shstk -o misc.o.tmp misc.o 00:02:20.933 mv misc.o.tmp misc.o 00:02:20.933 utils.c:166:32: warning: argument 2 of type ‘uint8_t[6]’ {aka ‘unsigned char[6]’} with mismatched bound [-Warray-parameter=] 00:02:20.933 166 | uint8_t arch_support[IMB_ARCH_NUM], 00:02:20.933 | ~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~ 00:02:20.933 In file included from utils.c:35: 00:02:20.933 utils.h:39:54: note: previously declared as ‘uint8_t *’ {aka ‘unsigned char *’} 00:02:20.933 39 | int update_flags_and_archs(const char *arg, uint8_t *arch_support, 00:02:20.933 | ~~~~~~~~~^~~~~~~~~~~~ 00:02:20.933 utils.c:207:21: warning: argument 1 of type ‘uint8_t[6]’ {aka ‘unsigned char[6]’} with mismatched bound [-Warray-parameter=] 00:02:20.933 207 | detect_arch(uint8_t arch_support[IMB_ARCH_NUM]) 00:02:20.933 | ~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~ 00:02:20.933 utils.h:41:26: note: previously declared as ‘uint8_t *’ {aka ‘unsigned char *’} 00:02:20.934 41 | int detect_arch(uint8_t *arch_support); 00:02:20.934 | ~~~~~~~~~^~~~~~~~~~~~ 00:02:20.934 In file included from null_test.c:33: 00:02:20.934 null_test.c: In function ‘test_null_hash’: 00:02:20.934 ../lib/intel-ipsec-mb.h:1235:10: warning: ‘cipher_key’ may be used uninitialized [-Wmaybe-uninitialized] 00:02:20.934 1235 | ((_mgr)->keyexp_128((_raw), (_enc), (_dec))) 00:02:20.934 | ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 00:02:20.934 null_test.c:62:9: note: in expansion of macro ‘IMB_AES_KEYEXP_128’ 00:02:20.934 62 | IMB_AES_KEYEXP_128(mb_mgr, cipher_key, expkey, dust); 00:02:20.934 | ^~~~~~~~~~~~~~~~~~ 00:02:20.934 ../lib/intel-ipsec-mb.h:1235:10: note: by argument 1 of type ‘const void *’ to ‘void(const void *, void *, void *)’ 00:02:20.934 1235 | ((_mgr)->keyexp_128((_raw), (_enc), (_dec))) 00:02:20.934 | ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 00:02:20.934 null_test.c:62:9: note: in expansion of macro 
‘IMB_AES_KEYEXP_128’ 00:02:20.934 62 | IMB_AES_KEYEXP_128(mb_mgr, cipher_key, expkey, dust); 00:02:20.934 | ^~~~~~~~~~~~~~~~~~ 00:02:20.934 null_test.c:47:33: note: ‘cipher_key’ declared here 00:02:20.934 47 | DECLARE_ALIGNED(uint8_t cipher_key[16], 16); 00:02:20.934 | ^~~~~~~~~~ 00:02:20.934 ../lib/intel-ipsec-mb.h:51:9: note: in definition of macro ‘DECLARE_ALIGNED’ 00:02:20.934 51 | decl __attribute__((aligned(alignval))) 00:02:20.934 | ^~~~ 00:02:22.312 gcc -fPIE -z noexecstack -z relro -z now -fcf-protection=full -Wl,-z,ibt -Wl,-z,shstk -Wl,-z,cet-report=error -L../lib ipsec_xvalid.o utils.o misc.o -lIPSec_MB -o ipsec_xvalid_test 00:02:22.312 gcc -fPIE -z noexecstack -z relro -z now -fcf-protection=full -Wl,-z,ibt -Wl,-z,shstk -Wl,-z,cet-report=error -L../lib main.o gcm_test.o ctr_test.o customop_test.o des_test.o ccm_test.o cmac_test.o utils.o hmac_sha1_test.o hmac_sha256_sha512_test.o hmac_md5_test.o aes_test.o sha_test.o chained_test.o api_test.o pon_test.o ecb_test.o zuc_test.o kasumi_test.o snow3g_test.o direct_api_test.o clear_mem_test.o hec_test.o xcbc_test.o aes_cbcs_test.o crc_test.o chacha_test.o poly1305_test.o chacha20_poly1305_test.o null_test.o snow_v_test.o -lIPSec_MB -o ipsec_MB_testapp 00:02:22.312 make[1]: Leaving directory '/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/test' 00:02:22.312 make -C perf 00:02:22.312 make[1]: Entering directory '/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/perf' 00:02:22.571 gcc -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -pthread -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -fPIE -fstack-protector -D_FORTIFY_SOURCE=2 -c -o ipsec_perf.o ipsec_perf.c 00:02:22.571 gcc -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -pthread -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -fPIE -fstack-protector -D_FORTIFY_SOURCE=2 -c -o msr.o msr.c 00:02:22.571 nasm -MD misc.d -MT misc.o -o misc.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ misc.asm 00:02:22.571 ld -r -z ibt -z shstk -o misc.o.tmp misc.o 00:02:22.571 mv misc.o.tmp misc.o 00:02:23.506 In file included from ipsec_perf.c:59: 00:02:23.506 ipsec_perf.c: In function ‘do_test_gcm’: 00:02:23.506 ../lib/intel-ipsec-mb.h:1382:10: warning: ‘key’ may be used uninitialized [-Wmaybe-uninitialized] 00:02:23.506 1382 | ((_mgr)->gcm128_pre((_key_in), (_key_exp))) 00:02:23.506 | ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 00:02:23.506 ipsec_perf.c:1937:17: note: in expansion of macro ‘IMB_AES128_GCM_PRE’ 00:02:23.506 1937 | IMB_AES128_GCM_PRE(mb_mgr, key, &gdata_key); 00:02:23.506 | ^~~~~~~~~~~~~~~~~~ 00:02:23.506 ../lib/intel-ipsec-mb.h:1382:10: note: by argument 1 of type ‘const void *’ to ‘void(const void *, struct gcm_key_data *)’ 00:02:23.506 1382 | ((_mgr)->gcm128_pre((_key_in), (_key_exp))) 00:02:23.506 | ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 00:02:23.506 ipsec_perf.c:1937:17: note: in expansion of macro ‘IMB_AES128_GCM_PRE’ 
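
The gcc diagnostics in this stretch of the log are advisory, and the build continues past them. -Warray-parameter= fires because utils.h declares the parameter as uint8_t * while utils.c defines it as uint8_t[IMB_ARCH_NUM]; -Wmaybe-uninitialized fires because null_test.c and ipsec_perf.c pass indeterminate key bytes to the key-expansion macros, which appears deliberate since key contents do not affect a null-cipher test or a throughput run. A self-contained sketch that reproduces both diagnostic families with a recent gcc and notes the conventional fixes; every name here is a stand-in, not the library's code:

    /* warn_demo.c -- gcc -O3 -Wall -Wextra -c warn_demo.c */
    #include <stdint.h>
    #include <string.h>

    #define IMB_ARCH_NUM 6

    /* 1) -Warray-parameter=: prototype and definition spell the same
     *    (compatible) parameter type differently.  The fix is to write
     *    uint8_t arch_support[IMB_ARCH_NUM] in both places.            */
    int detect_arch(uint8_t *arch_support);
    int detect_arch(uint8_t arch_support[IMB_ARCH_NUM])
    {
        return arch_support[0] != 0;
    }

    /* 2) -Wmaybe-uninitialized: a stand-in for IMB_AES128_GCM_PRE(),
     *    which reads key bytes through a const void *.                 */
    __attribute__((noinline))
    static void gcm_pre(const void *key_in, uint8_t *expanded)
    {
        memcpy(expanded, key_in, 16);
    }

    int use_key(void)
    {
        uint8_t key[16];        /* indeterminate, as in the perf tool;
                                 * writing `= {0}` quiets the warning  */
        uint8_t expanded[16];
        gcm_pre(key, expanded);
        return expanded[0];
    }
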
00:02:23.506 1937 | IMB_AES128_GCM_PRE(mb_mgr, key, &gdata_key); 00:02:23.506 | ^~~~~~~~~~~~~~~~~~ 00:02:23.506 ../lib/intel-ipsec-mb.h:1384:10: warning: ‘key’ may be used uninitialized [-Wmaybe-uninitialized] 00:02:23.506 1384 | ((_mgr)->gcm192_pre((_key_in), (_key_exp))) 00:02:23.506 | ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 00:02:23.506 ipsec_perf.c:1940:17: note: in expansion of macro ‘IMB_AES192_GCM_PRE’ 00:02:23.506 1940 | IMB_AES192_GCM_PRE(mb_mgr, key, &gdata_key); 00:02:23.506 | ^~~~~~~~~~~~~~~~~~ 00:02:23.506 ../lib/intel-ipsec-mb.h:1384:10: note: by argument 1 of type ‘const void *’ to ‘void(const void *, struct gcm_key_data *)’ 00:02:23.506 1384 | ((_mgr)->gcm192_pre((_key_in), (_key_exp))) 00:02:23.506 | ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 00:02:23.506 ipsec_perf.c:1940:17: note: in expansion of macro ‘IMB_AES192_GCM_PRE’ 00:02:23.506 1940 | IMB_AES192_GCM_PRE(mb_mgr, key, &gdata_key); 00:02:23.506 | ^~~~~~~~~~~~~~~~~~ 00:02:23.506 ../lib/intel-ipsec-mb.h:1386:10: warning: ‘key’ may be used uninitialized [-Wmaybe-uninitialized] 00:02:23.506 1386 | ((_mgr)->gcm256_pre((_key_in), (_key_exp))) 00:02:23.506 | ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 00:02:23.506 ipsec_perf.c:1944:17: note: in expansion of macro ‘IMB_AES256_GCM_PRE’ 00:02:23.506 1944 | IMB_AES256_GCM_PRE(mb_mgr, key, &gdata_key); 00:02:23.506 | ^~~~~~~~~~~~~~~~~~ 00:02:23.506 ../lib/intel-ipsec-mb.h:1386:10: note: by argument 1 of type ‘const void *’ to ‘void(const void *, struct gcm_key_data *)’ 00:02:23.506 1386 | ((_mgr)->gcm256_pre((_key_in), (_key_exp))) 00:02:23.506 | ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 00:02:23.506 ipsec_perf.c:1944:17: note: in expansion of macro ‘IMB_AES256_GCM_PRE’ 00:02:23.506 1944 | IMB_AES256_GCM_PRE(mb_mgr, key, &gdata_key); 00:02:23.506 | ^~~~~~~~~~~~~~~~~~ 00:02:23.765 gcc -fPIE -z noexecstack -z relro -z now -pthread -fcf-protection=full -Wl,-z,ibt -Wl,-z,shstk -Wl,-z,cet-report=error -L../lib ipsec_perf.o msr.o misc.o -lIPSec_MB -o ipsec_perf 00:02:23.765 make[1]: Leaving directory '/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/perf' 00:02:23.765 02:08:14 build_native_dpdk -- common/autobuild_common.sh@119 -- $ DPDK_DRIVERS+=("crypto") 00:02:23.765 02:08:14 build_native_dpdk -- common/autobuild_common.sh@120 -- $ DPDK_DRIVERS+=("$intel_ipsec_mb_drv") 00:02:23.765 02:08:14 build_native_dpdk -- common/autobuild_common.sh@121 -- $ DPDK_DRIVERS+=("crypto/qat") 00:02:23.765 02:08:14 build_native_dpdk -- common/autobuild_common.sh@122 -- $ DPDK_DRIVERS+=("compress/qat") 00:02:23.765 02:08:14 build_native_dpdk -- common/autobuild_common.sh@123 -- $ DPDK_DRIVERS+=("common/qat") 00:02:23.765 02:08:14 build_native_dpdk -- common/autobuild_common.sh@125 -- $ ge 22.11.4 21.11.0 00:02:23.765 02:08:14 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '>=' 21.11.0 00:02:23.765 02:08:14 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:23.765 02:08:14 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:23.765 02:08:14 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:23.765 02:08:14 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:23.765 02:08:14 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:23.765 02:08:14 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:23.765 02:08:14 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=>=' 00:02:23.765 02:08:14 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:23.765 02:08:14 
build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:23.765 02:08:14 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:23.765 02:08:14 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:23.765 02:08:14 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:23.765 02:08:14 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:23.765 02:08:14 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:23.765 02:08:14 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:02:23.765 02:08:14 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:02:23.765 02:08:14 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:23.765 02:08:14 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:02:23.765 02:08:14 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:02:23.765 02:08:14 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:02:23.765 02:08:14 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:02:23.765 02:08:14 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:23.765 02:08:14 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:02:23.765 02:08:14 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:23.765 02:08:14 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:23.765 02:08:14 build_native_dpdk -- scripts/common.sh@364 -- $ return 0 00:02:23.765 02:08:14 build_native_dpdk -- common/autobuild_common.sh@128 -- $ DPDK_DRIVERS+=("bus/auxiliary") 00:02:23.765 02:08:14 build_native_dpdk -- common/autobuild_common.sh@129 -- $ DPDK_DRIVERS+=("common/mlx5") 00:02:23.765 02:08:14 build_native_dpdk -- common/autobuild_common.sh@130 -- $ DPDK_DRIVERS+=("common/mlx5/linux") 00:02:23.765 02:08:14 build_native_dpdk -- common/autobuild_common.sh@131 -- $ DPDK_DRIVERS+=("crypto/mlx5") 00:02:23.765 02:08:14 build_native_dpdk -- common/autobuild_common.sh@132 -- $ mlx5_libs_added=y 00:02:23.766 02:08:14 build_native_dpdk -- common/autobuild_common.sh@134 -- $ dpdk_cflags+=' -I/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib' 00:02:23.766 02:08:14 build_native_dpdk -- common/autobuild_common.sh@135 -- $ dpdk_ldflags+=' -L/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib' 00:02:23.766 02:08:14 build_native_dpdk -- common/autobuild_common.sh@136 -- $ export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib 00:02:23.766 02:08:14 build_native_dpdk -- common/autobuild_common.sh@136 -- $ LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib 00:02:23.766 02:08:14 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 1 -eq 1 ]] 00:02:23.766 02:08:14 build_native_dpdk -- common/autobuild_common.sh@140 -- $ isal_dir=/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l 00:02:23.766 02:08:14 build_native_dpdk -- common/autobuild_common.sh@141 -- $ git clone --branch v2.29.0 --depth 1 https://github.com/intel/isa-l.git /var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l 00:02:23.766 
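
The xtrace block above is cmp_versions from scripts/common.sh walking the two dotted versions field by field: both strings are split on '.', '-' and ':', each pair of fields is compared numerically, and the first unequal pair decides, so 22 > 21 settles "22.11.4 >= 21.11.0" on the first field and the mlx5/auxiliary driver list gets appended. A minimal C sketch of the same comparison; ge_version is an illustrative name, not the script's:

    /* verscmp_demo.c -- gcc verscmp_demo.c && ./a.out
     * Field-by-field numeric comparison in the spirit of cmp_versions:
     * the first unequal field decides, and missing fields read as 0.  */
    #include <stdio.h>
    #include <stdlib.h>

    static int ge_version(const char *a, const char *b)
    {
        char *ea, *eb;
        for (;;) {
            long x = strtol(a, &ea, 10);
            long y = strtol(b, &eb, 10);
            if (x != y)
                return x > y;                /* first difference decides */
            if (*ea == '\0' && *eb == '\0')
                return 1;                    /* all fields equal => >=   */
            a = (*ea != '\0') ? ea + 1 : ea; /* step past '.', '-', ':'  */
            b = (*eb != '\0') ? eb + 1 : eb;
        }
    }

    int main(void)
    {
        printf("22.11.4 >= 21.11.0: %d\n", ge_version("22.11.4", "21.11.0"));
        return 0;
    }
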
Cloning into '/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l'... 00:02:24.701 Note: switching to '806b55ee578efd8158962b90121a4568eb1ecb66'. 00:02:24.701 00:02:24.701 You are in 'detached HEAD' state. You can look around, make experimental 00:02:24.701 changes and commit them, and you can discard any commits you make in this 00:02:24.701 state without impacting any branches by switching back to a branch. 00:02:24.701 00:02:24.701 If you want to create a new branch to retain commits you create, you may 00:02:24.701 do so (now or later) by using -c with the switch command. Example: 00:02:24.701 00:02:24.701 git switch -c <new-branch-name> 00:02:24.701 00:02:24.701 Or undo this operation with: 00:02:24.701 00:02:24.701 git switch - 00:02:24.701 00:02:24.701 Turn off this advice by setting config variable advice.detachedHead to false 00:02:24.701 00:02:24.701 02:08:15 build_native_dpdk -- common/autobuild_common.sh@143 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l 00:02:24.701 02:08:15 build_native_dpdk -- common/autobuild_common.sh@144 -- $ ./autogen.sh 00:02:27.988 libtoolize: putting auxiliary files in AC_CONFIG_AUX_DIR, 'build-aux'. 00:02:27.988 libtoolize: linking file 'build-aux/ltmain.sh' 00:02:28.556 libtoolize: Consider adding 'AC_CONFIG_MACRO_DIRS([m4])' to configure.ac, 00:02:28.556 libtoolize: and rerunning libtoolize and aclocal. 00:02:28.556 libtoolize: Consider adding '-I m4' to ACLOCAL_AMFLAGS in Makefile.am. 00:02:29.933 configure.ac:53: warning: The macro `AC_PROG_CC_STDC' is obsolete. 00:02:29.933 configure.ac:53: You should run autoupdate. 00:02:29.933 ./lib/autoconf/c.m4:1666: AC_PROG_CC_STDC is expanded from... 00:02:29.933 configure.ac:53: the top level 00:02:31.310 configure.ac:23: installing 'build-aux/compile' 00:02:31.310 configure.ac:25: installing 'build-aux/config.guess' 00:02:31.311 configure.ac:25: installing 'build-aux/config.sub' 00:02:31.311 configure.ac:12: installing 'build-aux/install-sh' 00:02:31.311 configure.ac:12: installing 'build-aux/missing' 00:02:31.311 Makefile.am: installing 'build-aux/depcomp' 00:02:31.311 parallel-tests: installing 'build-aux/test-driver' 00:02:31.311 00:02:31.311 ---------------------------------------------------------------- 00:02:31.311 Initialized build system. For a common configuration please run: 00:02:31.311 ---------------------------------------------------------------- 00:02:31.311 00:02:31.311 ./configure --prefix=/usr --libdir=/usr/lib64 00:02:31.311 00:02:31.311 02:08:21 build_native_dpdk -- common/autobuild_common.sh@145 -- $ ./configure 'CFLAGS=-fPIC -g -O2' --enable-shared=yes --prefix=/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build 00:02:31.570 checking for a BSD-compatible install... /usr/bin/install -c 00:02:31.570 checking whether build environment is sane... yes 00:02:31.829 checking for a race-free mkdir -p... /usr/bin/mkdir -p 00:02:31.829 checking for gawk... gawk 00:02:31.829 checking whether make sets $(MAKE)... yes 00:02:31.829 checking whether make supports nested variables... yes 00:02:31.829 checking how to create a pax tar archive... gnutar 00:02:31.829 checking whether make supports the include directive... yes (GNU style) 00:02:31.829 checking for gcc... gcc 00:02:32.088 checking whether the C compiler works... yes 00:02:32.088 checking for C compiler default output file name... a.out 00:02:32.088 checking for suffix of executables... 00:02:32.348 checking whether we are cross compiling... no 00:02:32.348 checking for suffix of object files...
o 00:02:32.348 checking whether the compiler supports GNU C... yes 00:02:32.348 checking whether gcc accepts -g... yes 00:02:32.608 checking for gcc option to enable C11 features... none needed 00:02:32.608 checking whether gcc understands -c and -o together... yes 00:02:32.608 checking dependency style of gcc... gcc3 00:02:32.867 checking dependency style of gcc... gcc3 00:02:32.867 checking build system type... x86_64-pc-linux-gnu 00:02:32.867 checking host system type... x86_64-pc-linux-gnu 00:02:32.867 checking for stdio.h... yes 00:02:33.126 checking for stdlib.h... yes 00:02:33.126 checking for string.h... yes 00:02:33.126 checking for inttypes.h... yes 00:02:33.126 checking for stdint.h... yes 00:02:33.126 checking for strings.h... yes 00:02:33.384 checking for sys/stat.h... yes 00:02:33.384 checking for sys/types.h... yes 00:02:33.384 checking for unistd.h... yes 00:02:33.384 checking for wchar.h... yes 00:02:33.643 checking for minix/config.h... no 00:02:33.643 checking whether it is safe to define __EXTENSIONS__... yes 00:02:33.643 checking whether _XOPEN_SOURCE should be defined... no 00:02:33.643 checking whether make supports nested variables... (cached) yes 00:02:33.643 checking how to print strings... printf 00:02:33.643 checking for a sed that does not truncate output... /usr/bin/sed 00:02:33.643 checking for grep that handles long lines and -e... /usr/bin/grep 00:02:33.643 checking for egrep... /usr/bin/grep -E 00:02:33.643 checking for fgrep... /usr/bin/grep -F 00:02:33.643 checking for ld used by gcc... /usr/bin/ld 00:02:33.902 checking if the linker (/usr/bin/ld) is GNU ld... yes 00:02:33.902 checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B 00:02:33.902 checking the name lister (/usr/bin/nm -B) interface... BSD nm 00:02:33.902 checking whether ln -s works... yes 00:02:33.902 checking the maximum length of command line arguments... 1572864 00:02:33.902 checking how to convert x86_64-pc-linux-gnu file names to x86_64-pc-linux-gnu format... func_convert_file_noop 00:02:33.902 checking how to convert x86_64-pc-linux-gnu file names to toolchain format... func_convert_file_noop 00:02:33.902 checking for /usr/bin/ld option to reload object files... -r 00:02:33.902 checking for file... file 00:02:33.902 checking for objdump... objdump 00:02:33.902 checking how to recognize dependent libraries... pass_all 00:02:33.902 checking for dlltool... no 00:02:33.902 checking how to associate runtime and link libraries... printf %s\n 00:02:33.902 checking for ar... ar 00:02:33.902 checking for archiver @FILE support... @ 00:02:33.902 checking for strip... strip 00:02:33.902 checking for ranlib... ranlib 00:02:34.161 checking command to parse /usr/bin/nm -B output from gcc object... ok 00:02:34.161 checking for sysroot... no 00:02:34.161 checking for a working dd... /usr/bin/dd 00:02:34.161 checking how to truncate binary pipes... /usr/bin/dd bs=4096 count=1 00:02:34.421 checking for mt... no 00:02:34.421 checking if : is a manifest tool... no 00:02:34.421 checking for dlfcn.h... yes 00:02:34.421 checking for objdir... .libs 00:02:34.681 checking if gcc supports -fno-rtti -fno-exceptions... no 00:02:34.681 checking for gcc option to produce PIC... -fPIC -DPIC 00:02:34.681 checking if gcc PIC flag -fPIC -DPIC works... yes 00:02:34.941 checking if gcc static flag -static works... yes 00:02:34.941 checking if gcc supports -c -o file.o... yes 00:02:34.941 checking if gcc supports -c -o file.o... 
(cached) yes 00:02:34.941 checking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes 00:02:35.200 checking whether -lc should be explicitly linked in... no 00:02:35.200 checking dynamic linker characteristics... GNU/Linux ld.so 00:02:35.200 checking how to hardcode library paths into programs... immediate 00:02:35.200 checking whether stripping libraries is possible... yes 00:02:35.200 checking if libtool supports shared libraries... yes 00:02:35.200 checking whether to build shared libraries... yes 00:02:35.200 checking whether to build static libraries... yes 00:02:35.200 checking for a sed that does not truncate output... (cached) /usr/bin/sed 00:02:35.200 checking for yasm... yes 00:02:35.200 checking for modern yasm... yes 00:02:35.200 checking for optional yasm AVX512 support... no 00:02:35.200 checking for nasm... yes 00:02:35.200 checking for modern nasm... yes 00:02:35.459 checking for optional nasm AVX512 support... yes 00:02:35.459 checking for additional nasm AVX512 support... yes 00:02:35.459 Using nasm args target "linux" "-f elf64" 00:02:35.459 checking for limits.h... yes 00:02:35.459 checking for stdint.h... (cached) yes 00:02:35.459 checking for stdlib.h... (cached) yes 00:02:35.459 checking for string.h... (cached) yes 00:02:35.459 checking for inline... inline 00:02:35.718 checking for size_t... yes 00:02:35.718 checking for uint16_t... yes 00:02:35.718 checking for uint32_t... yes 00:02:35.718 checking for uint64_t... yes 00:02:35.977 checking for uint8_t... yes 00:02:35.977 checking for GNU libc compatible malloc... yes 00:02:36.236 checking for memmove... yes 00:02:36.236 checking for memset... yes 00:02:36.236 checking for getopt... yes 00:02:36.495 checking that generated files are newer than configure... 
done 00:02:36.495 configure: creating ./config.status 00:02:37.874 config.status: creating Makefile 00:02:37.874 config.status: creating libisal.pc 00:02:37.874 config.status: executing depfiles commands 00:02:39.846 config.status: executing libtool commands 00:02:39.846 00:02:39.846 isa-l 2.29.0 00:02:39.846 ===== 00:02:39.846 00:02:39.846 prefix: /var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build 00:02:39.846 sysconfdir: ${prefix}/etc 00:02:39.846 libdir: ${exec_prefix}/lib 00:02:39.846 includedir: ${prefix}/include 00:02:39.846 00:02:39.846 compiler: gcc 00:02:39.846 cflags: -fPIC -g -O2 00:02:39.846 ldflags: 00:02:39.846 00:02:39.846 debug: no 00:02:39.846 00:02:39.846 02:08:29 build_native_dpdk -- common/autobuild_common.sh@146 -- $ ln -s /var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/include /var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/isa-l 00:02:39.846 02:08:29 build_native_dpdk -- common/autobuild_common.sh@147 -- $ make -j72 all 00:02:39.846 Building isa-l.h 00:02:39.846 make --no-print-directory all-am 00:02:39.846 MKTMP erasure_code/gf_vect_mul_sse.s 00:02:39.846 CC erasure_code/ec_highlevel_func.lo 00:02:39.846 MKTMP erasure_code/gf_vect_mul_avx.s 00:02:39.846 MKTMP erasure_code/gf_vect_dot_prod_sse.s 00:02:39.846 MKTMP erasure_code/gf_vect_dot_prod_avx.s 00:02:39.846 MKTMP erasure_code/gf_vect_dot_prod_avx2.s 00:02:39.846 MKTMP erasure_code/gf_2vect_dot_prod_sse.s 00:02:39.846 MKTMP erasure_code/gf_3vect_dot_prod_sse.s 00:02:39.846 MKTMP erasure_code/gf_4vect_dot_prod_sse.s 00:02:39.846 MKTMP erasure_code/gf_5vect_dot_prod_sse.s 00:02:39.846 MKTMP erasure_code/gf_6vect_dot_prod_sse.s 00:02:39.846 MKTMP erasure_code/gf_2vect_dot_prod_avx.s 00:02:39.846 MKTMP erasure_code/gf_3vect_dot_prod_avx.s 00:02:39.846 MKTMP erasure_code/gf_4vect_dot_prod_avx.s 00:02:39.846 MKTMP erasure_code/gf_5vect_dot_prod_avx.s 00:02:39.846 MKTMP erasure_code/gf_6vect_dot_prod_avx.s 00:02:39.846 MKTMP erasure_code/gf_2vect_dot_prod_avx2.s 00:02:39.846 MKTMP erasure_code/gf_3vect_dot_prod_avx2.s 00:02:39.846 MKTMP erasure_code/gf_4vect_dot_prod_avx2.s 00:02:39.846 MKTMP erasure_code/gf_5vect_dot_prod_avx2.s 00:02:39.846 MKTMP erasure_code/gf_6vect_dot_prod_avx2.s 00:02:39.846 MKTMP erasure_code/gf_vect_mad_sse.s 00:02:39.846 MKTMP erasure_code/gf_2vect_mad_sse.s 00:02:39.846 MKTMP erasure_code/gf_3vect_mad_sse.s 00:02:39.846 MKTMP erasure_code/gf_4vect_mad_sse.s 00:02:39.846 MKTMP erasure_code/gf_5vect_mad_sse.s 00:02:39.846 MKTMP erasure_code/gf_6vect_mad_sse.s 00:02:39.846 MKTMP erasure_code/gf_vect_mad_avx.s 00:02:39.846 MKTMP erasure_code/gf_2vect_mad_avx.s 00:02:39.846 MKTMP erasure_code/gf_3vect_mad_avx.s 00:02:39.846 MKTMP erasure_code/gf_4vect_mad_avx.s 00:02:39.846 MKTMP erasure_code/gf_5vect_mad_avx.s 00:02:39.846 MKTMP erasure_code/gf_6vect_mad_avx.s 00:02:39.846 MKTMP erasure_code/gf_vect_mad_avx2.s 00:02:39.846 MKTMP erasure_code/gf_2vect_mad_avx2.s 00:02:39.846 MKTMP erasure_code/gf_3vect_mad_avx2.s 00:02:39.846 MKTMP erasure_code/gf_4vect_mad_avx2.s 00:02:39.846 MKTMP erasure_code/gf_5vect_mad_avx2.s 00:02:39.846 MKTMP erasure_code/gf_6vect_mad_avx2.s 00:02:39.846 MKTMP erasure_code/ec_multibinary.s 00:02:39.846 MKTMP erasure_code/gf_vect_dot_prod_avx512.s 00:02:39.846 MKTMP erasure_code/gf_2vect_dot_prod_avx512.s 00:02:39.846 MKTMP erasure_code/gf_3vect_dot_prod_avx512.s 00:02:39.846 MKTMP erasure_code/gf_4vect_dot_prod_avx512.s 00:02:39.846 MKTMP erasure_code/gf_5vect_dot_prod_avx512.s 00:02:39.846 MKTMP erasure_code/gf_6vect_dot_prod_avx512.s 00:02:39.846 
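
The sse/avx/avx2/avx512 suffixes on the generated .s files above correspond to one assembly kernel per instruction-set level, and the *_multibinary.s stubs (ec_multibinary, crc_multibinary, igzip_multibinary) bind each public entry point to the best kernel the CPU supports on first call. A C sketch of that resolve-on-first-call idea; the kernel names are illustrative, and isa-l implements the real thing in assembly with its own CPUID probing:

    /* dispatch_demo.c -- gcc dispatch_demo.c && ./a.out  (x86 only)
     * Resolve-on-first-call dispatch in the style of ec_multibinary.s. */
    #include <stdio.h>

    static void kernel_base(void)   { puts("base kernel");   }
    static void kernel_avx2(void)   { puts("avx2 kernel");   }
    static void kernel_avx512(void) { puts("avx512 kernel"); }

    static void resolve(void);                 /* forward declaration    */
    static void (*kernel)(void) = resolve;     /* stub until first call  */

    static void resolve(void)
    {
        if (__builtin_cpu_supports("avx512f"))
            kernel = kernel_avx512;
        else if (__builtin_cpu_supports("avx2"))
            kernel = kernel_avx2;
        else
            kernel = kernel_base;
        kernel();                               /* tail into real kernel */
    }

    int main(void)
    {
        kernel();   /* first call resolves; later calls go direct */
        kernel();
        return 0;
    }
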
MKTMP erasure_code/gf_vect_mad_avx512.s 00:02:39.846 MKTMP erasure_code/gf_2vect_mad_avx512.s 00:02:39.846 MKTMP erasure_code/gf_3vect_mad_avx512.s 00:02:39.846 MKTMP erasure_code/gf_4vect_mad_avx512.s 00:02:39.846 MKTMP erasure_code/gf_5vect_mad_avx512.s 00:02:39.846 MKTMP erasure_code/gf_6vect_mad_avx512.s 00:02:39.846 MKTMP raid/xor_gen_sse.s 00:02:39.846 MKTMP raid/pq_gen_sse.s 00:02:39.846 MKTMP raid/xor_check_sse.s 00:02:39.846 MKTMP raid/pq_check_sse.s 00:02:39.846 MKTMP raid/pq_gen_avx.s 00:02:39.846 MKTMP raid/xor_gen_avx.s 00:02:39.846 MKTMP raid/pq_gen_avx2.s 00:02:39.846 MKTMP raid/xor_gen_avx512.s 00:02:39.846 MKTMP raid/pq_gen_avx512.s 00:02:39.846 MKTMP raid/raid_multibinary.s 00:02:39.846 MKTMP crc/crc16_t10dif_01.s 00:02:39.846 MKTMP crc/crc16_t10dif_by4.s 00:02:39.846 MKTMP crc/crc16_t10dif_02.s 00:02:39.846 MKTMP crc/crc16_t10dif_by16_10.s 00:02:39.846 MKTMP crc/crc16_t10dif_copy_by4.s 00:02:39.846 MKTMP crc/crc16_t10dif_copy_by4_02.s 00:02:39.846 MKTMP crc/crc32_ieee_01.s 00:02:39.847 MKTMP crc/crc32_ieee_02.s 00:02:39.847 MKTMP crc/crc32_ieee_by4.s 00:02:39.847 MKTMP crc/crc32_ieee_by16_10.s 00:02:39.847 MKTMP crc/crc32_iscsi_01.s 00:02:39.847 MKTMP crc/crc32_iscsi_00.s 00:02:39.847 MKTMP crc/crc_multibinary.s 00:02:39.847 MKTMP crc/crc64_multibinary.s 00:02:39.847 MKTMP crc/crc64_ecma_refl_by8.s 00:02:39.847 MKTMP crc/crc64_ecma_refl_by16_10.s 00:02:39.847 MKTMP crc/crc64_ecma_norm_by8.s 00:02:39.847 MKTMP crc/crc64_ecma_norm_by16_10.s 00:02:39.847 MKTMP crc/crc64_iso_refl_by8.s 00:02:39.847 MKTMP crc/crc64_iso_refl_by16_10.s 00:02:39.847 MKTMP crc/crc64_iso_norm_by8.s 00:02:39.847 MKTMP crc/crc64_iso_norm_by16_10.s 00:02:39.847 MKTMP crc/crc64_jones_refl_by8.s 00:02:39.847 MKTMP crc/crc64_jones_refl_by16_10.s 00:02:39.847 MKTMP crc/crc64_jones_norm_by8.s 00:02:39.847 MKTMP crc/crc64_jones_norm_by16_10.s 00:02:39.847 MKTMP crc/crc32_gzip_refl_by8.s 00:02:39.847 MKTMP crc/crc32_gzip_refl_by8_02.s 00:02:39.847 MKTMP crc/crc32_gzip_refl_by16_10.s 00:02:39.847 MKTMP igzip/igzip_body.s 00:02:39.847 MKTMP igzip/igzip_finish.s 00:02:39.847 MKTMP igzip/igzip_icf_body_h1_gr_bt.s 00:02:39.847 MKTMP igzip/igzip_icf_finish.s 00:02:39.847 MKTMP igzip/rfc1951_lookup.s 00:02:39.847 MKTMP igzip/adler32_sse.s 00:02:40.109 MKTMP igzip/adler32_avx2_4.s 00:02:40.109 MKTMP igzip/igzip_multibinary.s 00:02:40.109 MKTMP igzip/igzip_update_histogram_01.s 00:02:40.109 MKTMP igzip/igzip_update_histogram_04.s 00:02:40.109 MKTMP igzip/igzip_decode_block_stateless_01.s 00:02:40.109 MKTMP igzip/igzip_decode_block_stateless_04.s 00:02:40.109 MKTMP igzip/igzip_inflate_multibinary.s 00:02:40.109 MKTMP igzip/encode_df_04.s 00:02:40.109 MKTMP igzip/encode_df_06.s 00:02:40.109 MKTMP igzip/proc_heap.s 00:02:40.109 MKTMP igzip/igzip_deflate_hash.s 00:02:40.109 MKTMP igzip/igzip_gen_icf_map_lh1_06.s 00:02:40.109 MKTMP igzip/igzip_gen_icf_map_lh1_04.s 00:02:40.109 MKTMP igzip/igzip_set_long_icf_fg_04.s 00:02:40.109 MKTMP igzip/igzip_set_long_icf_fg_06.s 00:02:40.109 MKTMP mem/mem_zero_detect_avx.s 00:02:40.109 help2man -o programs/igzip.1 -i programs/igzip.1.h2m -N ./programs/igzip 00:02:40.109 MKTMP mem/mem_zero_detect_sse.s 00:02:40.109 MKTMP mem/mem_multibinary.s 00:02:40.109 CC programs/igzip_cli.o 00:02:40.109 CC erasure_code/ec_base.lo 00:02:40.109 CC raid/raid_base.lo 00:02:40.109 CC crc/crc_base.lo 00:02:40.110 CC crc/crc64_base.lo 00:02:40.110 CC igzip/igzip.lo 00:02:40.110 CC igzip/hufftables_c.lo 00:02:40.110 CC igzip/igzip_base.lo 00:02:40.110 CC igzip/igzip_icf_base.lo 00:02:40.110 CC 
igzip/adler32_base.lo 00:02:40.110 CC igzip/flatten_ll.lo 00:02:40.110 CC igzip/encode_df.lo 00:02:40.110 CC igzip/igzip_icf_body.lo 00:02:40.110 CC igzip/huff_codes.lo 00:02:40.110 CC igzip/igzip_inflate.lo 00:02:40.110 CC mem/mem_zero_detect_base.lo 00:02:40.110 CCAS erasure_code/gf_vect_mul_avx.lo 00:02:40.110 CCAS erasure_code/gf_vect_mul_sse.lo 00:02:40.110 CCAS erasure_code/gf_vect_dot_prod_sse.lo 00:02:40.110 CCAS erasure_code/gf_vect_dot_prod_avx.lo 00:02:40.110 CCAS erasure_code/gf_vect_dot_prod_avx2.lo 00:02:40.110 CCAS erasure_code/gf_2vect_dot_prod_sse.lo 00:02:40.110 CCAS erasure_code/gf_3vect_dot_prod_sse.lo 00:02:40.110 CCAS erasure_code/gf_4vect_dot_prod_sse.lo 00:02:40.110 CCAS erasure_code/gf_5vect_dot_prod_sse.lo 00:02:40.110 CCAS erasure_code/gf_6vect_dot_prod_sse.lo 00:02:40.110 CCAS erasure_code/gf_3vect_dot_prod_avx.lo 00:02:40.110 CCAS erasure_code/gf_2vect_dot_prod_avx.lo 00:02:40.110 CCAS erasure_code/gf_4vect_dot_prod_avx.lo 00:02:40.110 CCAS erasure_code/gf_5vect_dot_prod_avx.lo 00:02:40.110 CCAS erasure_code/gf_6vect_dot_prod_avx.lo 00:02:40.110 CCAS erasure_code/gf_2vect_dot_prod_avx2.lo 00:02:40.110 CCAS erasure_code/gf_3vect_dot_prod_avx2.lo 00:02:40.110 CCAS erasure_code/gf_4vect_dot_prod_avx2.lo 00:02:40.110 CCAS erasure_code/gf_6vect_dot_prod_avx2.lo 00:02:40.110 CCAS erasure_code/gf_5vect_dot_prod_avx2.lo 00:02:40.110 help2man: can't get `--help' info from ./programs/igzip 00:02:40.110 Try `--no-discard-stderr' if option outputs to stderr 00:02:40.110 CCAS erasure_code/gf_vect_mad_sse.lo 00:02:40.110 CCAS erasure_code/gf_2vect_mad_sse.lo 00:02:40.110 make[1]: [Makefile:4685: programs/igzip.1] Error 127 (ignored) 00:02:40.110 CCAS erasure_code/gf_4vect_mad_sse.lo 00:02:40.110 CCAS erasure_code/gf_vect_mad_avx.lo 00:02:40.110 CCAS erasure_code/gf_6vect_mad_sse.lo 00:02:40.110 CCAS erasure_code/gf_3vect_mad_sse.lo 00:02:40.110 CCAS erasure_code/gf_5vect_mad_sse.lo 00:02:40.110 CCAS erasure_code/gf_2vect_mad_avx.lo 00:02:40.110 CCAS erasure_code/gf_3vect_mad_avx.lo 00:02:40.110 CCAS erasure_code/gf_4vect_mad_avx.lo 00:02:40.110 CCAS erasure_code/gf_5vect_mad_avx.lo 00:02:40.110 CCAS erasure_code/gf_6vect_mad_avx.lo 00:02:40.110 CCAS erasure_code/gf_vect_mad_avx2.lo 00:02:40.110 CCAS erasure_code/gf_2vect_mad_avx2.lo 00:02:40.110 CCAS erasure_code/gf_3vect_mad_avx2.lo 00:02:40.110 CCAS erasure_code/gf_4vect_mad_avx2.lo 00:02:40.110 CCAS erasure_code/gf_6vect_mad_avx2.lo 00:02:40.110 CCAS erasure_code/ec_multibinary.lo 00:02:40.110 CCAS erasure_code/gf_vect_dot_prod_avx512.lo 00:02:40.110 CCAS erasure_code/gf_5vect_mad_avx2.lo 00:02:40.110 CCAS erasure_code/gf_2vect_dot_prod_avx512.lo 00:02:40.110 CCAS erasure_code/gf_3vect_dot_prod_avx512.lo 00:02:40.110 CCAS erasure_code/gf_4vect_dot_prod_avx512.lo 00:02:40.110 CCAS erasure_code/gf_5vect_dot_prod_avx512.lo 00:02:40.110 CCAS erasure_code/gf_6vect_dot_prod_avx512.lo 00:02:40.110 CCAS erasure_code/gf_2vect_mad_avx512.lo 00:02:40.110 CCAS erasure_code/gf_3vect_mad_avx512.lo 00:02:40.110 CCAS erasure_code/gf_vect_mad_avx512.lo 00:02:40.110 CCAS erasure_code/gf_4vect_mad_avx512.lo 00:02:40.110 CCAS erasure_code/gf_5vect_mad_avx512.lo 00:02:40.110 CCAS erasure_code/gf_6vect_mad_avx512.lo 00:02:40.110 CCAS raid/xor_gen_sse.lo 00:02:40.110 CCAS raid/pq_gen_sse.lo 00:02:40.110 CCAS raid/xor_check_sse.lo 00:02:40.110 CCAS raid/pq_check_sse.lo 00:02:40.110 CCAS raid/pq_gen_avx.lo 00:02:40.110 CCAS raid/xor_gen_avx.lo 00:02:40.110 CCAS raid/pq_gen_avx2.lo 00:02:40.110 CCAS raid/xor_gen_avx512.lo 00:02:40.110 CCAS 
raid/pq_gen_avx512.lo 00:02:40.110 CCAS raid/raid_multibinary.lo 00:02:40.110 CCAS crc/crc16_t10dif_01.lo 00:02:40.110 CCAS crc/crc16_t10dif_by4.lo 00:02:40.110 CCAS crc/crc16_t10dif_02.lo 00:02:40.110 CCAS crc/crc16_t10dif_by16_10.lo 00:02:40.110 CCAS crc/crc16_t10dif_copy_by4.lo 00:02:40.110 CCAS crc/crc16_t10dif_copy_by4_02.lo 00:02:40.110 CCAS crc/crc32_ieee_01.lo 00:02:40.110 CCAS crc/crc32_ieee_02.lo 00:02:40.110 CCAS crc/crc32_ieee_by4.lo 00:02:40.110 CCAS crc/crc32_ieee_by16_10.lo 00:02:40.110 CCAS crc/crc32_iscsi_01.lo 00:02:40.110 CCAS crc/crc32_iscsi_00.lo 00:02:40.110 CCAS crc/crc_multibinary.lo 00:02:40.110 CCAS crc/crc64_multibinary.lo 00:02:40.110 CCAS crc/crc64_ecma_refl_by8.lo 00:02:40.110 CCAS crc/crc64_ecma_refl_by16_10.lo 00:02:40.110 CCAS crc/crc64_ecma_norm_by8.lo 00:02:40.370 CCAS crc/crc64_ecma_norm_by16_10.lo 00:02:40.370 CCAS crc/crc64_iso_refl_by8.lo 00:02:40.370 CCAS crc/crc64_iso_refl_by16_10.lo 00:02:40.370 CCAS crc/crc64_iso_norm_by8.lo 00:02:40.370 CCAS crc/crc64_iso_norm_by16_10.lo 00:02:40.370 CCAS crc/crc64_jones_refl_by8.lo 00:02:40.370 CCAS crc/crc64_jones_refl_by16_10.lo 00:02:40.370 CCAS crc/crc64_jones_norm_by8.lo 00:02:40.371 CCAS crc/crc64_jones_norm_by16_10.lo 00:02:40.371 CCAS crc/crc32_gzip_refl_by8.lo 00:02:40.371 CCAS crc/crc32_gzip_refl_by8_02.lo 00:02:40.371 CCAS crc/crc32_gzip_refl_by16_10.lo 00:02:40.371 CCAS igzip/igzip_body.lo 00:02:40.371 CCAS igzip/igzip_finish.lo 00:02:40.371 CCAS igzip/igzip_icf_body_h1_gr_bt.lo 00:02:40.371 CCAS igzip/igzip_icf_finish.lo 00:02:40.371 CCAS igzip/rfc1951_lookup.lo 00:02:40.371 CCAS igzip/adler32_sse.lo 00:02:40.371 CCAS igzip/adler32_avx2_4.lo 00:02:40.371 CCAS igzip/igzip_multibinary.lo 00:02:40.371 CCAS igzip/igzip_update_histogram_01.lo 00:02:40.371 CCAS igzip/igzip_decode_block_stateless_01.lo 00:02:40.371 CCAS igzip/igzip_decode_block_stateless_04.lo 00:02:40.371 CCAS igzip/igzip_update_histogram_04.lo 00:02:40.371 CCAS igzip/igzip_inflate_multibinary.lo 00:02:40.371 CCAS igzip/encode_df_04.lo 00:02:40.371 CCAS igzip/encode_df_06.lo 00:02:40.371 CCAS igzip/proc_heap.lo 00:02:40.371 CCAS igzip/igzip_deflate_hash.lo 00:02:40.371 CCAS igzip/igzip_gen_icf_map_lh1_06.lo 00:02:40.371 CCAS igzip/igzip_gen_icf_map_lh1_04.lo 00:02:40.371 CCAS igzip/igzip_set_long_icf_fg_04.lo 00:02:40.371 CCAS igzip/igzip_set_long_icf_fg_06.lo 00:02:40.371 CCAS mem/mem_zero_detect_avx.lo 00:02:40.371 CCAS mem/mem_zero_detect_sse.lo 00:02:40.371 CCAS mem/mem_multibinary.lo 00:02:44.560 CCLD libisal.la 00:02:44.818 CCLD programs/igzip 00:02:45.077 rm erasure_code/gf_5vect_dot_prod_avx512.s erasure_code/gf_3vect_mad_avx.s erasure_code/gf_5vect_dot_prod_avx2.s erasure_code/gf_6vect_dot_prod_avx.s crc/crc16_t10dif_01.s crc/crc32_iscsi_00.s erasure_code/gf_5vect_dot_prod_avx.s igzip/encode_df_04.s erasure_code/gf_6vect_mad_sse.s erasure_code/gf_4vect_dot_prod_sse.s erasure_code/gf_5vect_mad_avx512.s crc/crc16_t10dif_copy_by4.s erasure_code/gf_5vect_mad_avx2.s erasure_code/gf_vect_mad_avx2.s igzip/proc_heap.s erasure_code/gf_3vect_dot_prod_sse.s igzip/igzip_set_long_icf_fg_06.s crc/crc64_jones_refl_by8.s erasure_code/gf_vect_dot_prod_avx2.s igzip/encode_df_06.s crc/crc_multibinary.s erasure_code/gf_4vect_mad_avx512.s erasure_code/gf_2vect_mad_avx2.s erasure_code/gf_4vect_mad_avx.s igzip/igzip_set_long_icf_fg_04.s crc/crc64_iso_refl_by8.s crc/crc16_t10dif_by16_10.s erasure_code/gf_2vect_dot_prod_avx2.s igzip/igzip_gen_icf_map_lh1_04.s raid/xor_check_sse.s erasure_code/gf_5vect_mad_avx.s raid/pq_gen_sse.s 
erasure_code/gf_vect_mad_avx.s erasure_code/gf_5vect_dot_prod_sse.s erasure_code/ec_multibinary.s crc/crc64_iso_norm_by16_10.s igzip/rfc1951_lookup.s raid/pq_gen_avx2.s erasure_code/gf_6vect_mad_avx.s crc/crc32_gzip_refl_by8.s igzip/igzip_gen_icf_map_lh1_06.s erasure_code/gf_3vect_dot_prod_avx2.s erasure_code/gf_2vect_mad_avx512.s igzip/igzip_update_histogram_04.s crc/crc64_ecma_norm_by16_10.s crc/crc32_ieee_by4.s erasure_code/gf_4vect_dot_prod_avx.s crc/crc16_t10dif_02.s erasure_code/gf_2vect_mad_sse.s raid/xor_gen_sse.s erasure_code/gf_5vect_mad_sse.s erasure_code/gf_3vect_dot_prod_avx512.s erasure_code/gf_3vect_mad_avx512.s raid/pq_gen_avx.s erasure_code/gf_2vect_dot_prod_sse.s igzip/igzip_multibinary.s igzip/igzip_deflate_hash.s erasure_code/gf_vect_mad_avx512.s raid/pq_gen_avx512.s igzip/adler32_sse.s crc/crc32_iscsi_01.s crc/crc16_t10dif_by4.s erasure_code/gf_6vect_dot_prod_avx2.s crc/crc32_gzip_refl_by16_10.s raid/xor_gen_avx512.s erasure_code/gf_vect_dot_prod_avx.s igzip/igzip_icf_finish.s erasure_code/gf_vect_mad_sse.s erasure_code/gf_vect_mul_sse.s erasure_code/gf_6vect_mad_avx512.s igzip/igzip_decode_block_stateless_04.s erasure_code/gf_6vect_mad_avx2.s crc/crc64_ecma_refl_by16_10.s raid/xor_gen_avx.s erasure_code/gf_6vect_dot_prod_avx512.s erasure_code/gf_2vect_mad_avx.s erasure_code/gf_2vect_dot_prod_avx512.s crc/crc32_ieee_by16_10.s crc/crc64_iso_refl_by16_10.s erasure_code/gf_3vect_mad_sse.s raid/pq_check_sse.s erasure_code/gf_2vect_dot_prod_avx.s mem/mem_zero_detect_avx.s crc/crc32_ieee_01.s crc/crc64_jones_refl_by16_10.s crc/crc64_multibinary.s mem/mem_multibinary.s raid/raid_multibinary.s erasure_code/gf_3vect_dot_prod_avx.s crc/crc32_ieee_02.s mem/mem_zero_detect_sse.s igzip/igzip_decode_block_stateless_01.s erasure_code/gf_4vect_dot_prod_avx2.s crc/crc32_gzip_refl_by8_02.s igzip/igzip_finish.s erasure_code/gf_4vect_mad_avx2.s crc/crc16_t10dif_copy_by4_02.s erasure_code/gf_vect_dot_prod_sse.s erasure_code/gf_3vect_mad_avx2.s erasure_code/gf_vect_mul_avx.s igzip/adler32_avx2_4.s erasure_code/gf_4vect_mad_sse.s igzip/igzip_inflate_multibinary.s crc/crc64_ecma_norm_by8.s igzip/igzip_body.s erasure_code/gf_6vect_dot_prod_sse.s crc/crc64_jones_norm_by16_10.s crc/crc64_iso_norm_by8.s crc/crc64_jones_norm_by8.s erasure_code/gf_4vect_dot_prod_avx512.s crc/crc64_ecma_refl_by8.s igzip/igzip_update_histogram_01.s igzip/igzip_icf_body_h1_gr_bt.s erasure_code/gf_vect_dot_prod_avx512.s 00:02:45.077 02:08:35 build_native_dpdk -- common/autobuild_common.sh@148 -- $ make install 00:02:45.077 make --no-print-directory install-am 00:02:45.336 help2man -o programs/igzip.1 -i programs/igzip.1.h2m -N ./programs/igzip 00:02:45.595 /usr/bin/mkdir -p '/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib' 00:02:45.595 /bin/sh ./libtool --mode=install /usr/bin/install -c libisal.la '/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib' 00:02:45.595 libtool: install: /usr/bin/install -c .libs/libisal.so.2.0.29 /var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib/libisal.so.2.0.29 00:02:45.595 libtool: install: (cd /var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib && { ln -s -f libisal.so.2.0.29 libisal.so.2 || { rm -f libisal.so.2 && ln -s libisal.so.2.0.29 libisal.so.2; }; }) 00:02:45.595 libtool: install: (cd /var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib && { ln -s -f libisal.so.2.0.29 libisal.so || { rm -f libisal.so && ln -s libisal.so.2.0.29 libisal.so; }; }) 00:02:45.595 libtool: install: /usr/bin/install -c .libs/libisal.lai 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib/libisal.la 00:02:45.595 libtool: install: /usr/bin/install -c .libs/libisal.a /var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib/libisal.a 00:02:45.595 libtool: install: chmod 644 /var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib/libisal.a 00:02:45.595 libtool: install: ranlib /var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib/libisal.a 00:02:45.854 libtool: finish: PATH="/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin:/sbin" ldconfig -n /var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib 00:02:45.854 ---------------------------------------------------------------------- 00:02:45.854 Libraries have been installed in: 00:02:45.854 /var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib 00:02:45.854 00:02:45.854 If you ever happen to want to link against installed libraries 00:02:45.854 in a given directory, LIBDIR, you must either use libtool, and 00:02:45.854 specify the full pathname of the library, or use the '-LLIBDIR' 00:02:45.854 flag during linking and do at least one of the following: 00:02:45.854 - add LIBDIR to the 'LD_LIBRARY_PATH' environment variable 00:02:45.854 during execution 00:02:45.854 - add LIBDIR to the 'LD_RUN_PATH' environment variable 00:02:45.854 during linking 00:02:45.854 - use the '-Wl,-rpath -Wl,LIBDIR' linker flag 00:02:45.854 - have your system administrator add LIBDIR to '/etc/ld.so.conf' 00:02:45.854 00:02:45.854 See any operating system documentation about shared libraries for 00:02:45.854 more information, such as the ld(1) and ld.so(8) manual pages. 00:02:45.854 ---------------------------------------------------------------------- 00:02:45.854 /usr/bin/mkdir -p '/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/bin' 00:02:45.854 /bin/sh ./libtool --mode=install /usr/bin/install -c programs/igzip '/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/bin' 00:02:45.854 libtool: install: /usr/bin/install -c programs/.libs/igzip /var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/bin/igzip 00:02:45.854 /usr/bin/mkdir -p '/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/share/man/man1' 00:02:45.854 /usr/bin/install -c -m 644 programs/igzip.1 '/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/share/man/man1' 00:02:45.854 /usr/bin/mkdir -p '/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/include' 00:02:45.854 /usr/bin/install -c -m 644 isa-l.h '/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/include/.' 
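[Note] The isa-l portion of the log above is a stock autotools/libtool flow: configure probes for yasm/nasm AVX-512 support, "make -j72 all" assembles the CC/CCAS objects into libisal.la, and "make install" (continuing just below with libisal.pc) lays out lib/, bin/igzip, the man page, and the headers under the per-tree prefix. A minimal standalone reproduction looks roughly like this; the clone URL and prefix are illustrative, not taken from this job:

  # Sketch of the isa-l build/install sequence seen above. Assumes autotools,
  # nasm or yasm, and help2man are installed; URL and prefix are illustrative.
  git clone https://github.com/intel/isa-l.git && cd isa-l
  ./autogen.sh                        # generate ./configure
  ./configure --prefix="$PWD/build"   # same in-tree prefix style as this job
  make -j"$(nproc)"                   # CC/CCAS the .lo objects, CCLD libisal.la
  make install                        # lib/, bin/igzip, man1/igzip.1, libisal.pc

The earlier help2man failure in the log ("Error 127 (ignored)") is benign: at that point programs/igzip was not linked yet, and the same help2man command reruns successfully during "make install" once the binary exists.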
00:02:46.114 /usr/bin/mkdir -p '/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib/pkgconfig' 00:02:46.114 /usr/bin/install -c -m 644 libisal.pc '/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib/pkgconfig' 00:02:46.114 /usr/bin/mkdir -p '/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/include/isa-l' 00:02:46.114 /usr/bin/install -c -m 644 include/test.h include/types.h include/crc.h include/crc64.h include/erasure_code.h include/gf_vect_mul.h include/igzip_lib.h include/mem_routines.h include/raid.h '/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/include/isa-l' 00:02:46.114 02:08:36 build_native_dpdk -- common/autobuild_common.sh@149 -- $ DPDK_DRIVERS+=("compress") 00:02:46.114 02:08:36 build_native_dpdk -- common/autobuild_common.sh@150 -- $ DPDK_DRIVERS+=("compress/isal") 00:02:46.114 02:08:36 build_native_dpdk -- common/autobuild_common.sh@151 -- $ DPDK_DRIVERS+=("compress/qat") 00:02:46.114 02:08:36 build_native_dpdk -- common/autobuild_common.sh@152 -- $ DPDK_DRIVERS+=("common/qat") 00:02:46.114 02:08:36 build_native_dpdk -- common/autobuild_common.sh@153 -- $ ge 22.11.4 21.02.0 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '>=' 21.02.0 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=>=' 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@364 -- $ return 0 00:02:46.114 02:08:36 build_native_dpdk -- common/autobuild_common.sh@156 -- $ test y = n 00:02:46.114 02:08:36 build_native_dpdk -- common/autobuild_common.sh@161 -- $ DPDK_DRIVERS+=("compress/mlx5") 00:02:46.114 02:08:36 build_native_dpdk -- common/autobuild_common.sh@163 -- $ export PKG_CONFIG_PATH=:/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib/pkgconfig 00:02:46.114 02:08:36 build_native_dpdk -- common/autobuild_common.sh@163 -- $ PKG_CONFIG_PATH=:/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib/pkgconfig 00:02:46.114 02:08:36 build_native_dpdk -- common/autobuild_common.sh@164 -- $ export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib 00:02:46.114 02:08:36 build_native_dpdk -- common/autobuild_common.sh@164 -- $ LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib 00:02:46.114 02:08:36 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/dpdk 00:02:46.114 02:08:36 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:46.114 02:08:36 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:46.114 02:08:36 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:46.114 02:08:36 
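[Note] The xtrace output around this point is cmp_versions from scripts/common.sh splitting both version strings on IFS=.-: and comparing them field by field; the first call proves 22.11.4 >= 21.02.0 (the "return 0" above, decided by 22 > 21 in the first field), and the second call, continuing just below with op=<, tests 22.11.4 < 21.11.0. Stripped of the tracing, the core loop is roughly the following; this is a paraphrase of what the trace shows for plain numeric fields, not a verbatim copy of scripts/common.sh:

  # Field-wise version compare in the style of cmp_versions (sketch,
  # numeric fields only; the real script also normalizes non-numeric parts).
  ver_ge() {                       # usage: ver_ge 22.11.4 21.02.0
    local IFS='.-:'
    local -a ver1 ver2
    local -i v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
      (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )) && return 0   # first bigger field wins
      (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )) && return 1   # first smaller field loses
    done
    return 0                       # all fields equal counts as >=
  }
  ver_ge 22.11.4 21.02.0 && echo ">= holds"   # 22 > 21 decides on field 0, as traced above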
build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:46.114 02:08:36 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:02:46.114 02:08:36 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:46.114 patching file config/rte_config.h 00:02:46.114 Hunk #1 succeeded at 60 (offset 1 line). 00:02:46.114 02:08:36 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:02:46.114 02:08:36 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:02:46.114 02:08:36 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:02:46.114 02:08:36 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base crypto crypto/ipsec_mb crypto/qat compress/qat common/qat bus/auxiliary common/mlx5 common/mlx5/linux crypto/mlx5 compress compress/isal compress/qat common/qat compress/mlx5 00:02:46.114 02:08:36 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false '-Dc_link_args= -L/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib' '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow -I/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,crypto,crypto/ipsec_mb,crypto/qat,compress/qat,common/qat,bus/auxiliary,common/mlx5,common/mlx5/linux,crypto/mlx5,compress,compress/isal,compress/qat,common/qat,compress/mlx5, 00:02:52.686 The Meson build system 00:02:52.686 Version: 1.3.1 00:02:52.686 Source dir: /var/jenkins/workspace/crypto-phy-autotest/dpdk 00:02:52.686 Build dir: /var/jenkins/workspace/crypto-phy-autotest/dpdk/build-tmp 00:02:52.686 Build type: native build 00:02:52.686 Program cat found: YES (/usr/bin/cat) 00:02:52.686 Project name: DPDK 00:02:52.686 Project version: 22.11.4 00:02:52.686 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 
20231011 (Red Hat 13.2.1-4)") 00:02:52.686 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:52.686 Host machine cpu family: x86_64 00:02:52.686 Host machine cpu: x86_64 00:02:52.686 Message: ## Building in Developer Mode ## 00:02:52.686 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:52.686 Program check-symbols.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:52.686 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:52.686 Program objdump found: YES (/usr/bin/objdump) 00:02:52.686 Program python3 found: YES (/usr/bin/python3) 00:02:52.686 Program cat found: YES (/usr/bin/cat) 00:02:52.686 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:02:52.686 Checking for size of "void *" : 8 00:02:52.686 Checking for size of "void *" : 8 (cached) 00:02:52.686 Library m found: YES 00:02:52.686 Library numa found: YES 00:02:52.686 Has header "numaif.h" : YES 00:02:52.686 Library fdt found: NO 00:02:52.686 Library execinfo found: NO 00:02:52.686 Has header "execinfo.h" : YES 00:02:52.686 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:52.686 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:52.686 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:52.686 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:52.686 Run-time dependency openssl found: YES 3.0.9 00:02:52.686 Run-time dependency libpcap found: YES 1.10.4 00:02:52.686 Has header "pcap.h" with dependency libpcap: YES 00:02:52.686 Compiler for C supports arguments -Wcast-qual: YES 00:02:52.686 Compiler for C supports arguments -Wdeprecated: YES 00:02:52.686 Compiler for C supports arguments -Wformat: YES 00:02:52.686 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:52.686 Compiler for C supports arguments -Wformat-security: NO 00:02:52.686 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:52.686 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:52.686 Compiler for C supports arguments -Wnested-externs: YES 00:02:52.686 Compiler for C supports arguments -Wold-style-definition: YES 00:02:52.686 Compiler for C supports arguments -Wpointer-arith: YES 00:02:52.686 Compiler for C supports arguments -Wsign-compare: YES 00:02:52.686 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:52.686 Compiler for C supports arguments -Wundef: YES 00:02:52.686 Compiler for C supports arguments -Wwrite-strings: YES 00:02:52.686 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:52.686 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:52.686 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:52.686 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:52.686 Compiler for C supports arguments -mavx512f: YES 00:02:52.686 Checking if "AVX512 checking" compiles: YES 00:02:52.686 Fetching value of define "__SSE4_2__" : 1 00:02:52.686 Fetching value of define "__AES__" : 1 00:02:52.686 Fetching value of define "__AVX__" : 1 00:02:52.686 Fetching value of define "__AVX2__" : 1 00:02:52.686 Fetching value of define "__AVX512BW__" : 1 00:02:52.686 Fetching value of define "__AVX512CD__" : 1 00:02:52.686 Fetching value of define "__AVX512DQ__" : 1 00:02:52.686 Fetching value of define "__AVX512F__" : 1 00:02:52.686 Fetching value of define "__AVX512VL__" : 1 00:02:52.686 
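[Note] Each "Compiler for C supports arguments ..." line here is Meson compiling a trivial program with the candidate flag, and each "Fetching value of define ..." line is a preprocessor query for a predefined macro. Done by hand, the two probes reduce to something like the following; this is illustrative, not the actual Meson internals:

  # 1) Flag-support probe: does gcc accept -mavx512f at all?
  echo 'int main(void){return 0;}' | gcc -mavx512f -x c - -o /dev/null \
    && echo 'supports -mavx512f: YES'
  # 2) Macro probe: is __AVX512F__ predefined once that flag is on?
  gcc -mavx512f -dM -E - </dev/null | grep -w __AVX512F__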
Fetching value of define "__PCLMUL__" : 1 00:02:52.686 Fetching value of define "__RDRND__" : 1 00:02:52.686 Fetching value of define "__RDSEED__" : 1 00:02:52.686 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:52.686 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:52.686 Message: lib/kvargs: Defining dependency "kvargs" 00:02:52.686 Message: lib/telemetry: Defining dependency "telemetry" 00:02:52.686 Checking for function "getentropy" : YES 00:02:52.686 Message: lib/eal: Defining dependency "eal" 00:02:52.686 Message: lib/ring: Defining dependency "ring" 00:02:52.686 Message: lib/rcu: Defining dependency "rcu" 00:02:52.686 Message: lib/mempool: Defining dependency "mempool" 00:02:52.686 Message: lib/mbuf: Defining dependency "mbuf" 00:02:52.686 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:52.686 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:52.686 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:52.686 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:52.686 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:52.686 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:52.686 Compiler for C supports arguments -mpclmul: YES 00:02:52.686 Compiler for C supports arguments -maes: YES 00:02:52.686 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:52.686 Compiler for C supports arguments -mavx512bw: YES 00:02:52.686 Compiler for C supports arguments -mavx512dq: YES 00:02:52.686 Compiler for C supports arguments -mavx512vl: YES 00:02:52.686 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:52.686 Compiler for C supports arguments -mavx2: YES 00:02:52.686 Compiler for C supports arguments -mavx: YES 00:02:52.686 Message: lib/net: Defining dependency "net" 00:02:52.687 Message: lib/meter: Defining dependency "meter" 00:02:52.687 Message: lib/ethdev: Defining dependency "ethdev" 00:02:52.687 Message: lib/pci: Defining dependency "pci" 00:02:52.687 Message: lib/cmdline: Defining dependency "cmdline" 00:02:52.687 Message: lib/metrics: Defining dependency "metrics" 00:02:52.687 Message: lib/hash: Defining dependency "hash" 00:02:52.687 Message: lib/timer: Defining dependency "timer" 00:02:52.687 Fetching value of define "__AVX2__" : 1 (cached) 00:02:52.687 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:52.687 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:52.687 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:52.687 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:52.687 Message: lib/acl: Defining dependency "acl" 00:02:52.687 Message: lib/bbdev: Defining dependency "bbdev" 00:02:52.687 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:52.687 Run-time dependency libelf found: YES 0.190 00:02:52.687 Message: lib/bpf: Defining dependency "bpf" 00:02:52.687 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:52.687 Message: lib/compressdev: Defining dependency "compressdev" 00:02:52.687 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:52.687 Message: lib/distributor: Defining dependency "distributor" 00:02:52.687 Message: lib/efd: Defining dependency "efd" 00:02:52.687 Message: lib/eventdev: Defining dependency "eventdev" 00:02:52.687 Message: lib/gpudev: Defining dependency "gpudev" 00:02:52.687 Message: lib/gro: Defining dependency "gro" 00:02:52.687 Message: lib/gso: Defining dependency "gso" 00:02:52.687 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:52.687 Message: lib/jobstats: Defining 
dependency "jobstats" 00:02:52.687 Message: lib/latencystats: Defining dependency "latencystats" 00:02:52.687 Message: lib/lpm: Defining dependency "lpm" 00:02:52.687 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:52.687 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:52.687 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:52.687 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:52.687 Message: lib/member: Defining dependency "member" 00:02:52.687 Message: lib/pcapng: Defining dependency "pcapng" 00:02:52.687 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:52.687 Message: lib/power: Defining dependency "power" 00:02:52.687 Message: lib/rawdev: Defining dependency "rawdev" 00:02:52.687 Message: lib/regexdev: Defining dependency "regexdev" 00:02:52.687 Message: lib/dmadev: Defining dependency "dmadev" 00:02:52.687 Message: lib/rib: Defining dependency "rib" 00:02:52.687 Message: lib/reorder: Defining dependency "reorder" 00:02:52.687 Message: lib/sched: Defining dependency "sched" 00:02:52.687 Message: lib/security: Defining dependency "security" 00:02:52.687 Message: lib/stack: Defining dependency "stack" 00:02:52.687 Has header "linux/userfaultfd.h" : YES 00:02:52.687 Message: lib/vhost: Defining dependency "vhost" 00:02:52.687 Message: lib/ipsec: Defining dependency "ipsec" 00:02:52.687 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:52.687 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:52.687 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:52.687 Message: lib/fib: Defining dependency "fib" 00:02:52.687 Message: lib/port: Defining dependency "port" 00:02:52.687 Message: lib/pdump: Defining dependency "pdump" 00:02:52.687 Message: lib/table: Defining dependency "table" 00:02:52.687 Message: lib/pipeline: Defining dependency "pipeline" 00:02:52.687 Message: lib/graph: Defining dependency "graph" 00:02:52.687 Message: lib/node: Defining dependency "node" 00:02:52.687 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:52.687 Message: drivers/bus/auxiliary: Defining dependency "bus_auxiliary" 00:02:52.687 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:52.687 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:52.687 Compiler for C supports arguments -std=c11: YES 00:02:52.687 Compiler for C supports arguments -Wno-strict-prototypes: YES 00:02:52.687 Compiler for C supports arguments -D_BSD_SOURCE: YES 00:02:52.687 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES 00:02:52.687 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES 00:02:56.881 Run-time dependency libmlx5 found: YES 1.24.46.0 00:02:56.881 Run-time dependency libibverbs found: YES 1.14.46.0 00:02:56.881 Library mtcr_ul found: NO 00:02:56.881 Header "infiniband/verbs.h" has symbol "IBV_RX_HASH_IPSEC_SPI" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/verbs.h" has symbol "IBV_ACCESS_RELAXED_ORDERING " with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP" with 
dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_flow_action_packet_reformat" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_MPLS" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAGS_PCI_WRITE_END_PADDING" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAG_RX_END_PADDING" with dependencies libmlx5, libibverbs: NO 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_devx_port" with dependencies libmlx5, libibverbs: NO 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_port" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_ib_port" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_create" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_COUNTERS_DEVX" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_DEFAULT_MISS" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_query_async" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_qp_query" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_pp_alloc" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_devx_tir" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_get_event" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_meter" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "MLX5_MMAP_GET_NC_PAGES_CMD" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_NIC_RX" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_FDB" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_push_vlan" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_alloc_var" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ENHANCED_MPSW" with dependencies libmlx5, libibverbs: NO 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_SEND_EN" with dependencies libmlx5, libibverbs: NO 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_WAIT" with dependencies libmlx5, libibverbs: NO 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ACCESS_ASO" with dependencies libmlx5, libibverbs: NO 00:02:56.881 Header "linux/ethtool.h" has symbol "SUPPORTED_40000baseKR4_Full" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "linux/ethtool.h" has symbol "SUPPORTED_40000baseCR4_Full" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header 
"linux/ethtool.h" has symbol "SUPPORTED_40000baseSR4_Full" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "linux/ethtool.h" has symbol "SUPPORTED_40000baseLR4_Full" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "linux/ethtool.h" has symbol "SUPPORTED_56000baseKR4_Full" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "linux/ethtool.h" has symbol "SUPPORTED_56000baseCR4_Full" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "linux/ethtool.h" has symbol "SUPPORTED_56000baseSR4_Full" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "linux/ethtool.h" has symbol "SUPPORTED_56000baseLR4_Full" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "linux/ethtool.h" has symbol "ETHTOOL_LINK_MODE_25000baseCR_Full_BIT" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "linux/ethtool.h" has symbol "ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "linux/ethtool.h" has symbol "ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "linux/if_link.h" has symbol "IFLA_NUM_VF" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "linux/if_link.h" has symbol "IFLA_EXT_MASK" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "linux/if_link.h" has symbol "IFLA_PHYS_SWITCH_ID" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "linux/if_link.h" has symbol "IFLA_PHYS_PORT_NAME" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "rdma/rdma_netlink.h" has symbol "RDMA_NL_NLDEV" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_GET" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_PORT_GET" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_INDEX" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_NAME" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_INDEX" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_STATE" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_NDEV_INDEX" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_domain" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_sampler" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_set_reclaim_device_memory" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_array" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "linux/devlink.h" has symbol "DEVLINK_GENL_NAME" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_aso" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/verbs.h" has symbol "INFINIBAND_VERBS_H" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "MLX5_WQE_UMR_CTRL_FLAG_INLINE" with dependencies libmlx5, libibverbs: YES 00:02:56.881 
Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_rule" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_allow_duplicate_rules" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/verbs.h" has symbol "ibv_reg_mr_iova" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/verbs.h" has symbol "ibv_import_device" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_root_table" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Header "infiniband/verbs.h" has symbol "ibv_is_fork_initialized" with dependencies libmlx5, libibverbs: YES 00:02:56.881 Checking whether type "struct mlx5dv_sw_parsing_caps" has member "sw_parsing_offloads" with dependencies libmlx5, libibverbs: YES 00:02:59.417 Checking whether type "struct ibv_counter_set_init_attr" has member "counter_set_id" with dependencies libmlx5, libibverbs: NO 00:02:59.417 Checking whether type "struct ibv_counters_init_attr" has member "comp_mask" with dependencies libmlx5, libibverbs: YES 00:02:59.417 Checking whether type "struct mlx5dv_devx_uar" has member "mmap_off" with dependencies libmlx5, libibverbs: YES 00:02:59.417 Checking whether type "struct mlx5dv_flow_matcher_attr" has member "ft_type" with dependencies libmlx5, libibverbs: YES 00:02:59.417 Configuring mlx5_autoconf.h using configuration 00:02:59.417 Message: drivers/common/mlx5: Defining dependency "common_mlx5" 00:02:59.417 Run-time dependency libcrypto found: YES 3.0.9 00:02:59.417 Library IPSec_MB found: YES 00:02:59.417 Dependency libcrypto found: YES 3.0.9 (cached) 00:02:59.417 Fetching value of define "IMB_VERSION_STR" : "1.0.0" 00:02:59.417 Message: drivers/common/qat: Defining dependency "common_qat" 00:02:59.417 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:59.417 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:59.417 Compiler for C supports arguments -Wno-unused-value: YES 00:02:59.417 Compiler for C supports arguments -Wno-format: YES 00:02:59.417 Compiler for C supports arguments -Wno-format-security: YES 00:02:59.417 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:59.417 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:59.417 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:59.417 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:59.417 Fetching value of define "__AVX2__" : 1 (cached) 00:02:59.417 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:59.417 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:59.417 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:59.417 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:59.417 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:59.417 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:59.417 Library IPSec_MB found: YES 00:02:59.417 Fetching value of define "IMB_VERSION_STR" : "1.0.0" (cached) 00:02:59.417 Message: drivers/crypto/ipsec_mb: Defining dependency "crypto_ipsec_mb" 00:02:59.417 Compiler for C supports arguments -std=c11: YES (cached) 00:02:59.417 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:02:59.417 Compiler for C supports arguments -D_BSD_SOURCE: YES 
(cached) 00:02:59.417 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:02:59.417 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:02:59.417 Message: drivers/crypto/mlx5: Defining dependency "crypto_mlx5" 00:02:59.417 Run-time dependency libisal found: YES 2.29.0 00:02:59.417 Message: drivers/compress/isal: Defining dependency "compress_isal" 00:02:59.417 Compiler for C supports arguments -std=c11: YES (cached) 00:02:59.417 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:02:59.417 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:02:59.417 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:02:59.417 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:02:59.417 Message: drivers/compress/mlx5: Defining dependency "compress_mlx5" 00:02:59.417 Program doxygen found: YES (/usr/bin/doxygen) 00:02:59.417 Configuring doxy-api.conf using configuration 00:02:59.417 Program sphinx-build found: NO 00:02:59.417 Configuring rte_build_config.h using configuration 00:02:59.417 Message: 00:02:59.417 ================= 00:02:59.417 Applications Enabled 00:02:59.417 ================= 00:02:59.417 00:02:59.417 apps: 00:02:59.417 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:59.417 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:59.417 test-security-perf, 00:02:59.417 00:02:59.417 Message: 00:02:59.417 ================= 00:02:59.417 Libraries Enabled 00:02:59.417 ================= 00:02:59.417 00:02:59.417 libs: 00:02:59.417 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:59.417 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:59.417 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:59.417 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:59.417 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:59.417 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:59.417 table, pipeline, graph, node, 00:02:59.417 00:02:59.417 Message: 00:02:59.417 =============== 00:02:59.417 Drivers Enabled 00:02:59.417 =============== 00:02:59.417 00:02:59.417 common: 00:02:59.417 mlx5, qat, 00:02:59.417 bus: 00:02:59.417 auxiliary, pci, vdev, 00:02:59.417 mempool: 00:02:59.417 ring, 00:02:59.417 dma: 00:02:59.417 00:02:59.417 net: 00:02:59.417 i40e, 00:02:59.417 raw: 00:02:59.417 00:02:59.417 crypto: 00:02:59.417 ipsec_mb, mlx5, 00:02:59.417 compress: 00:02:59.417 isal, mlx5, 00:02:59.417 regex: 00:02:59.417 00:02:59.417 vdpa: 00:02:59.417 00:02:59.417 event: 00:02:59.417 00:02:59.417 baseband: 00:02:59.417 00:02:59.417 gpu: 00:02:59.417 00:02:59.417 00:02:59.417 Message: 00:02:59.417 ================= 00:02:59.417 Content Skipped 00:02:59.417 ================= 00:02:59.417 00:02:59.417 apps: 00:02:59.417 00:02:59.417 libs: 00:02:59.417 kni: explicitly disabled via build config (deprecated lib) 00:02:59.417 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:59.417 00:02:59.417 drivers: 00:02:59.417 common/cpt: not in enabled drivers build config 00:02:59.417 common/dpaax: not in enabled drivers build config 00:02:59.417 common/iavf: not in enabled drivers build config 00:02:59.417 common/idpf: not in enabled drivers build config 00:02:59.417 common/mvep: not in enabled drivers build config 00:02:59.417 common/octeontx: not in enabled drivers build config 00:02:59.418 
bus/dpaa: not in enabled drivers build config 00:02:59.418 bus/fslmc: not in enabled drivers build config 00:02:59.418 bus/ifpga: not in enabled drivers build config 00:02:59.418 bus/vmbus: not in enabled drivers build config 00:02:59.418 common/cnxk: not in enabled drivers build config 00:02:59.418 common/sfc_efx: not in enabled drivers build config 00:02:59.418 mempool/bucket: not in enabled drivers build config 00:02:59.418 mempool/cnxk: not in enabled drivers build config 00:02:59.418 mempool/dpaa: not in enabled drivers build config 00:02:59.418 mempool/dpaa2: not in enabled drivers build config 00:02:59.418 mempool/octeontx: not in enabled drivers build config 00:02:59.418 mempool/stack: not in enabled drivers build config 00:02:59.418 dma/cnxk: not in enabled drivers build config 00:02:59.418 dma/dpaa: not in enabled drivers build config 00:02:59.418 dma/dpaa2: not in enabled drivers build config 00:02:59.418 dma/hisilicon: not in enabled drivers build config 00:02:59.418 dma/idxd: not in enabled drivers build config 00:02:59.418 dma/ioat: not in enabled drivers build config 00:02:59.418 dma/skeleton: not in enabled drivers build config 00:02:59.418 net/af_packet: not in enabled drivers build config 00:02:59.418 net/af_xdp: not in enabled drivers build config 00:02:59.418 net/ark: not in enabled drivers build config 00:02:59.418 net/atlantic: not in enabled drivers build config 00:02:59.418 net/avp: not in enabled drivers build config 00:02:59.418 net/axgbe: not in enabled drivers build config 00:02:59.418 net/bnx2x: not in enabled drivers build config 00:02:59.418 net/bnxt: not in enabled drivers build config 00:02:59.418 net/bonding: not in enabled drivers build config 00:02:59.418 net/cnxk: not in enabled drivers build config 00:02:59.418 net/cxgbe: not in enabled drivers build config 00:02:59.418 net/dpaa: not in enabled drivers build config 00:02:59.418 net/dpaa2: not in enabled drivers build config 00:02:59.418 net/e1000: not in enabled drivers build config 00:02:59.418 net/ena: not in enabled drivers build config 00:02:59.418 net/enetc: not in enabled drivers build config 00:02:59.418 net/enetfec: not in enabled drivers build config 00:02:59.418 net/enic: not in enabled drivers build config 00:02:59.418 net/failsafe: not in enabled drivers build config 00:02:59.418 net/fm10k: not in enabled drivers build config 00:02:59.418 net/gve: not in enabled drivers build config 00:02:59.418 net/hinic: not in enabled drivers build config 00:02:59.418 net/hns3: not in enabled drivers build config 00:02:59.418 net/iavf: not in enabled drivers build config 00:02:59.418 net/ice: not in enabled drivers build config 00:02:59.418 net/idpf: not in enabled drivers build config 00:02:59.418 net/igc: not in enabled drivers build config 00:02:59.418 net/ionic: not in enabled drivers build config 00:02:59.418 net/ipn3ke: not in enabled drivers build config 00:02:59.418 net/ixgbe: not in enabled drivers build config 00:02:59.418 net/kni: not in enabled drivers build config 00:02:59.418 net/liquidio: not in enabled drivers build config 00:02:59.418 net/mana: not in enabled drivers build config 00:02:59.418 net/memif: not in enabled drivers build config 00:02:59.418 net/mlx4: not in enabled drivers build config 00:02:59.418 net/mlx5: not in enabled drivers build config 00:02:59.418 net/mvneta: not in enabled drivers build config 00:02:59.418 net/mvpp2: not in enabled drivers build config 00:02:59.418 net/netvsc: not in enabled drivers build config 00:02:59.418 net/nfb: not in enabled drivers build 
config 00:02:59.418 net/nfp: not in enabled drivers build config 00:02:59.418 net/ngbe: not in enabled drivers build config 00:02:59.418 net/null: not in enabled drivers build config 00:02:59.418 net/octeontx: not in enabled drivers build config 00:02:59.418 net/octeon_ep: not in enabled drivers build config 00:02:59.418 net/pcap: not in enabled drivers build config 00:02:59.418 net/pfe: not in enabled drivers build config 00:02:59.418 net/qede: not in enabled drivers build config 00:02:59.418 net/ring: not in enabled drivers build config 00:02:59.418 net/sfc: not in enabled drivers build config 00:02:59.418 net/softnic: not in enabled drivers build config 00:02:59.418 net/tap: not in enabled drivers build config 00:02:59.418 net/thunderx: not in enabled drivers build config 00:02:59.418 net/txgbe: not in enabled drivers build config 00:02:59.418 net/vdev_netvsc: not in enabled drivers build config 00:02:59.418 net/vhost: not in enabled drivers build config 00:02:59.418 net/virtio: not in enabled drivers build config 00:02:59.418 net/vmxnet3: not in enabled drivers build config 00:02:59.418 raw/cnxk_bphy: not in enabled drivers build config 00:02:59.418 raw/cnxk_gpio: not in enabled drivers build config 00:02:59.418 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:59.418 raw/ifpga: not in enabled drivers build config 00:02:59.418 raw/ntb: not in enabled drivers build config 00:02:59.418 raw/skeleton: not in enabled drivers build config 00:02:59.418 crypto/armv8: not in enabled drivers build config 00:02:59.418 crypto/bcmfs: not in enabled drivers build config 00:02:59.418 crypto/caam_jr: not in enabled drivers build config 00:02:59.418 crypto/ccp: not in enabled drivers build config 00:02:59.418 crypto/cnxk: not in enabled drivers build config 00:02:59.418 crypto/dpaa_sec: not in enabled drivers build config 00:02:59.418 crypto/dpaa2_sec: not in enabled drivers build config 00:02:59.418 crypto/mvsam: not in enabled drivers build config 00:02:59.418 crypto/nitrox: not in enabled drivers build config 00:02:59.418 crypto/null: not in enabled drivers build config 00:02:59.418 crypto/octeontx: not in enabled drivers build config 00:02:59.418 crypto/openssl: not in enabled drivers build config 00:02:59.418 crypto/scheduler: not in enabled drivers build config 00:02:59.418 crypto/uadk: not in enabled drivers build config 00:02:59.418 crypto/virtio: not in enabled drivers build config 00:02:59.418 compress/octeontx: not in enabled drivers build config 00:02:59.418 compress/zlib: not in enabled drivers build config 00:02:59.418 regex/mlx5: not in enabled drivers build config 00:02:59.418 regex/cn9k: not in enabled drivers build config 00:02:59.418 vdpa/ifc: not in enabled drivers build config 00:02:59.418 vdpa/mlx5: not in enabled drivers build config 00:02:59.418 vdpa/sfc: not in enabled drivers build config 00:02:59.418 event/cnxk: not in enabled drivers build config 00:02:59.418 event/dlb2: not in enabled drivers build config 00:02:59.418 event/dpaa: not in enabled drivers build config 00:02:59.418 event/dpaa2: not in enabled drivers build config 00:02:59.418 event/dsw: not in enabled drivers build config 00:02:59.418 event/opdl: not in enabled drivers build config 00:02:59.418 event/skeleton: not in enabled drivers build config 00:02:59.418 event/sw: not in enabled drivers build config 00:02:59.418 event/octeontx: not in enabled drivers build config 00:02:59.418 baseband/acc: not in enabled drivers build config 00:02:59.418 baseband/fpga_5gnr_fec: not in enabled drivers build config 
00:02:59.418 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:59.418 baseband/la12xx: not in enabled drivers build config 00:02:59.418 baseband/null: not in enabled drivers build config 00:02:59.418 baseband/turbo_sw: not in enabled drivers build config 00:02:59.418 gpu/cuda: not in enabled drivers build config
00:02:59.418
00:02:59.418
00:02:59.418 Build targets in project: 355
00:02:59.418
00:02:59.418 DPDK 22.11.4
00:02:59.418
00:02:59.418 User defined options
00:02:59.418 libdir : lib
00:02:59.418 prefix : /var/jenkins/workspace/crypto-phy-autotest/dpdk/build
00:02:59.418 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow -I/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib
00:02:59.418 c_link_args : -L/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib
00:02:59.418 enable_docs : false
00:02:59.418 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,crypto,crypto/ipsec_mb,crypto/qat,compress/qat,common/qat,bus/auxiliary,common/mlx5,common/mlx5/linux,crypto/mlx5,compress,compress/isal,compress/qat,common/qat,compress/mlx5,
00:02:59.418 enable_kmods : false
00:02:59.418 machine : native
00:02:59.418 tests : false
00:02:59.418
00:02:59.418 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:59.418 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
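
For reference, the "User defined options" block above is the summary meson echoes back after configuration. Reconstructed from those options, the configure step corresponds roughly to the sketch below. This is not the literal command line the autobuild script ran: the build directory name (build-tmp) is inferred from the ninja -C path that follows, the source directory is assumed to be the dpdk checkout, and, per the WARNING, the script actually used the deprecated bare `meson [options]` spelling rather than the unambiguous `meson setup [options]`:

    meson setup build-tmp \
        --prefix=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow -I/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib' \
        -Dc_link_args='-L/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,crypto,crypto/ipsec_mb,crypto/qat,compress/qat,common/qat,bus/auxiliary,common/mlx5,common/mlx5/linux,crypto/mlx5,compress,compress/isal,compress/qat,common/qat,compress/mlx5 \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false

Restricting the build via enable_drivers to this minimal set is also why the long listing earlier reports every other driver as "not in enabled drivers build config".
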
00:02:59.684 02:08:49 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/crypto-phy-autotest/dpdk/build-tmp -j72
00:02:59.684 ninja: Entering directory `/var/jenkins/workspace/crypto-phy-autotest/dpdk/build-tmp'
00:02:59.684 [1/854] Generating lib/rte_kvargs_mingw with a custom command 00:02:59.684 [2/854] Generating lib/rte_kvargs_def with a custom command 00:02:59.684 [3/854] Generating lib/rte_telemetry_mingw with a custom command 00:02:59.684 [4/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:59.684 [5/854] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:59.946 [6/854] Generating lib/rte_telemetry_def with a custom command 00:02:59.946 [7/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:59.946 [8/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:59.946 [9/854] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:59.946 [10/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:59.946 [11/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:59.946 [12/854] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:59.946 [13/854] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:59.946 [14/854] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:59.946 [15/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:59.946 [16/854] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:59.946 [17/854] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:59.946 [18/854] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:59.946 [19/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:59.946 [20/854] Generating lib/rte_ring_mingw with a custom command 00:02:59.946 [21/854] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:59.946 [22/854] Generating lib/rte_ring_def with a custom command
00:02:59.946 [23/854] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:59.946 [24/854] Generating lib/rte_eal_mingw with a custom command 00:02:59.946 [25/854] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:59.946 [26/854] Generating lib/rte_eal_def with a custom command 00:02:59.946 [27/854] Generating lib/rte_rcu_def with a custom command 00:02:59.946 [28/854] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:59.946 [29/854] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:59.946 [30/854] Generating lib/rte_rcu_mingw with a custom command 00:02:59.946 [31/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:59.946 [32/854] Generating lib/rte_mempool_def with a custom command 00:02:59.946 [33/854] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:59.946 [34/854] Generating lib/rte_mempool_mingw with a custom command 00:02:59.946 [35/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:59.946 [36/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:59.946 [37/854] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:59.946 [38/854] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:59.946 [39/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:59.946 [40/854] Generating lib/rte_mbuf_mingw with a custom command 00:02:59.946 [41/854] Generating lib/rte_mbuf_def with a custom command 00:02:59.946 [42/854] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:59.946 [43/854] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:59.946 [44/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:59.946 [45/854] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:59.946 [46/854] Generating lib/rte_net_def with a custom command 00:02:59.946 [47/854] Generating lib/rte_meter_def with a custom command 00:02:59.946 [48/854] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:59.946 [49/854] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:59.946 [50/854] Generating lib/rte_meter_mingw with a custom command 00:02:59.946 [51/854] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:59.946 [52/854] Generating lib/rte_net_mingw with a custom command 00:02:59.946 [53/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:59.946 [54/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:59.946 [55/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:00.207 [56/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:00.207 [57/854] Generating lib/rte_ethdev_mingw with a custom command 00:03:00.207 [58/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:00.207 [59/854] Generating lib/rte_ethdev_def with a custom command 00:03:00.207 [60/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:00.207 [61/854] Generating lib/rte_pci_mingw with a custom command 00:03:00.207 [62/854] Generating lib/rte_pci_def with a custom command 00:03:00.207 [63/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:00.207 [64/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:00.207 [65/854] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:00.207 [66/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:03:00.207 [67/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:00.207 [68/854] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:00.207 [69/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:00.207 [70/854] Linking static target lib/librte_kvargs.a 00:03:00.207 [71/854] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:00.207 [72/854] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:00.207 [73/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:00.207 [74/854] Linking static target lib/librte_ring.a 00:03:00.207 [75/854] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:00.207 [76/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:00.207 [77/854] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:00.207 [78/854] Generating lib/rte_cmdline_mingw with a custom command 00:03:00.207 [79/854] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:00.207 [80/854] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:00.207 [81/854] Linking static target lib/librte_pci.a 00:03:00.207 [82/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:00.207 [83/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:00.207 [84/854] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:00.207 [85/854] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:00.207 [86/854] Linking static target lib/librte_meter.a 00:03:00.207 [87/854] Generating lib/rte_cmdline_def with a custom command 00:03:00.207 [88/854] Generating lib/rte_metrics_def with a custom command 00:03:00.207 [89/854] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:00.207 [90/854] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:00.207 [91/854] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:00.207 [92/854] Generating lib/rte_metrics_mingw with a custom command 00:03:00.207 [93/854] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:00.207 [94/854] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:00.207 [95/854] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:00.207 [96/854] Generating lib/rte_timer_mingw with a custom command 00:03:00.207 [97/854] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:00.207 [98/854] Generating lib/rte_hash_mingw with a custom command 00:03:00.468 [99/854] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:00.468 [100/854] Generating lib/rte_timer_def with a custom command 00:03:00.468 [101/854] Generating lib/rte_hash_def with a custom command 00:03:00.469 [102/854] Generating lib/rte_acl_mingw with a custom command 00:03:00.469 [103/854] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:00.469 [104/854] Generating lib/rte_acl_def with a custom command 00:03:00.469 [105/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:00.469 [106/854] Generating lib/rte_bitratestats_def with a custom command 00:03:00.469 [107/854] Compiling C object 
lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:00.469 [108/854] Generating lib/rte_bbdev_def with a custom command 00:03:00.469 [109/854] Generating lib/rte_bbdev_mingw with a custom command 00:03:00.469 [110/854] Generating lib/rte_bitratestats_mingw with a custom command 00:03:00.469 [111/854] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:00.469 [112/854] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:00.469 [113/854] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:00.469 [114/854] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:00.469 [115/854] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:00.469 [116/854] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:00.469 [117/854] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:00.469 [118/854] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.469 [119/854] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:00.730 [120/854] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:00.730 [121/854] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:00.730 [122/854] Generating lib/rte_bpf_def with a custom command 00:03:00.730 [123/854] Generating lib/rte_bpf_mingw with a custom command 00:03:00.730 [124/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:00.730 [125/854] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:00.730 [126/854] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.730 [127/854] Generating lib/rte_cfgfile_def with a custom command 00:03:00.730 [128/854] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.730 [129/854] Generating lib/rte_cfgfile_mingw with a custom command 00:03:00.730 [130/854] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:00.730 [131/854] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:00.730 [132/854] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.730 [133/854] Generating lib/rte_compressdev_def with a custom command 00:03:00.730 [134/854] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:00.730 [135/854] Generating lib/rte_compressdev_mingw with a custom command 00:03:00.730 [136/854] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:00.730 [137/854] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:00.730 [138/854] Generating lib/rte_cryptodev_def with a custom command 00:03:00.730 [139/854] Generating lib/rte_cryptodev_mingw with a custom command 00:03:00.730 [140/854] Linking target lib/librte_kvargs.so.23.0 00:03:00.730 [141/854] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:00.730 [142/854] Generating lib/rte_distributor_def with a custom command 00:03:00.730 [143/854] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:00.730 [144/854] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:00.730 [145/854] Generating lib/rte_distributor_mingw with a custom command 00:03:00.730 [146/854] Generating lib/rte_efd_def with a custom command 00:03:00.730 [147/854] Generating lib/rte_efd_mingw with a custom command 00:03:00.730 [148/854] Compiling C object 
lib/librte_net.a.p/net_rte_ether.c.o 00:03:00.730 [149/854] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:00.730 [150/854] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:00.730 [151/854] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:00.730 [152/854] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:00.730 [153/854] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:00.730 [154/854] Linking static target lib/librte_telemetry.a 00:03:00.993 [155/854] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:00.993 [156/854] Generating lib/rte_eventdev_def with a custom command 00:03:00.993 [157/854] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:00.993 [158/854] Generating lib/rte_eventdev_mingw with a custom command 00:03:00.993 [159/854] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:00.993 [160/854] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:00.993 [161/854] Generating lib/rte_gpudev_def with a custom command 00:03:00.993 [162/854] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:00.993 [163/854] Generating lib/rte_gpudev_mingw with a custom command 00:03:00.993 [164/854] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:00.993 [165/854] Generating lib/rte_gro_def with a custom command 00:03:00.993 [166/854] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:00.993 [167/854] Linking static target lib/librte_cmdline.a 00:03:00.993 [168/854] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:03:00.993 [169/854] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:00.993 [170/854] Linking static target lib/librte_net.a 00:03:00.993 [171/854] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:00.993 [172/854] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:00.993 [173/854] Generating lib/rte_gro_mingw with a custom command 00:03:00.993 [174/854] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:00.993 [175/854] Linking static target lib/librte_metrics.a 00:03:00.993 [176/854] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:00.993 [177/854] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:00.993 [178/854] Linking static target lib/librte_cfgfile.a 00:03:00.993 [179/854] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:00.993 [180/854] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:00.993 [181/854] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:00.993 [182/854] Generating lib/rte_gso_def with a custom command 00:03:00.993 [183/854] Linking static target lib/librte_timer.a 00:03:00.993 [184/854] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:00.993 [185/854] Generating lib/rte_gso_mingw with a custom command 00:03:01.255 [186/854] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:01.255 [187/854] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:01.255 [188/854] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:01.255 [189/854] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:01.255 [190/854] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:01.255 [191/854] Generating lib/rte_ip_frag_def with a custom 
command 00:03:01.255 [192/854] Generating lib/rte_ip_frag_mingw with a custom command 00:03:01.255 [193/854] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:01.255 [194/854] Linking static target lib/librte_bitratestats.a 00:03:01.255 [195/854] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:01.255 [196/854] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:01.255 [197/854] Generating lib/rte_jobstats_def with a custom command 00:03:01.255 [198/854] Generating lib/rte_jobstats_mingw with a custom command 00:03:01.255 [199/854] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:01.255 [200/854] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:01.255 [201/854] Generating lib/rte_latencystats_def with a custom command 00:03:01.255 [202/854] Generating lib/rte_latencystats_mingw with a custom command 00:03:01.255 [203/854] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.520 [204/854] Generating lib/rte_lpm_def with a custom command 00:03:01.520 [205/854] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:01.520 [206/854] Linking static target lib/librte_rcu.a 00:03:01.520 [207/854] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:01.520 [208/854] Generating lib/rte_lpm_mingw with a custom command 00:03:01.520 [209/854] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.520 [210/854] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:01.520 [211/854] Linking static target lib/librte_mempool.a 00:03:01.520 [212/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:01.520 [213/854] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:01.520 [214/854] Linking static target lib/librte_jobstats.a 00:03:01.520 [215/854] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:01.520 [216/854] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:01.520 [217/854] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:01.520 [218/854] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:01.520 [219/854] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:01.520 [220/854] Generating lib/rte_member_def with a custom command 00:03:01.520 [221/854] Linking target lib/librte_telemetry.so.23.0 00:03:01.520 [222/854] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:01.520 [223/854] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:01.520 [224/854] Generating lib/rte_member_mingw with a custom command 00:03:01.520 [225/854] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:01.520 [226/854] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:01.520 [227/854] Generating lib/rte_pcapng_mingw with a custom command 00:03:01.520 [228/854] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:01.520 [229/854] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.520 [230/854] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.520 [231/854] Generating lib/rte_pcapng_def with a custom command 00:03:01.780 [232/854] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.780 [233/854] Compiling C object 
lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:01.780 [234/854] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.780 [235/854] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:01.780 [236/854] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:01.780 [237/854] Linking static target lib/librte_compressdev.a 00:03:01.780 [238/854] Generating lib/rte_power_mingw with a custom command 00:03:01.780 [239/854] Generating lib/rte_power_def with a custom command 00:03:01.780 [240/854] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:01.780 [241/854] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:01.780 [242/854] Generating lib/rte_rawdev_def with a custom command 00:03:01.780 [243/854] Generating lib/rte_rawdev_mingw with a custom command 00:03:01.780 [244/854] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:01.780 [245/854] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:01.780 [246/854] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:01.780 [247/854] Generating lib/rte_regexdev_def with a custom command 00:03:01.780 [248/854] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:01.780 [249/854] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:01.780 [250/854] Generating lib/rte_dmadev_def with a custom command 00:03:01.780 [251/854] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:01.780 [252/854] Generating lib/rte_regexdev_mingw with a custom command 00:03:01.780 [253/854] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:01.780 [254/854] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:01.780 [255/854] Generating lib/rte_rib_def with a custom command 00:03:01.780 [256/854] Generating lib/rte_reorder_def with a custom command 00:03:01.780 [257/854] Generating lib/rte_reorder_mingw with a custom command 00:03:01.780 [258/854] Generating lib/rte_dmadev_mingw with a custom command 00:03:01.780 [259/854] Generating lib/rte_rib_mingw with a custom command 00:03:01.780 [260/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:01.780 [261/854] Generating lib/rte_sched_def with a custom command 00:03:02.042 [262/854] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:03:02.042 [263/854] Generating lib/rte_sched_mingw with a custom command 00:03:02.042 [264/854] Generating lib/rte_security_mingw with a custom command 00:03:02.042 [265/854] Linking static target lib/librte_eal.a 00:03:02.042 [266/854] Generating lib/rte_security_def with a custom command 00:03:02.042 [267/854] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:02.042 [268/854] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:03:02.042 [269/854] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:02.042 [270/854] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:02.042 [271/854] Generating lib/rte_stack_def with a custom command 00:03:02.042 [272/854] Generating lib/rte_stack_mingw with a custom command 00:03:02.042 [273/854] Linking static target lib/librte_gso.a 00:03:02.042 [274/854] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.042 [275/854] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:02.042 [276/854] 
Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.042 [277/854] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:02.042 [278/854] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:02.042 [279/854] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:02.042 [280/854] Generating lib/rte_vhost_def with a custom command 00:03:02.042 [281/854] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:02.042 [282/854] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:02.042 [283/854] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:02.042 [284/854] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:02.042 [285/854] Linking static target lib/librte_stack.a 00:03:02.042 [286/854] Generating lib/rte_vhost_mingw with a custom command 00:03:02.042 [287/854] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:02.042 [288/854] Linking static target lib/librte_gpudev.a 00:03:02.042 [289/854] Linking static target lib/librte_gro.a 00:03:02.303 [290/854] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:02.303 [291/854] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:02.303 [292/854] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:02.303 [293/854] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:02.303 [294/854] Linking static target lib/librte_distributor.a 00:03:02.303 [295/854] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:02.303 [296/854] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:02.303 [297/854] Generating lib/rte_ipsec_def with a custom command 00:03:02.303 [298/854] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:02.303 [299/854] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:03:02.303 [300/854] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:02.303 [301/854] Generating lib/rte_ipsec_mingw with a custom command 00:03:02.303 [302/854] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:02.303 [303/854] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.303 [304/854] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:02.303 [305/854] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:02.303 [306/854] Linking static target lib/librte_mbuf.a 00:03:02.303 [307/854] Generating lib/rte_fib_def with a custom command 00:03:02.303 [308/854] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:02.303 [309/854] Linking static target lib/librte_rawdev.a 00:03:02.303 [310/854] Generating lib/rte_fib_mingw with a custom command 00:03:02.303 [311/854] Linking static target lib/librte_latencystats.a 00:03:02.303 [312/854] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:02.303 [313/854] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:02.303 [314/854] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:03:02.303 [315/854] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:02.303 [316/854] Linking static target lib/member/libsketch_avx512_tmp.a 00:03:02.564 [317/854] Linking static target lib/librte_dmadev.a 00:03:02.564 [318/854] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 
00:03:02.564 [319/854] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.564 [320/854] Linking static target lib/librte_ip_frag.a 00:03:02.564 [321/854] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:02.564 [322/854] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.564 [323/854] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:02.564 [324/854] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:02.564 [325/854] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:02.564 [326/854] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.564 [327/854] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:02.564 [328/854] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:02.834 [329/854] Generating lib/rte_port_mingw with a custom command 00:03:02.834 [330/854] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.834 [331/854] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:02.834 [332/854] Linking static target lib/librte_regexdev.a 00:03:02.834 [333/854] Generating lib/rte_port_def with a custom command 00:03:02.834 [334/854] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.834 [335/854] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:02.834 [336/854] Generating lib/rte_pdump_mingw with a custom command 00:03:02.834 [337/854] Linking static target lib/librte_power.a 00:03:02.834 [338/854] Generating lib/rte_pdump_def with a custom command 00:03:02.834 [339/854] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:02.834 [340/854] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:02.834 [341/854] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.834 [342/854] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:02.834 [343/854] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:02.834 [344/854] Linking static target lib/librte_reorder.a 00:03:02.834 [345/854] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:02.834 [346/854] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:02.834 [347/854] Linking static target lib/librte_pcapng.a 00:03:02.834 [348/854] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:03.095 [349/854] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:03.095 [350/854] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:03.095 [351/854] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:03.095 [352/854] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:03.095 [353/854] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:03.095 [354/854] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:03.095 [355/854] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:03.095 [356/854] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.095 [357/854] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:03.095 [358/854] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:03.095 [359/854] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 00:03:03.095 [360/854] Linking static target lib/librte_security.a 00:03:03.095 [361/854] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:03.095 [362/854] Generating lib/rte_table_def with a custom command 00:03:03.095 [363/854] Generating lib/rte_table_mingw with a custom command 00:03:03.370 [364/854] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:03.370 [365/854] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:03.370 [366/854] Linking static target lib/librte_efd.a 00:03:03.370 [367/854] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.370 [368/854] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:03.370 [369/854] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:03.370 [370/854] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.370 [371/854] Generating lib/rte_pipeline_mingw with a custom command 00:03:03.370 [372/854] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:03.370 [373/854] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:03.370 [374/854] Linking static target lib/librte_lpm.a 00:03:03.370 [375/854] Generating lib/rte_pipeline_def with a custom command 00:03:03.370 [376/854] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:03.370 [377/854] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:03.370 [378/854] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.370 [379/854] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:03.370 [380/854] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:03.370 [381/854] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.370 [382/854] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:03.370 [383/854] Linking static target lib/librte_rib.a 00:03:03.370 [384/854] Generating lib/rte_graph_def with a custom command 00:03:03.370 [385/854] Generating lib/rte_graph_mingw with a custom command 00:03:03.370 [386/854] Linking static target lib/librte_bbdev.a 00:03:03.632 [387/854] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.632 [388/854] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.632 [389/854] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:03.632 [390/854] Linking static target lib/librte_ethdev.a 00:03:03.632 [391/854] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:03.632 [392/854] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:03.632 [393/854] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.632 [394/854] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:03.632 [395/854] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:03.632 [396/854] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:03.897 [397/854] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:03.897 [398/854] Generating lib/rte_node_def with a custom command 00:03:03.897 [399/854] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:03.897 [400/854] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:03.897 [401/854] Generating 
lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.897 [402/854] Generating lib/rte_node_mingw with a custom command 00:03:03.897 [403/854] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:03.897 [404/854] Generating drivers/rte_bus_auxiliary_def with a custom command 00:03:03.897 [405/854] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:03.897 [406/854] Generating drivers/rte_bus_auxiliary_mingw with a custom command 00:03:03.897 [407/854] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:03.897 [408/854] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:03.897 [409/854] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:03.897 [410/854] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_params.c.o 00:03:03.897 [411/854] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:03.897 [412/854] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:03.897 [413/854] Generating drivers/rte_bus_pci_def with a custom command 00:03:03.897 [414/854] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:03.897 [415/854] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:03.897 [416/854] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:03.897 [417/854] Linking static target lib/librte_bpf.a 00:03:03.897 [418/854] Generating drivers/rte_bus_vdev_def with a custom command 00:03:03.897 [419/854] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:03.897 [420/854] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:03.898 [421/854] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.898 [422/854] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:03.898 [423/854] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:03.898 [424/854] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:03.898 [425/854] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.156 [426/854] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_common.c.o 00:03:04.156 [427/854] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.156 [428/854] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:04.156 [429/854] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:04.156 [430/854] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:04.156 [431/854] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:04.156 [432/854] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:04.156 [433/854] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:04.156 [434/854] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:04.156 [435/854] Generating drivers/rte_common_mlx5_def with a custom command 00:03:04.156 [436/854] Generating drivers/rte_common_mlx5_mingw with a custom command 00:03:04.156 [437/854] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:04.156 [438/854] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:04.156 [439/854] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:04.156 [440/854] Compiling C object 
drivers/libtmp_rte_common_qat.a.p/common_qat_qat_logs.c.o 00:03:04.156 [441/854] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:04.156 [442/854] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:04.156 [443/854] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.156 [444/854] Linking static target lib/librte_fib.a 00:03:04.156 [445/854] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:04.420 [446/854] Linking static target lib/librte_pdump.a 00:03:04.420 [447/854] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:04.420 [448/854] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:04.420 [449/854] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:04.420 [450/854] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:04.420 [451/854] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:04.420 [452/854] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:04.420 [453/854] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:04.420 [454/854] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:04.420 [455/854] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_linux_auxiliary.c.o 00:03:04.420 [456/854] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:04.420 [457/854] Linking static target drivers/libtmp_rte_bus_auxiliary.a 00:03:04.420 [458/854] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:04.420 [459/854] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:04.420 [460/854] Linking static target lib/librte_graph.a 00:03:04.420 [461/854] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:04.686 [462/854] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.686 [463/854] Linking static target lib/librte_cryptodev.a 00:03:04.686 [464/854] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.686 [465/854] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:04.686 [466/854] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:04.686 [467/854] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.686 [468/854] Linking static target lib/librte_member.a 00:03:04.686 [469/854] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:04.686 [470/854] Generating drivers/rte_common_qat_mingw with a custom command 00:03:04.686 [471/854] Generating drivers/rte_common_qat_def with a custom command 00:03:04.686 [472/854] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_glue.c.o 00:03:04.686 [473/854] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:04.686 [474/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_common.c.o 00:03:04.686 [475/854] Generating drivers/rte_mempool_ring_mingw with a custom command 00:03:04.686 [476/854] Generating drivers/rte_mempool_ring_def with a custom command 00:03:04.686 [477/854] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:04.686 [478/854] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.686 [479/854] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:04.686 [480/854] Linking static target 
drivers/librte_bus_vdev.a 00:03:04.686 [481/854] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:04.949 [482/854] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.949 [483/854] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mp.c.o 00:03:04.949 [484/854] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:04.949 [485/854] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:04.949 [486/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen2.c.o 00:03:04.949 [487/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen3.c.o 00:03:04.949 [488/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen3.c.o 00:03:04.949 [489/854] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_malloc.c.o 00:03:04.949 [490/854] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_pci.c.o 00:03:04.949 [491/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_pf2vf.c.o 00:03:04.949 [492/854] Generating drivers/rte_bus_auxiliary.pmd.c with a custom command 00:03:04.949 [493/854] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:04.949 [494/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen2.c.o 00:03:04.949 [495/854] Linking static target lib/librte_sched.a 00:03:04.949 [496/854] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_auxiliary.c.o 00:03:04.949 [497/854] Compiling C object drivers/librte_bus_auxiliary.a.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:03:04.949 [498/854] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_verbs.c.o 00:03:04.949 [499/854] Compiling C object drivers/librte_bus_auxiliary.so.23.0.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:03:04.949 [500/854] Linking static target drivers/librte_bus_auxiliary.a 00:03:04.949 [501/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_device.c.o 00:03:04.949 [502/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen1.c.o 00:03:05.209 [503/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen1.c.o 00:03:05.209 [504/854] Generating drivers/rte_net_i40e_def with a custom command 00:03:05.209 [505/854] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common.c.o 00:03:05.209 [506/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen4.c.o 00:03:05.209 [507/854] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:05.209 [508/854] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:05.209 [509/854] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_devx.c.o 00:03:05.209 [510/854] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:05.209 [511/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen4.c.o 00:03:05.209 [512/854] Generating drivers/rte_net_i40e_mingw with a custom command 00:03:05.209 [513/854] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:05.209 [514/854] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:05.209 [515/854] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:05.209 [516/854] Linking static target lib/librte_table.a 00:03:05.209 [517/854] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.209 [518/854] Generating drivers/rte_crypto_ipsec_mb_def with a custom command 00:03:05.209 [519/854] Generating drivers/rte_crypto_ipsec_mb_mingw with a custom command 00:03:05.209 [520/854] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.474 [521/854] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:05.474 [522/854] Generating drivers/rte_crypto_mlx5_def with a custom command 00:03:05.474 [523/854] Generating drivers/rte_crypto_mlx5_mingw with a custom command 00:03:05.474 [524/854] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:05.474 [525/854] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_os.c.o 00:03:05.474 [526/854] Generating drivers/rte_bus_auxiliary.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.474 [527/854] Generating drivers/rte_compress_isal_def with a custom command 00:03:05.474 [528/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_asym_pmd_gen1.c.o 00:03:05.474 [529/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_crypto.c.o 00:03:05.474 [530/854] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:05.474 [531/854] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:05.474 [532/854] Generating drivers/rte_compress_isal_mingw with a custom command 00:03:05.474 [533/854] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:03:05.474 [534/854] Linking static target lib/librte_hash.a 00:03:05.474 [535/854] Generating drivers/rte_compress_mlx5_def with a custom command 00:03:05.474 [536/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp_pmd.c.o 00:03:05.474 [537/854] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:05.474 [538/854] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:05.474 [539/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym.c.o 00:03:05.474 [540/854] Linking static target lib/librte_ipsec.a 00:03:05.474 [541/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen2.c.o 00:03:05.474 [542/854] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:05.474 [543/854] Generating drivers/rte_compress_mlx5_mingw with a custom command 00:03:05.742 [544/854] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:05.742 [545/854] Linking static target lib/librte_eventdev.a 00:03:05.742 [546/854] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:05.742 [547/854] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:05.742 [548/854] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:05.742 [549/854] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:05.742 [550/854] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:05.742 [551/854] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:05.742 [552/854] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.742 [553/854] Compiling C object 
drivers/libtmp_rte_common_qat.a.p/common_qat_qat_qp.c.o 00:03:05.742 [554/854] Linking static target drivers/librte_bus_pci.a 00:03:06.004 [555/854] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:06.004 [556/854] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:06.004 [557/854] Linking static target lib/librte_node.a 00:03:06.004 [558/854] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_nl.c.o 00:03:06.004 [559/854] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:06.005 [560/854] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:06.005 [561/854] Linking static target lib/librte_port.a 00:03:06.005 [562/854] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:06.005 [563/854] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_ops.c.o 00:03:06.267 [564/854] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:06.267 [565/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp.c.o 00:03:06.267 [566/854] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.267 [567/854] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.267 [568/854] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_dek.c.o 00:03:06.267 [569/854] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:06.267 [570/854] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:06.531 [571/854] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:06.531 [572/854] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:06.531 [573/854] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.531 [574/854] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd_ops.c.o 00:03:06.531 [575/854] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:06.531 [576/854] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.531 [577/854] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto.c.o 00:03:06.531 [578/854] Linking static target drivers/libtmp_rte_crypto_mlx5.a 00:03:06.531 [579/854] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:06.531 [580/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym_session.c.o 00:03:06.797 [581/854] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_private.c.o 00:03:06.797 [582/854] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:06.797 [583/854] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mr.c.o 00:03:06.797 [584/854] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_chacha_poly.c.o 00:03:06.797 [585/854] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:06.797 [586/854] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.797 [587/854] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:06.797 [588/854] Compiling C object 
drivers/libtmp_rte_compress_mlx5.a.p/compress_mlx5_mlx5_compress.c.o 00:03:06.797 [589/854] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:06.797 [590/854] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:06.797 [591/854] Linking static target drivers/libtmp_rte_compress_mlx5.a 00:03:06.797 [592/854] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:07.066 [593/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen4.c.o 00:03:07.066 [594/854] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:07.066 [595/854] Generating drivers/rte_crypto_mlx5.pmd.c with a custom command 00:03:07.066 [596/854] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_utils.c.o 00:03:07.066 [597/854] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:07.066 [598/854] Compiling C object drivers/librte_crypto_mlx5.a.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:03:07.066 [599/854] Linking static target drivers/librte_crypto_mlx5.a 00:03:07.066 [600/854] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.066 [601/854] Compiling C object drivers/librte_crypto_mlx5.so.23.0.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:03:07.066 [602/854] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:07.066 [603/854] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:07.066 [604/854] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.066 [605/854] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:07.342 [606/854] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:07.342 [607/854] Linking static target drivers/librte_mempool_ring.a 00:03:07.342 [608/854] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:07.342 [609/854] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:07.342 [610/854] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:07.342 [611/854] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:07.342 [612/854] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_zuc.c.o 00:03:07.342 [613/854] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_gcm.c.o 00:03:07.342 [614/854] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:07.342 [615/854] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:07.342 [616/854] Generating drivers/rte_compress_mlx5.pmd.c with a custom command 00:03:07.342 [617/854] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:07.342 [618/854] Compiling C object drivers/librte_compress_mlx5.so.23.0.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:03:07.342 [619/854] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:07.342 [620/854] Compiling C object drivers/librte_compress_mlx5.a.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:03:07.342 [621/854] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:07.342 [622/854] Linking static target drivers/librte_compress_mlx5.a 00:03:07.605 [623/854] Compiling C object 
drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_kasumi.c.o 00:03:07.605 [624/854] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:07.605 [625/854] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:07.605 [626/854] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:07.605 [627/854] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:07.605 [628/854] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:07.605 [629/854] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:07.605 [630/854] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:07.605 [631/854] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:07.605 [632/854] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:07.605 [633/854] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:07.605 [634/854] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:07.867 [635/854] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:07.867 [636/854] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:07.867 [637/854] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:07.867 [638/854] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:07.867 [639/854] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:07.867 [640/854] Linking static target lib/librte_acl.a 00:03:07.867 [641/854] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd.c.o 00:03:07.867 [642/854] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:07.867 [643/854] Linking static target drivers/libtmp_rte_compress_isal.a 00:03:07.867 [644/854] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:07.867 [645/854] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:07.867 [646/854] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_devx_cmds.c.o 00:03:07.867 [647/854] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:07.867 [648/854] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:07.867 [649/854] Linking static target drivers/libtmp_rte_common_mlx5.a 00:03:07.867 [650/854] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:08.126 [651/854] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:08.126 [652/854] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:08.126 [653/854] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_mb.c.o 00:03:08.126 [654/854] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:08.126 [655/854] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:08.126 [656/854] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:08.126 [657/854] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_snow3g.c.o 00:03:08.126 [658/854] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:08.126 [659/854] 
Linking static target drivers/libtmp_rte_crypto_ipsec_mb.a 00:03:08.126 [660/854] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:08.126 [661/854] Generating drivers/rte_compress_isal.pmd.c with a custom command 00:03:08.126 [662/854] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.126 [663/854] Compiling C object drivers/librte_compress_isal.so.23.0.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:03:08.126 [664/854] Compiling C object drivers/librte_compress_isal.a.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:03:08.126 [665/854] Linking static target drivers/librte_compress_isal.a 00:03:08.385 [666/854] Generating drivers/rte_common_mlx5.pmd.c with a custom command 00:03:08.385 [667/854] Compiling C object drivers/librte_common_mlx5.a.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:03:08.385 [668/854] Compiling C object drivers/librte_common_mlx5.so.23.0.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:03:08.385 [669/854] Linking static target drivers/librte_common_mlx5.a 00:03:08.385 [670/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_sym_pmd_gen1.c.o 00:03:08.385 [671/854] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:08.385 [672/854] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:08.385 [673/854] Generating drivers/rte_crypto_ipsec_mb.pmd.c with a custom command 00:03:08.645 [674/854] Compiling C object drivers/librte_crypto_ipsec_mb.so.23.0.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:03:08.645 [675/854] Compiling C object drivers/librte_crypto_ipsec_mb.a.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:03:08.645 [676/854] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:08.645 [677/854] Linking static target drivers/librte_crypto_ipsec_mb.a 00:03:08.645 [678/854] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:08.645 [679/854] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:08.645 [680/854] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:08.645 [681/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen3.c.o 00:03:08.645 [682/854] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:08.645 [683/854] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:08.645 [684/854] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:08.645 [685/854] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:08.645 [686/854] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:08.645 [687/854] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:08.645 [688/854] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:08.645 [689/854] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:08.645 [690/854] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:08.645 [691/854] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:08.645 [692/854] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:08.645 [693/854] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:08.645 [694/854] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.646 
[695/854] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:08.646 [696/854] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:08.646 [697/854] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:08.905 [698/854] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:08.905 [699/854] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:08.905 [700/854] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:08.905 [701/854] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:08.905 [702/854] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:08.905 [703/854] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:08.905 [704/854] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:08.905 [705/854] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:08.905 [706/854] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:08.905 [707/854] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:09.163 [708/854] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.163 [709/854] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:09.422 [710/854] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:09.422 [711/854] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:09.724 [712/854] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:09.724 [713/854] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:09.724 [714/854] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:10.006 [715/854] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:10.270 [716/854] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:10.270 [717/854] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:10.270 [718/854] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:10.836 [719/854] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:11.402 [720/854] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:11.402 [721/854] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:11.402 [722/854] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:11.660 [723/854] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:11.660 [724/854] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:11.918 [725/854] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:11.918 [726/854] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:11.918 [727/854] Linking static target drivers/librte_net_i40e.a 00:03:12.176 [728/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_asym.c.o 00:03:12.176 [729/854] Linking static target drivers/libtmp_rte_common_qat.a 00:03:12.435 [730/854] Generating drivers/rte_common_qat.pmd.c with a custom command 00:03:12.435 [731/854] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:12.435 [732/854] Compiling C object drivers/librte_common_qat.a.p/meson-generated_.._rte_common_qat.pmd.c.o 00:03:12.692 [733/854] Compiling C object drivers/librte_common_qat.so.23.0.p/meson-generated_.._rte_common_qat.pmd.c.o 00:03:12.692 
[734/854] Linking static target drivers/librte_common_qat.a 00:03:12.951 [735/854] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.951 [736/854] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.951 [737/854] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:14.326 [738/854] Generating drivers/rte_common_mlx5.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.860 [739/854] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.860 [740/854] Linking target lib/librte_eal.so.23.0 00:03:16.860 [741/854] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:03:16.860 [742/854] Linking target lib/librte_meter.so.23.0 00:03:16.860 [743/854] Linking target lib/librte_pci.so.23.0 00:03:16.860 [744/854] Linking target lib/librte_timer.so.23.0 00:03:16.860 [745/854] Linking target lib/librte_rawdev.so.23.0 00:03:16.860 [746/854] Linking target lib/librte_jobstats.so.23.0 00:03:16.860 [747/854] Linking target lib/librte_ring.so.23.0 00:03:16.860 [748/854] Linking target lib/librte_stack.so.23.0 00:03:16.860 [749/854] Linking target lib/librte_cfgfile.so.23.0 00:03:17.120 [750/854] Linking target drivers/librte_bus_auxiliary.so.23.0 00:03:17.120 [751/854] Linking target lib/librte_dmadev.so.23.0 00:03:17.120 [752/854] Linking target lib/librte_graph.so.23.0 00:03:17.120 [753/854] Linking target lib/librte_acl.so.23.0 00:03:17.120 [754/854] Linking target drivers/librte_bus_vdev.so.23.0 00:03:17.120 [755/854] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:03:17.120 [756/854] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:03:17.120 [757/854] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:03:17.120 [758/854] Generating symbol file drivers/librte_bus_auxiliary.so.23.0.p/librte_bus_auxiliary.so.23.0.symbols 00:03:17.120 [759/854] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:03:17.120 [760/854] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:03:17.120 [761/854] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:03:17.120 [762/854] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:17.120 [763/854] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:17.120 [764/854] Linking target lib/librte_rcu.so.23.0 00:03:17.120 [765/854] Linking target lib/librte_mempool.so.23.0 00:03:17.120 [766/854] Linking target drivers/librte_bus_pci.so.23.0 00:03:17.380 [767/854] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:03:17.380 [768/854] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:03:17.380 [769/854] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:17.380 [770/854] Linking target lib/librte_rib.so.23.0 00:03:17.380 [771/854] Linking target drivers/librte_mempool_ring.so.23.0 00:03:17.380 [772/854] Linking target lib/librte_mbuf.so.23.0 00:03:17.639 [773/854] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:03:17.639 [774/854] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:03:17.639 [775/854] Linking target lib/librte_fib.so.23.0 00:03:17.639 [776/854] 
Linking target lib/librte_bbdev.so.23.0 00:03:17.639 [777/854] Linking target lib/librte_net.so.23.0 00:03:17.639 [778/854] Linking target lib/librte_gpudev.so.23.0 00:03:17.639 [779/854] Linking target lib/librte_compressdev.so.23.0 00:03:17.639 [780/854] Linking target lib/librte_distributor.so.23.0 00:03:17.639 [781/854] Linking target lib/librte_reorder.so.23.0 00:03:17.639 [782/854] Linking target lib/librte_regexdev.so.23.0 00:03:17.639 [783/854] Linking target lib/librte_cryptodev.so.23.0 00:03:17.639 [784/854] Linking target lib/librte_sched.so.23.0 00:03:17.898 [785/854] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:03:17.898 [786/854] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:03:17.898 [787/854] Generating symbol file lib/librte_compressdev.so.23.0.p/librte_compressdev.so.23.0.symbols 00:03:17.898 [788/854] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:03:17.898 [789/854] Linking target drivers/librte_compress_isal.so.23.0 00:03:17.898 [790/854] Linking target lib/librte_hash.so.23.0 00:03:17.898 [791/854] Linking target lib/librte_security.so.23.0 00:03:17.898 [792/854] Linking target lib/librte_cmdline.so.23.0 00:03:17.898 [793/854] Linking target lib/librte_ethdev.so.23.0 00:03:18.157 [794/854] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:03:18.157 [795/854] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:03:18.157 [796/854] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:03:18.157 [797/854] Linking target lib/librte_efd.so.23.0 00:03:18.157 [798/854] Linking target lib/librte_member.so.23.0 00:03:18.157 [799/854] Linking target drivers/librte_common_mlx5.so.23.0 00:03:18.157 [800/854] Linking target lib/librte_ipsec.so.23.0 00:03:18.157 [801/854] Linking target lib/librte_lpm.so.23.0 00:03:18.157 [802/854] Linking target drivers/librte_crypto_ipsec_mb.so.23.0 00:03:18.157 [803/854] Linking target lib/librte_metrics.so.23.0 00:03:18.157 [804/854] Linking target lib/librte_gro.so.23.0 00:03:18.157 [805/854] Linking target lib/librte_pcapng.so.23.0 00:03:18.157 [806/854] Linking target lib/librte_gso.so.23.0 00:03:18.157 [807/854] Linking target lib/librte_bpf.so.23.0 00:03:18.157 [808/854] Linking target lib/librte_ip_frag.so.23.0 00:03:18.157 [809/854] Linking target lib/librte_power.so.23.0 00:03:18.157 [810/854] Linking target lib/librte_eventdev.so.23.0 00:03:18.157 [811/854] Linking target drivers/librte_common_qat.so.23.0 00:03:18.157 [812/854] Linking target drivers/librte_net_i40e.so.23.0 00:03:18.416 [813/854] Generating symbol file drivers/librte_common_mlx5.so.23.0.p/librte_common_mlx5.so.23.0.symbols 00:03:18.416 [814/854] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:03:18.416 [815/854] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:03:18.416 [816/854] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:03:18.416 [817/854] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:03:18.416 [818/854] Linking target drivers/librte_compress_mlx5.so.23.0 00:03:18.416 [819/854] Linking target drivers/librte_crypto_mlx5.so.23.0 00:03:18.416 [820/854] Linking target lib/librte_bitratestats.so.23.0 00:03:18.416 [821/854] Linking target lib/librte_latencystats.so.23.0 00:03:18.416 [822/854] Generating symbol file 
lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:03:18.416 [823/854] Linking target lib/librte_port.so.23.0 00:03:18.416 [824/854] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:03:18.416 [825/854] Linking target lib/librte_pdump.so.23.0 00:03:18.416 [826/854] Linking target lib/librte_node.so.23.0 00:03:18.676 [827/854] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:03:18.676 [828/854] Linking target lib/librte_table.so.23.0 00:03:18.935 [829/854] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:03:20.313 [830/854] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:20.313 [831/854] Linking static target lib/librte_pipeline.a 00:03:21.252 [832/854] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:21.252 [833/854] Linking static target lib/librte_vhost.a 00:03:21.820 [834/854] Linking target app/dpdk-test-acl 00:03:21.820 [835/854] Linking target app/dpdk-proc-info 00:03:21.820 [836/854] Linking target app/dpdk-testpmd 00:03:21.820 [837/854] Linking target app/dpdk-pdump 00:03:21.820 [838/854] Linking target app/dpdk-test-cmdline 00:03:21.820 [839/854] Linking target app/dpdk-test-gpudev 00:03:21.820 [840/854] Linking target app/dpdk-test-fib 00:03:21.820 [841/854] Linking target app/dpdk-test-flow-perf 00:03:21.820 [842/854] Linking target app/dpdk-test-compress-perf 00:03:21.820 [843/854] Linking target app/dpdk-test-pipeline 00:03:21.820 [844/854] Linking target app/dpdk-test-sad 00:03:21.820 [845/854] Linking target app/dpdk-test-regex 00:03:21.820 [846/854] Linking target app/dpdk-test-crypto-perf 00:03:21.820 [847/854] Linking target app/dpdk-test-security-perf 00:03:21.820 [848/854] Linking target app/dpdk-test-bbdev 00:03:21.820 [849/854] Linking target app/dpdk-test-eventdev 00:03:22.758 [850/854] Linking target app/dpdk-dumpcap 00:03:23.697 [851/854] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.697 [852/854] Linking target lib/librte_vhost.so.23.0 00:03:25.603 [853/854] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.603 [854/854] Linking target lib/librte_pipeline.so.23.0 00:03:25.603 02:09:15 build_native_dpdk -- common/autobuild_common.sh@188 -- $ uname -s 00:03:25.603 02:09:15 build_native_dpdk -- common/autobuild_common.sh@188 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:25.603 02:09:15 build_native_dpdk -- common/autobuild_common.sh@201 -- $ ninja -C /var/jenkins/workspace/crypto-phy-autotest/dpdk/build-tmp -j72 install 00:03:25.603 ninja: Entering directory `/var/jenkins/workspace/crypto-phy-autotest/dpdk/build-tmp' 00:03:25.603 [0/1] Installing files. 
00:03:26.177 Installing subdir /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 
00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:26.178 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:26.179 Installing 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:26.179 Installing 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:26.179 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:26.442 Installing 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:26.442 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.443 
Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:26.443 Installing 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:26.443 Installing 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.443 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/sad.h 
to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.444 Installing 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/action.h to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.444 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.445 Installing 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/bpf/t1.c to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/Makefile to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
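A minimal usage sketch for the .cli/.spec scripts above (assumptions, not taken from this log: that the example binaries were built alongside this install, that dpdk-pipeline is the resulting binary name, and that -s selects the CLI script, as in upstream DPDK's examples/pipeline):
  cd /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
  # run the L2 forwarding pipeline script installed above on lcores 0-1
  sudo dpdk-pipeline -l 0-1 -- -s examples/l2fwd.cli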
00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.445 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.446 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.446 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.446 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.446 
Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.446 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.446 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.446 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.446 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:26.446 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:26.446 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:26.446 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:26.446 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:26.446 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:26.446 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:26.446 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:26.446 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:26.446 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:26.446 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:26.446 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:26.446 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:26.446 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:26.446 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:26.446 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:26.446 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:26.446 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:26.446 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:26.446 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:26.446 Installing lib/librte_kvargs.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.446 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.446 Installing lib/librte_telemetry.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.446 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.446 Installing lib/librte_eal.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.446 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.446 Installing lib/librte_ring.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.446 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.446 Installing lib/librte_rcu.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.446 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.446 Installing lib/librte_mempool.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.446 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.446 Installing lib/librte_mbuf.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.446 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.446 Installing lib/librte_net.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.446 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.446 Installing lib/librte_meter.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.446 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.446 Installing lib/librte_ethdev.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.446 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.446 Installing lib/librte_pci.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.446 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.446 Installing 
lib/librte_cmdline.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.446 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.446 Installing lib/librte_metrics.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.446 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.446 Installing lib/librte_hash.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.446 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_timer.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_acl.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_bbdev.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_bpf.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_compressdev.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_distributor.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_efd.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_eventdev.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_gpudev.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_gro.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_gso.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 
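Each library above lands twice, once as a static archive (.a) and once as a versioned shared object (.so.23.0), so consumers can link either way. A minimal sketch of building against this just-installed tree, assuming the matching libdpdk.pc was installed under build/lib/pkgconfig by the same prefix (hello_dpdk.c is a hypothetical consumer, not part of this log):
  export PKG_CONFIG_PATH=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/pkgconfig
  # pkg-config expands to the -I/-L/-l flags for the libraries installed above
  cc -O3 hello_dpdk.c -o hello_dpdk $(pkg-config --cflags --libs libdpdk)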
00:03:26.707 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_jobstats.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_latencystats.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_lpm.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_member.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_pcapng.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_power.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_rawdev.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_regexdev.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_dmadev.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_rib.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_reorder.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_sched.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_security.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_stack.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_vhost.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_vhost.so.23.0 to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_ipsec.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_fib.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_port.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_pdump.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_table.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_pipeline.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_graph.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_node.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing drivers/librte_bus_auxiliary.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing drivers/librte_bus_auxiliary.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:26.707 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:26.707 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:26.707 Installing drivers/librte_common_mlx5.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:26.707 Installing drivers/librte_common_mlx5.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:27.280 Installing drivers/librte_common_qat.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:27.280 Installing drivers/librte_common_qat.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:27.280 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:27.280 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:27.280 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:27.280 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:27.280 Installing drivers/librte_crypto_ipsec_mb.a to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:27.280 Installing drivers/librte_crypto_ipsec_mb.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:27.280 Installing drivers/librte_crypto_mlx5.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:27.280 Installing drivers/librte_crypto_mlx5.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:27.280 Installing drivers/librte_compress_isal.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:27.280 Installing drivers/librte_compress_isal.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:27.280 Installing drivers/librte_compress_mlx5.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:27.280 Installing drivers/librte_compress_mlx5.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:27.280 Installing app/dpdk-dumpcap to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:27.280 Installing app/dpdk-pdump to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:27.280 Installing app/dpdk-proc-info to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:27.280 Installing app/dpdk-test-acl to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:27.280 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:27.280 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:27.280 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:27.280 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:27.280 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:27.280 Installing app/dpdk-test-fib to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:27.280 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:27.280 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:27.280 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:27.280 Installing app/dpdk-testpmd to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:27.280 Installing app/dpdk-test-regex to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:27.280 Installing app/dpdk-test-sad to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:27.280 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:27.280 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.280 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.280 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.280 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include/generic 00:03:27.280 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include/generic 00:03:27.280 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include/generic 00:03:27.280 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include/generic 00:03:27.280 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include/generic 00:03:27.280 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include/generic 00:03:27.280 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include/generic 00:03:27.280 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include/generic 00:03:27.280 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include/generic 00:03:27.280 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include/generic 00:03:27.280 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include/generic 00:03:27.280 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include/generic 00:03:27.280 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.280 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.280 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.280 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.280 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.280 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.280 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.281 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/gso/rte_gso.h to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.282 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 
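For context on this long run of entries: it is meson's install step copying DPDK's public rte_*.h headers into the job-local prefix dpdk/build rather than a system location, so the SPDK build configured later in this log can compile against them. A minimal sketch of consuming such a prefix from a standalone program, assuming the libdpdk.pc pkg-config file that is installed further down in this log (hello_dpdk.c is a hypothetical source file):

  # Point pkg-config at the job-local prefix, then let it emit the
  # include and link flags for the installed DPDK.
  export PKG_CONFIG_PATH=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/pkgconfig
  cc hello_dpdk.c -o hello_dpdk $(pkg-config --cflags --libs libdpdk)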
00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 
Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.283 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.284 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.284 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.284 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.284 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.284 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:27.284 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:27.284 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:27.284 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:27.284 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.284 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/pkgconfig 00:03:27.284 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/pkgconfig 00:03:27.284 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:03:27.284 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:27.284 Installing symlink pointing to librte_telemetry.so.23.0 to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:03:27.284 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:27.284 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:03:27.284 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:27.284 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:03:27.284 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:27.284 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:03:27.284 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:27.284 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:03:27.284 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:27.284 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:03:27.284 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:27.284 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_net.so.23 00:03:27.284 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_net.so 00:03:27.284 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:03:27.284 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:27.284 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:03:27.284 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:27.284 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:03:27.284 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:27.284 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:03:27.284 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:27.284 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:03:27.284 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:27.284 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:03:27.284 Installing symlink pointing to 
librte_hash.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:27.284 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:03:27.284 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:27.284 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:03:27.284 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:27.284 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:03:27.284 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:27.284 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:03:27.284 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:27.284 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:03:27.284 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:27.284 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:03:27.284 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:27.284 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:03:27.284 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:27.284 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:03:27.284 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:27.284 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:03:27.284 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:27.284 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:03:27.284 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:27.284 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:03:27.284 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:27.284 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:03:27.284 Installing symlink pointing to librte_gpudev.so.23 to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:27.284 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:03:27.284 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:27.284 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:03:27.284 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:27.284 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:03:27.284 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:27.284 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:03:27.284 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:27.284 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:03:27.284 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:27.284 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:03:27.284 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:27.284 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_member.so.23 00:03:27.284 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_member.so 00:03:27.284 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:03:27.284 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:27.284 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_power.so.23 00:03:27.285 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_power.so 00:03:27.285 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:03:27.285 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:27.285 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:03:27.285 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:27.285 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:03:27.285 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_dmadev.so 
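The "Installing symlink" entries above set up the standard shared-library version chain for each DPDK library: the real file installed earlier is librte_<name>.so.23.0, the .so.23 link carries the SONAME that executables resolve at run time, and the bare .so link is what the linker uses at build time (-lrte_<name>). A hand-rolled equivalent, shown only to illustrate the layout (librte_eal picked as an arbitrary example):

  cd /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib
  ln -sf librte_eal.so.23.0 librte_eal.so.23   # runtime link, matches the SONAME
  ln -sf librte_eal.so.23 librte_eal.so        # development link used at link time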
00:03:27.285 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:03:27.285 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:27.285 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:03:27.285 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:27.285 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:03:27.285 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:27.285 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_security.so.23 00:03:27.285 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_security.so 00:03:27.285 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:03:27.285 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:27.285 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:03:27.285 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:27.285 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:03:27.285 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:27.285 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:03:27.285 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:27.285 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_port.so.23 00:03:27.285 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_port.so 00:03:27.285 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:03:27.285 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:27.285 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_table.so.23 00:03:27.285 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_table.so 00:03:27.285 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:03:27.285 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:27.285 Installing symlink pointing to librte_graph.so.23.0 to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:03:27.285 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:27.285 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_node.so.23 00:03:27.285 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_node.so 00:03:27.285 Installing symlink pointing to librte_bus_auxiliary.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_auxiliary.so.23 00:03:27.285 Installing symlink pointing to librte_bus_auxiliary.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_auxiliary.so 00:03:27.285 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:27.285 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:27.285 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:27.285 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:27.285 Installing symlink pointing to librte_common_mlx5.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_common_mlx5.so.23 00:03:27.285 Installing symlink pointing to librte_common_mlx5.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_common_mlx5.so 00:03:27.285 Installing symlink pointing to librte_common_qat.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_common_qat.so.23 00:03:27.285 Installing symlink pointing to librte_common_qat.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_common_qat.so 00:03:27.285 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:27.285 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:27.285 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:27.285 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:27.285 './librte_bus_auxiliary.so' -> 'dpdk/pmds-23.0/librte_bus_auxiliary.so' 00:03:27.285 './librte_bus_auxiliary.so.23' -> 'dpdk/pmds-23.0/librte_bus_auxiliary.so.23' 00:03:27.285 './librte_bus_auxiliary.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_auxiliary.so.23.0' 00:03:27.285 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:27.285 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:27.285 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:27.285 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:27.285 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:27.285 './librte_bus_vdev.so.23.0' -> 
'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:27.285 './librte_common_mlx5.so' -> 'dpdk/pmds-23.0/librte_common_mlx5.so' 00:03:27.285 './librte_common_mlx5.so.23' -> 'dpdk/pmds-23.0/librte_common_mlx5.so.23' 00:03:27.285 './librte_common_mlx5.so.23.0' -> 'dpdk/pmds-23.0/librte_common_mlx5.so.23.0' 00:03:27.285 './librte_common_qat.so' -> 'dpdk/pmds-23.0/librte_common_qat.so' 00:03:27.285 './librte_common_qat.so.23' -> 'dpdk/pmds-23.0/librte_common_qat.so.23' 00:03:27.285 './librte_common_qat.so.23.0' -> 'dpdk/pmds-23.0/librte_common_qat.so.23.0' 00:03:27.285 './librte_compress_isal.so' -> 'dpdk/pmds-23.0/librte_compress_isal.so' 00:03:27.285 './librte_compress_isal.so.23' -> 'dpdk/pmds-23.0/librte_compress_isal.so.23' 00:03:27.285 './librte_compress_isal.so.23.0' -> 'dpdk/pmds-23.0/librte_compress_isal.so.23.0' 00:03:27.285 './librte_compress_mlx5.so' -> 'dpdk/pmds-23.0/librte_compress_mlx5.so' 00:03:27.285 './librte_compress_mlx5.so.23' -> 'dpdk/pmds-23.0/librte_compress_mlx5.so.23' 00:03:27.285 './librte_compress_mlx5.so.23.0' -> 'dpdk/pmds-23.0/librte_compress_mlx5.so.23.0' 00:03:27.285 './librte_crypto_ipsec_mb.so' -> 'dpdk/pmds-23.0/librte_crypto_ipsec_mb.so' 00:03:27.285 './librte_crypto_ipsec_mb.so.23' -> 'dpdk/pmds-23.0/librte_crypto_ipsec_mb.so.23' 00:03:27.285 './librte_crypto_ipsec_mb.so.23.0' -> 'dpdk/pmds-23.0/librte_crypto_ipsec_mb.so.23.0' 00:03:27.285 './librte_crypto_mlx5.so' -> 'dpdk/pmds-23.0/librte_crypto_mlx5.so' 00:03:27.285 './librte_crypto_mlx5.so.23' -> 'dpdk/pmds-23.0/librte_crypto_mlx5.so.23' 00:03:27.285 './librte_crypto_mlx5.so.23.0' -> 'dpdk/pmds-23.0/librte_crypto_mlx5.so.23.0' 00:03:27.285 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:27.285 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:27.285 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:27.285 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:27.285 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:27.285 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:27.285 Installing symlink pointing to librte_crypto_ipsec_mb.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_crypto_ipsec_mb.so.23 00:03:27.285 Installing symlink pointing to librte_crypto_ipsec_mb.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_crypto_ipsec_mb.so 00:03:27.285 Installing symlink pointing to librte_crypto_mlx5.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_crypto_mlx5.so.23 00:03:27.285 Installing symlink pointing to librte_crypto_mlx5.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_crypto_mlx5.so 00:03:27.285 Installing symlink pointing to librte_compress_isal.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_compress_isal.so.23 00:03:27.285 Installing symlink pointing to librte_compress_isal.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_compress_isal.so 00:03:27.285 Installing symlink pointing to librte_compress_mlx5.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_compress_mlx5.so.23 00:03:27.285 Installing symlink pointing to librte_compress_mlx5.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_compress_mlx5.so 00:03:27.285 Running 
custom install script '/bin/sh /var/jenkins/workspace/crypto-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:27.545 02:09:17 build_native_dpdk -- common/autobuild_common.sh@207 -- $ cat 00:03:27.545 02:09:17 build_native_dpdk -- common/autobuild_common.sh@212 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk 00:03:27.545 00:03:27.545 real 2m54.774s 00:03:27.545 user 20m16.180s 00:03:27.545 sys 3m0.398s 00:03:27.545 02:09:17 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:27.545 02:09:17 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:27.545 ************************************ 00:03:27.545 END TEST build_native_dpdk 00:03:27.545 ************************************ 00:03:27.545 02:09:17 -- common/autotest_common.sh@1142 -- $ return 0 00:03:27.545 02:09:17 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:27.545 02:09:17 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:27.545 02:09:17 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:27.546 02:09:17 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:27.546 02:09:17 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:27.546 02:09:17 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:27.546 02:09:17 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:27.546 02:09:17 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build --with-shared 00:03:27.805 Using /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:03:27.805 DPDK libraries: /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:27.805 DPDK includes: //var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:27.805 Using default SPDK env in /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:03:28.372 Using 'verbs' RDMA provider 00:03:45.183 Configuring ISA-L (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal.log)...done. 00:04:00.068 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:04:00.068 Creating mk/config.mk...done. 00:04:00.068 Creating mk/cc.flags.mk...done. 00:04:00.068 Type 'make' to build. 00:04:00.068 02:09:50 -- spdk/autobuild.sh@69 -- $ run_test make make -j72 00:04:00.068 02:09:50 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:04:00.068 02:09:50 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:00.068 02:09:50 -- common/autotest_common.sh@10 -- $ set +x 00:04:00.068 ************************************ 00:04:00.068 START TEST make 00:04:00.068 ************************************ 00:04:00.068 02:09:50 make -- common/autotest_common.sh@1123 -- $ make -j72 00:04:00.636 make[1]: Nothing to be done for 'all'. 
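The './librte_*.so' -> 'dpdk/pmds-23.0/...' entries earlier in this phase are symlink output from DPDK's symlink-drivers-solibs.sh install hook (the "Running custom install script" record above): driver shared objects are installed under lib/dpdk/pmds-23.0/ and plain-named symlinks are left in lib/ so the runtime linker can still resolve them.

The configure record above then binds SPDK to that freshly built DPDK tree. As a sketch for reproducing this step outside CI, using a minimal subset of the logged flags — every flag and path below is copied from the log and assumes the same workspace layout, so adjust paths and trim or extend the flag set for your own machine:

    cd /var/jenkins/workspace/crypto-phy-autotest/spdk
    # Point SPDK at the external DPDK build directory produced in the
    # build_native_dpdk phase, with the crypto/compress options this job tests.
    ./configure \
        --with-dpdk=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build \
        --with-shared --enable-debug --enable-werror \
        --with-crypto --with-vbdev-compress --with-dpdk-compressdev
    # -j72 matches the run_test invocation recorded above.
    make -j72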
00:04:18.751 CC lib/ut_mock/mock.o 00:04:18.751 CC lib/log/log.o 00:04:18.751 CC lib/log/log_flags.o 00:04:18.751 CC lib/log/log_deprecated.o 00:04:18.751 CC lib/ut/ut.o 00:04:18.751 LIB libspdk_ut_mock.a 00:04:18.751 SO libspdk_ut_mock.so.6.0 00:04:18.751 LIB libspdk_ut.a 00:04:18.751 LIB libspdk_log.a 00:04:18.751 SO libspdk_ut.so.2.0 00:04:18.751 SO libspdk_log.so.7.0 00:04:18.751 SYMLINK libspdk_ut_mock.so 00:04:18.751 SYMLINK libspdk_log.so 00:04:18.751 SYMLINK libspdk_ut.so 00:04:18.751 CC lib/dma/dma.o 00:04:18.751 CXX lib/trace_parser/trace.o 00:04:18.751 CC lib/ioat/ioat.o 00:04:18.751 CC lib/util/base64.o 00:04:18.751 CC lib/util/bit_array.o 00:04:18.751 CC lib/util/cpuset.o 00:04:18.751 CC lib/util/crc16.o 00:04:18.751 CC lib/util/crc32.o 00:04:18.751 CC lib/util/crc32c.o 00:04:18.751 CC lib/util/crc32_ieee.o 00:04:18.751 CC lib/util/crc64.o 00:04:18.751 CC lib/util/fd.o 00:04:18.751 CC lib/util/dif.o 00:04:18.751 CC lib/util/file.o 00:04:18.751 CC lib/util/iov.o 00:04:18.751 CC lib/util/hexlify.o 00:04:18.751 CC lib/util/math.o 00:04:18.751 CC lib/util/pipe.o 00:04:18.751 CC lib/util/strerror_tls.o 00:04:18.751 CC lib/util/string.o 00:04:18.751 CC lib/util/uuid.o 00:04:18.751 CC lib/util/fd_group.o 00:04:18.751 CC lib/util/xor.o 00:04:18.751 CC lib/util/zipf.o 00:04:18.751 CC lib/vfio_user/host/vfio_user_pci.o 00:04:18.751 CC lib/vfio_user/host/vfio_user.o 00:04:18.751 LIB libspdk_dma.a 00:04:18.752 SO libspdk_dma.so.4.0 00:04:18.752 LIB libspdk_ioat.a 00:04:18.752 SYMLINK libspdk_dma.so 00:04:18.752 SO libspdk_ioat.so.7.0 00:04:18.752 SYMLINK libspdk_ioat.so 00:04:18.752 LIB libspdk_vfio_user.a 00:04:18.752 SO libspdk_vfio_user.so.5.0 00:04:18.752 LIB libspdk_util.a 00:04:18.752 SYMLINK libspdk_vfio_user.so 00:04:18.752 SO libspdk_util.so.9.1 00:04:19.010 LIB libspdk_trace_parser.a 00:04:19.010 SO libspdk_trace_parser.so.5.0 00:04:19.010 SYMLINK libspdk_util.so 00:04:19.010 SYMLINK libspdk_trace_parser.so 00:04:19.270 CC lib/vmd/led.o 00:04:19.270 CC lib/vmd/vmd.o 00:04:19.270 CC lib/json/json_parse.o 00:04:19.270 CC lib/json/json_util.o 00:04:19.270 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:19.270 CC lib/rdma_provider/common.o 00:04:19.270 CC lib/idxd/idxd.o 00:04:19.270 CC lib/json/json_write.o 00:04:19.270 CC lib/reduce/reduce.o 00:04:19.270 CC lib/idxd/idxd_user.o 00:04:19.270 CC lib/idxd/idxd_kernel.o 00:04:19.270 CC lib/rdma_utils/rdma_utils.o 00:04:19.270 CC lib/conf/conf.o 00:04:19.270 CC lib/env_dpdk/env.o 00:04:19.270 CC lib/env_dpdk/memory.o 00:04:19.270 CC lib/env_dpdk/pci.o 00:04:19.270 CC lib/env_dpdk/init.o 00:04:19.270 CC lib/env_dpdk/threads.o 00:04:19.270 CC lib/env_dpdk/pci_ioat.o 00:04:19.270 CC lib/env_dpdk/pci_vmd.o 00:04:19.270 CC lib/env_dpdk/pci_virtio.o 00:04:19.270 CC lib/env_dpdk/pci_event.o 00:04:19.270 CC lib/env_dpdk/pci_idxd.o 00:04:19.270 CC lib/env_dpdk/sigbus_handler.o 00:04:19.270 CC lib/env_dpdk/pci_dpdk.o 00:04:19.270 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:19.270 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:19.528 LIB libspdk_rdma_provider.a 00:04:19.528 LIB libspdk_rdma_utils.a 00:04:19.528 SO libspdk_rdma_provider.so.6.0 00:04:19.528 LIB libspdk_conf.a 00:04:19.786 SO libspdk_rdma_utils.so.1.0 00:04:19.786 SO libspdk_conf.so.6.0 00:04:19.786 LIB libspdk_json.a 00:04:19.786 SYMLINK libspdk_rdma_provider.so 00:04:19.786 SYMLINK libspdk_rdma_utils.so 00:04:19.786 SO libspdk_json.so.6.0 00:04:19.786 SYMLINK libspdk_conf.so 00:04:19.786 SYMLINK libspdk_json.so 00:04:20.044 LIB libspdk_idxd.a 00:04:20.044 SO libspdk_idxd.so.12.0 00:04:20.044 
LIB libspdk_vmd.a 00:04:20.044 SYMLINK libspdk_idxd.so 00:04:20.044 SO libspdk_vmd.so.6.0 00:04:20.303 SYMLINK libspdk_vmd.so 00:04:20.303 CC lib/jsonrpc/jsonrpc_server.o 00:04:20.303 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:20.303 CC lib/jsonrpc/jsonrpc_client.o 00:04:20.303 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:20.560 LIB libspdk_jsonrpc.a 00:04:20.818 SO libspdk_jsonrpc.so.6.0 00:04:20.818 LIB libspdk_reduce.a 00:04:20.818 SYMLINK libspdk_jsonrpc.so 00:04:20.818 LIB libspdk_env_dpdk.a 00:04:20.818 SO libspdk_reduce.so.6.0 00:04:20.818 SO libspdk_env_dpdk.so.14.1 00:04:21.076 SYMLINK libspdk_reduce.so 00:04:21.076 SYMLINK libspdk_env_dpdk.so 00:04:21.076 CC lib/rpc/rpc.o 00:04:21.334 LIB libspdk_rpc.a 00:04:21.592 SO libspdk_rpc.so.6.0 00:04:21.592 SYMLINK libspdk_rpc.so 00:04:21.851 CC lib/keyring/keyring.o 00:04:21.851 CC lib/trace/trace.o 00:04:21.851 CC lib/keyring/keyring_rpc.o 00:04:21.851 CC lib/trace/trace_flags.o 00:04:21.851 CC lib/trace/trace_rpc.o 00:04:21.851 CC lib/notify/notify.o 00:04:21.851 CC lib/notify/notify_rpc.o 00:04:22.110 LIB libspdk_keyring.a 00:04:22.110 LIB libspdk_trace.a 00:04:22.110 SO libspdk_keyring.so.1.0 00:04:22.368 SO libspdk_trace.so.10.0 00:04:22.368 LIB libspdk_notify.a 00:04:22.368 SYMLINK libspdk_keyring.so 00:04:22.368 SO libspdk_notify.so.6.0 00:04:22.368 SYMLINK libspdk_trace.so 00:04:22.368 SYMLINK libspdk_notify.so 00:04:22.627 CC lib/sock/sock.o 00:04:22.627 CC lib/sock/sock_rpc.o 00:04:22.627 CC lib/thread/thread.o 00:04:22.627 CC lib/thread/iobuf.o 00:04:23.193 LIB libspdk_sock.a 00:04:23.193 SO libspdk_sock.so.10.0 00:04:23.193 SYMLINK libspdk_sock.so 00:04:23.758 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:23.758 CC lib/nvme/nvme_ctrlr.o 00:04:23.758 CC lib/nvme/nvme_ns_cmd.o 00:04:23.758 CC lib/nvme/nvme_fabric.o 00:04:23.758 CC lib/nvme/nvme_ns.o 00:04:23.758 CC lib/nvme/nvme_pcie_common.o 00:04:23.758 CC lib/nvme/nvme_qpair.o 00:04:23.758 CC lib/nvme/nvme_pcie.o 00:04:23.758 CC lib/nvme/nvme.o 00:04:23.759 CC lib/nvme/nvme_quirks.o 00:04:23.759 CC lib/nvme/nvme_transport.o 00:04:23.759 CC lib/nvme/nvme_discovery.o 00:04:23.759 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:23.759 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:23.759 CC lib/nvme/nvme_tcp.o 00:04:23.759 CC lib/nvme/nvme_opal.o 00:04:23.759 CC lib/nvme/nvme_io_msg.o 00:04:23.759 CC lib/nvme/nvme_poll_group.o 00:04:23.759 CC lib/nvme/nvme_zns.o 00:04:23.759 CC lib/nvme/nvme_stubs.o 00:04:23.759 CC lib/nvme/nvme_cuse.o 00:04:23.759 CC lib/nvme/nvme_auth.o 00:04:23.759 CC lib/nvme/nvme_rdma.o 00:04:24.327 LIB libspdk_thread.a 00:04:24.327 SO libspdk_thread.so.10.1 00:04:24.327 SYMLINK libspdk_thread.so 00:04:24.586 CC lib/accel/accel.o 00:04:24.586 CC lib/init/json_config.o 00:04:24.586 CC lib/accel/accel_sw.o 00:04:24.586 CC lib/accel/accel_rpc.o 00:04:24.586 CC lib/init/subsystem.o 00:04:24.586 CC lib/init/rpc.o 00:04:24.586 CC lib/init/subsystem_rpc.o 00:04:24.844 CC lib/virtio/virtio.o 00:04:24.844 CC lib/virtio/virtio_vhost_user.o 00:04:24.844 CC lib/virtio/virtio_vfio_user.o 00:04:24.844 CC lib/blob/blobstore.o 00:04:24.844 CC lib/virtio/virtio_pci.o 00:04:24.844 CC lib/blob/request.o 00:04:24.844 CC lib/blob/zeroes.o 00:04:24.844 CC lib/blob/blob_bs_dev.o 00:04:24.844 LIB libspdk_init.a 00:04:25.103 SO libspdk_init.so.5.0 00:04:25.103 SYMLINK libspdk_init.so 00:04:25.103 LIB libspdk_virtio.a 00:04:25.103 SO libspdk_virtio.so.7.0 00:04:25.103 SYMLINK libspdk_virtio.so 00:04:25.362 CC lib/event/reactor.o 00:04:25.362 CC lib/event/app.o 00:04:25.363 CC lib/event/log_rpc.o 00:04:25.363 CC 
lib/event/scheduler_static.o 00:04:25.363 CC lib/event/app_rpc.o 00:04:25.621 LIB libspdk_accel.a 00:04:25.880 SO libspdk_accel.so.15.1 00:04:25.880 LIB libspdk_nvme.a 00:04:25.880 LIB libspdk_event.a 00:04:25.880 SYMLINK libspdk_accel.so 00:04:25.880 SO libspdk_event.so.14.0 00:04:25.880 SO libspdk_nvme.so.13.1 00:04:26.139 SYMLINK libspdk_event.so 00:04:26.139 CC lib/bdev/bdev.o 00:04:26.139 CC lib/bdev/bdev_rpc.o 00:04:26.139 CC lib/bdev/bdev_zone.o 00:04:26.139 CC lib/bdev/part.o 00:04:26.139 CC lib/bdev/scsi_nvme.o 00:04:26.398 SYMLINK libspdk_nvme.so 00:04:27.776 LIB libspdk_blob.a 00:04:27.776 SO libspdk_blob.so.11.0 00:04:28.035 SYMLINK libspdk_blob.so 00:04:28.294 CC lib/blobfs/blobfs.o 00:04:28.294 CC lib/blobfs/tree.o 00:04:28.294 CC lib/lvol/lvol.o 00:04:28.862 LIB libspdk_bdev.a 00:04:28.862 SO libspdk_bdev.so.15.1 00:04:29.121 SYMLINK libspdk_bdev.so 00:04:29.121 LIB libspdk_blobfs.a 00:04:29.121 SO libspdk_blobfs.so.10.0 00:04:29.121 LIB libspdk_lvol.a 00:04:29.121 SYMLINK libspdk_blobfs.so 00:04:29.388 SO libspdk_lvol.so.10.0 00:04:29.388 SYMLINK libspdk_lvol.so 00:04:29.388 CC lib/nvmf/ctrlr.o 00:04:29.388 CC lib/ftl/ftl_core.o 00:04:29.388 CC lib/nvmf/ctrlr_bdev.o 00:04:29.388 CC lib/ftl/ftl_init.o 00:04:29.388 CC lib/nvmf/ctrlr_discovery.o 00:04:29.388 CC lib/ftl/ftl_layout.o 00:04:29.388 CC lib/nvmf/nvmf.o 00:04:29.388 CC lib/nvmf/subsystem.o 00:04:29.388 CC lib/ftl/ftl_debug.o 00:04:29.388 CC lib/nvmf/nvmf_rpc.o 00:04:29.388 CC lib/ftl/ftl_io.o 00:04:29.388 CC lib/ftl/ftl_sb.o 00:04:29.388 CC lib/nvmf/transport.o 00:04:29.388 CC lib/ftl/ftl_l2p.o 00:04:29.388 CC lib/nvmf/tcp.o 00:04:29.388 CC lib/ftl/ftl_l2p_flat.o 00:04:29.388 CC lib/nvmf/mdns_server.o 00:04:29.388 CC lib/nvmf/stubs.o 00:04:29.388 CC lib/ftl/ftl_nv_cache.o 00:04:29.388 CC lib/nvmf/rdma.o 00:04:29.388 CC lib/ftl/ftl_band.o 00:04:29.388 CC lib/ftl/ftl_band_ops.o 00:04:29.388 CC lib/ftl/ftl_writer.o 00:04:29.388 CC lib/nvmf/auth.o 00:04:29.388 CC lib/scsi/dev.o 00:04:29.388 CC lib/nbd/nbd.o 00:04:29.388 CC lib/ftl/ftl_rq.o 00:04:29.388 CC lib/nbd/nbd_rpc.o 00:04:29.388 CC lib/ftl/ftl_reloc.o 00:04:29.388 CC lib/ublk/ublk.o 00:04:29.388 CC lib/scsi/lun.o 00:04:29.388 CC lib/ftl/ftl_p2l.o 00:04:29.388 CC lib/scsi/port.o 00:04:29.388 CC lib/ftl/ftl_l2p_cache.o 00:04:29.388 CC lib/ublk/ublk_rpc.o 00:04:29.388 CC lib/ftl/mngt/ftl_mngt.o 00:04:29.388 CC lib/scsi/scsi.o 00:04:29.388 CC lib/scsi/scsi_bdev.o 00:04:29.388 CC lib/scsi/scsi_pr.o 00:04:29.388 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:29.388 CC lib/scsi/scsi_rpc.o 00:04:29.388 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:29.388 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:29.388 CC lib/scsi/task.o 00:04:29.388 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:29.388 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:29.388 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:29.388 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:29.388 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:29.388 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:29.388 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:29.388 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:29.388 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:29.388 CC lib/ftl/utils/ftl_conf.o 00:04:29.388 CC lib/ftl/utils/ftl_md.o 00:04:29.388 CC lib/ftl/utils/ftl_mempool.o 00:04:29.388 CC lib/ftl/utils/ftl_bitmap.o 00:04:29.388 CC lib/ftl/utils/ftl_property.o 00:04:29.388 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:29.388 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:29.388 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:29.388 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:29.388 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:04:29.388 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:29.388 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:29.388 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:29.388 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:29.388 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:29.388 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:29.388 CC lib/ftl/base/ftl_base_dev.o 00:04:29.651 CC lib/ftl/base/ftl_base_bdev.o 00:04:29.910 CC lib/ftl/ftl_trace.o 00:04:30.169 LIB libspdk_nbd.a 00:04:30.169 SO libspdk_nbd.so.7.0 00:04:30.169 SYMLINK libspdk_nbd.so 00:04:30.427 LIB libspdk_scsi.a 00:04:30.427 LIB libspdk_ublk.a 00:04:30.427 SO libspdk_scsi.so.9.0 00:04:30.427 SO libspdk_ublk.so.3.0 00:04:30.427 SYMLINK libspdk_scsi.so 00:04:30.427 SYMLINK libspdk_ublk.so 00:04:30.995 CC lib/iscsi/conn.o 00:04:30.995 CC lib/iscsi/iscsi.o 00:04:30.995 CC lib/iscsi/init_grp.o 00:04:30.995 CC lib/iscsi/md5.o 00:04:30.995 CC lib/iscsi/portal_grp.o 00:04:30.995 CC lib/iscsi/param.o 00:04:30.995 CC lib/vhost/vhost.o 00:04:30.995 CC lib/vhost/vhost_rpc.o 00:04:30.995 CC lib/iscsi/tgt_node.o 00:04:30.995 CC lib/iscsi/iscsi_subsystem.o 00:04:30.995 CC lib/vhost/vhost_scsi.o 00:04:30.995 CC lib/iscsi/task.o 00:04:30.995 CC lib/iscsi/iscsi_rpc.o 00:04:30.995 CC lib/vhost/vhost_blk.o 00:04:30.995 CC lib/vhost/rte_vhost_user.o 00:04:30.995 LIB libspdk_ftl.a 00:04:31.254 SO libspdk_ftl.so.9.0 00:04:31.820 SYMLINK libspdk_ftl.so 00:04:31.820 LIB libspdk_nvmf.a 00:04:31.820 SO libspdk_nvmf.so.18.1 00:04:32.135 LIB libspdk_vhost.a 00:04:32.135 SO libspdk_vhost.so.8.0 00:04:32.135 SYMLINK libspdk_nvmf.so 00:04:32.449 SYMLINK libspdk_vhost.so 00:04:32.449 LIB libspdk_iscsi.a 00:04:32.449 SO libspdk_iscsi.so.8.0 00:04:32.449 SYMLINK libspdk_iscsi.so 00:04:33.017 CC module/env_dpdk/env_dpdk_rpc.o 00:04:33.277 CC module/accel/error/accel_error.o 00:04:33.277 CC module/accel/error/accel_error_rpc.o 00:04:33.277 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:33.277 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:33.277 CC module/accel/ioat/accel_ioat.o 00:04:33.277 CC module/sock/posix/posix.o 00:04:33.277 CC module/accel/ioat/accel_ioat_rpc.o 00:04:33.277 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev.o 00:04:33.277 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev_rpc.o 00:04:33.277 CC module/keyring/linux/keyring_rpc.o 00:04:33.277 CC module/keyring/linux/keyring.o 00:04:33.277 CC module/blob/bdev/blob_bdev.o 00:04:33.277 LIB libspdk_env_dpdk_rpc.a 00:04:33.277 CC module/scheduler/gscheduler/gscheduler.o 00:04:33.277 CC module/keyring/file/keyring.o 00:04:33.277 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev.o 00:04:33.277 CC module/keyring/file/keyring_rpc.o 00:04:33.277 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev_rpc.o 00:04:33.277 CC module/accel/dsa/accel_dsa.o 00:04:33.277 CC module/accel/dsa/accel_dsa_rpc.o 00:04:33.277 CC module/accel/iaa/accel_iaa.o 00:04:33.277 CC module/accel/iaa/accel_iaa_rpc.o 00:04:33.277 SO libspdk_env_dpdk_rpc.so.6.0 00:04:33.536 SYMLINK libspdk_env_dpdk_rpc.so 00:04:33.536 LIB libspdk_scheduler_dpdk_governor.a 00:04:33.536 LIB libspdk_keyring_file.a 00:04:33.536 LIB libspdk_keyring_linux.a 00:04:33.536 LIB libspdk_accel_error.a 00:04:33.536 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:33.536 SO libspdk_keyring_file.so.1.0 00:04:33.536 SO libspdk_keyring_linux.so.1.0 00:04:33.536 LIB libspdk_accel_iaa.a 00:04:33.536 SO libspdk_accel_error.so.2.0 00:04:33.536 LIB libspdk_accel_dsa.a 00:04:33.536 LIB libspdk_scheduler_dynamic.a 00:04:33.536 SYMLINK 
libspdk_scheduler_dpdk_governor.so 00:04:33.536 SO libspdk_accel_iaa.so.3.0 00:04:33.536 SYMLINK libspdk_keyring_file.so 00:04:33.536 SYMLINK libspdk_keyring_linux.so 00:04:33.536 LIB libspdk_blob_bdev.a 00:04:33.536 SO libspdk_scheduler_dynamic.so.4.0 00:04:33.536 SO libspdk_accel_dsa.so.5.0 00:04:33.536 SYMLINK libspdk_accel_error.so 00:04:33.795 SO libspdk_blob_bdev.so.11.0 00:04:33.795 LIB libspdk_scheduler_gscheduler.a 00:04:33.795 SYMLINK libspdk_accel_iaa.so 00:04:33.795 SO libspdk_scheduler_gscheduler.so.4.0 00:04:33.795 SYMLINK libspdk_scheduler_dynamic.so 00:04:33.795 SYMLINK libspdk_accel_dsa.so 00:04:33.795 LIB libspdk_accel_ioat.a 00:04:33.795 SYMLINK libspdk_blob_bdev.so 00:04:33.795 SO libspdk_accel_ioat.so.6.0 00:04:33.795 SYMLINK libspdk_scheduler_gscheduler.so 00:04:33.795 SYMLINK libspdk_accel_ioat.so 00:04:34.054 LIB libspdk_sock_posix.a 00:04:34.054 SO libspdk_sock_posix.so.6.0 00:04:34.313 SYMLINK libspdk_sock_posix.so 00:04:34.313 CC module/bdev/null/bdev_null.o 00:04:34.313 CC module/bdev/null/bdev_null_rpc.o 00:04:34.313 CC module/bdev/delay/vbdev_delay.o 00:04:34.313 CC module/bdev/gpt/gpt.o 00:04:34.313 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:34.313 CC module/bdev/gpt/vbdev_gpt.o 00:04:34.313 CC module/bdev/lvol/vbdev_lvol.o 00:04:34.313 CC module/bdev/ftl/bdev_ftl.o 00:04:34.313 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:34.313 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:34.313 CC module/bdev/split/vbdev_split.o 00:04:34.313 CC module/bdev/split/vbdev_split_rpc.o 00:04:34.313 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:34.313 CC module/bdev/compress/vbdev_compress.o 00:04:34.313 CC module/bdev/nvme/bdev_nvme.o 00:04:34.313 CC module/bdev/error/vbdev_error.o 00:04:34.313 CC module/bdev/nvme/nvme_rpc.o 00:04:34.313 CC module/bdev/compress/vbdev_compress_rpc.o 00:04:34.313 CC module/bdev/nvme/vbdev_opal.o 00:04:34.313 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:34.313 CC module/bdev/nvme/bdev_mdns_client.o 00:04:34.313 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:34.313 CC module/bdev/error/vbdev_error_rpc.o 00:04:34.313 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:34.313 CC module/bdev/malloc/bdev_malloc.o 00:04:34.313 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:34.313 CC module/bdev/passthru/vbdev_passthru.o 00:04:34.313 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:34.313 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:34.313 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:34.313 CC module/bdev/aio/bdev_aio_rpc.o 00:04:34.313 CC module/bdev/aio/bdev_aio.o 00:04:34.313 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:34.313 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:34.313 CC module/blobfs/bdev/blobfs_bdev.o 00:04:34.313 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:34.313 CC module/bdev/crypto/vbdev_crypto.o 00:04:34.313 CC module/bdev/crypto/vbdev_crypto_rpc.o 00:04:34.313 CC module/bdev/iscsi/bdev_iscsi.o 00:04:34.313 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:34.313 CC module/bdev/raid/bdev_raid.o 00:04:34.313 CC module/bdev/raid/bdev_raid_sb.o 00:04:34.313 CC module/bdev/raid/bdev_raid_rpc.o 00:04:34.313 CC module/bdev/raid/raid1.o 00:04:34.313 CC module/bdev/raid/raid0.o 00:04:34.313 CC module/bdev/raid/concat.o 00:04:34.572 LIB libspdk_accel_dpdk_compressdev.a 00:04:34.572 SO libspdk_accel_dpdk_compressdev.so.3.0 00:04:34.572 LIB libspdk_bdev_null.a 00:04:34.572 SO libspdk_bdev_null.so.6.0 00:04:34.572 LIB libspdk_blobfs_bdev.a 00:04:34.572 SYMLINK libspdk_accel_dpdk_compressdev.so 00:04:34.572 SO libspdk_blobfs_bdev.so.6.0 00:04:34.572 
LIB libspdk_bdev_passthru.a 00:04:34.572 LIB libspdk_bdev_ftl.a 00:04:34.830 LIB libspdk_accel_dpdk_cryptodev.a 00:04:34.830 SYMLINK libspdk_bdev_null.so 00:04:34.830 LIB libspdk_bdev_split.a 00:04:34.830 SO libspdk_bdev_passthru.so.6.0 00:04:34.830 LIB libspdk_bdev_compress.a 00:04:34.830 SO libspdk_bdev_ftl.so.6.0 00:04:34.830 SYMLINK libspdk_blobfs_bdev.so 00:04:34.830 SO libspdk_accel_dpdk_cryptodev.so.3.0 00:04:34.830 LIB libspdk_bdev_malloc.a 00:04:34.830 SO libspdk_bdev_split.so.6.0 00:04:34.830 SO libspdk_bdev_compress.so.6.0 00:04:34.830 SO libspdk_bdev_malloc.so.6.0 00:04:34.830 LIB libspdk_bdev_crypto.a 00:04:34.830 SYMLINK libspdk_bdev_passthru.so 00:04:34.830 SYMLINK libspdk_bdev_ftl.so 00:04:34.830 LIB libspdk_bdev_iscsi.a 00:04:34.830 SYMLINK libspdk_bdev_split.so 00:04:34.830 LIB libspdk_bdev_aio.a 00:04:34.830 SYMLINK libspdk_accel_dpdk_cryptodev.so 00:04:34.830 LIB libspdk_bdev_delay.a 00:04:34.830 SO libspdk_bdev_crypto.so.6.0 00:04:34.830 SYMLINK libspdk_bdev_compress.so 00:04:34.830 LIB libspdk_bdev_zone_block.a 00:04:34.830 SO libspdk_bdev_aio.so.6.0 00:04:34.830 SO libspdk_bdev_iscsi.so.6.0 00:04:34.830 SYMLINK libspdk_bdev_malloc.so 00:04:34.830 SO libspdk_bdev_delay.so.6.0 00:04:34.830 SO libspdk_bdev_zone_block.so.6.0 00:04:34.830 LIB libspdk_bdev_gpt.a 00:04:34.830 SYMLINK libspdk_bdev_crypto.so 00:04:34.830 SO libspdk_bdev_gpt.so.6.0 00:04:35.089 SYMLINK libspdk_bdev_iscsi.so 00:04:35.089 SYMLINK libspdk_bdev_aio.so 00:04:35.089 LIB libspdk_bdev_error.a 00:04:35.089 LIB libspdk_bdev_virtio.a 00:04:35.089 SYMLINK libspdk_bdev_zone_block.so 00:04:35.089 SYMLINK libspdk_bdev_delay.so 00:04:35.089 LIB libspdk_bdev_lvol.a 00:04:35.089 SO libspdk_bdev_error.so.6.0 00:04:35.089 SO libspdk_bdev_virtio.so.6.0 00:04:35.089 SYMLINK libspdk_bdev_gpt.so 00:04:35.089 SO libspdk_bdev_lvol.so.6.0 00:04:35.089 SYMLINK libspdk_bdev_error.so 00:04:35.089 SYMLINK libspdk_bdev_virtio.so 00:04:35.089 SYMLINK libspdk_bdev_lvol.so 00:04:35.089 LIB libspdk_bdev_raid.a 00:04:35.089 SO libspdk_bdev_raid.so.6.0 00:04:35.347 SYMLINK libspdk_bdev_raid.so 00:04:36.748 LIB libspdk_bdev_nvme.a 00:04:36.748 SO libspdk_bdev_nvme.so.7.0 00:04:36.748 SYMLINK libspdk_bdev_nvme.so 00:04:37.680 CC module/event/subsystems/iobuf/iobuf.o 00:04:37.680 CC module/event/subsystems/keyring/keyring.o 00:04:37.680 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:37.680 CC module/event/subsystems/sock/sock.o 00:04:37.680 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:37.680 CC module/event/subsystems/vmd/vmd.o 00:04:37.680 CC module/event/subsystems/scheduler/scheduler.o 00:04:37.680 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:37.680 LIB libspdk_event_keyring.a 00:04:37.680 LIB libspdk_event_iobuf.a 00:04:37.680 LIB libspdk_event_scheduler.a 00:04:37.680 LIB libspdk_event_vhost_blk.a 00:04:37.680 LIB libspdk_event_vmd.a 00:04:37.680 LIB libspdk_event_sock.a 00:04:37.680 SO libspdk_event_keyring.so.1.0 00:04:37.680 SO libspdk_event_iobuf.so.3.0 00:04:37.680 SO libspdk_event_scheduler.so.4.0 00:04:37.680 SO libspdk_event_vhost_blk.so.3.0 00:04:37.937 SO libspdk_event_vmd.so.6.0 00:04:37.937 SO libspdk_event_sock.so.5.0 00:04:37.937 SYMLINK libspdk_event_keyring.so 00:04:37.937 SYMLINK libspdk_event_scheduler.so 00:04:37.937 SYMLINK libspdk_event_iobuf.so 00:04:37.937 SYMLINK libspdk_event_vhost_blk.so 00:04:37.937 SYMLINK libspdk_event_sock.so 00:04:37.937 SYMLINK libspdk_event_vmd.so 00:04:38.195 CC module/event/subsystems/accel/accel.o 00:04:38.451 LIB libspdk_event_accel.a 00:04:38.451 SO 
libspdk_event_accel.so.6.0 00:04:38.451 SYMLINK libspdk_event_accel.so 00:04:39.016 CC module/event/subsystems/bdev/bdev.o 00:04:39.016 LIB libspdk_event_bdev.a 00:04:39.016 SO libspdk_event_bdev.so.6.0 00:04:39.274 SYMLINK libspdk_event_bdev.so 00:04:39.532 CC module/event/subsystems/ublk/ublk.o 00:04:39.532 CC module/event/subsystems/scsi/scsi.o 00:04:39.532 CC module/event/subsystems/nbd/nbd.o 00:04:39.532 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:39.532 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:39.796 LIB libspdk_event_nbd.a 00:04:39.796 LIB libspdk_event_ublk.a 00:04:39.796 LIB libspdk_event_scsi.a 00:04:39.796 SO libspdk_event_nbd.so.6.0 00:04:39.796 SO libspdk_event_ublk.so.3.0 00:04:39.796 SO libspdk_event_scsi.so.6.0 00:04:39.796 SYMLINK libspdk_event_nbd.so 00:04:39.796 SYMLINK libspdk_event_ublk.so 00:04:40.054 SYMLINK libspdk_event_scsi.so 00:04:40.054 LIB libspdk_event_nvmf.a 00:04:40.054 SO libspdk_event_nvmf.so.6.0 00:04:40.312 SYMLINK libspdk_event_nvmf.so 00:04:40.312 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:40.312 CC module/event/subsystems/iscsi/iscsi.o 00:04:40.570 LIB libspdk_event_vhost_scsi.a 00:04:40.570 SO libspdk_event_vhost_scsi.so.3.0 00:04:40.571 LIB libspdk_event_iscsi.a 00:04:40.571 SO libspdk_event_iscsi.so.6.0 00:04:40.571 SYMLINK libspdk_event_vhost_scsi.so 00:04:40.571 SYMLINK libspdk_event_iscsi.so 00:04:40.828 SO libspdk.so.6.0 00:04:40.828 SYMLINK libspdk.so 00:04:41.398 CXX app/trace/trace.o 00:04:41.398 CC app/spdk_nvme_identify/identify.o 00:04:41.398 CC app/trace_record/trace_record.o 00:04:41.398 CC app/spdk_lspci/spdk_lspci.o 00:04:41.398 CC app/spdk_nvme_discover/discovery_aer.o 00:04:41.398 CC app/spdk_top/spdk_top.o 00:04:41.398 CC app/spdk_nvme_perf/perf.o 00:04:41.398 TEST_HEADER include/spdk/accel.h 00:04:41.398 TEST_HEADER include/spdk/assert.h 00:04:41.398 TEST_HEADER include/spdk/accel_module.h 00:04:41.398 TEST_HEADER include/spdk/barrier.h 00:04:41.398 TEST_HEADER include/spdk/base64.h 00:04:41.398 TEST_HEADER include/spdk/bdev.h 00:04:41.398 CC test/rpc_client/rpc_client_test.o 00:04:41.398 TEST_HEADER include/spdk/bdev_module.h 00:04:41.398 TEST_HEADER include/spdk/bdev_zone.h 00:04:41.398 TEST_HEADER include/spdk/bit_pool.h 00:04:41.398 TEST_HEADER include/spdk/bit_array.h 00:04:41.398 TEST_HEADER include/spdk/blob_bdev.h 00:04:41.398 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:41.398 TEST_HEADER include/spdk/blobfs.h 00:04:41.398 TEST_HEADER include/spdk/blob.h 00:04:41.398 TEST_HEADER include/spdk/conf.h 00:04:41.398 TEST_HEADER include/spdk/cpuset.h 00:04:41.398 TEST_HEADER include/spdk/config.h 00:04:41.398 TEST_HEADER include/spdk/crc16.h 00:04:41.398 TEST_HEADER include/spdk/crc64.h 00:04:41.398 TEST_HEADER include/spdk/dma.h 00:04:41.398 TEST_HEADER include/spdk/crc32.h 00:04:41.398 TEST_HEADER include/spdk/dif.h 00:04:41.398 TEST_HEADER include/spdk/env_dpdk.h 00:04:41.398 TEST_HEADER include/spdk/event.h 00:04:41.398 TEST_HEADER include/spdk/env.h 00:04:41.398 TEST_HEADER include/spdk/endian.h 00:04:41.398 TEST_HEADER include/spdk/fd_group.h 00:04:41.398 TEST_HEADER include/spdk/file.h 00:04:41.398 TEST_HEADER include/spdk/fd.h 00:04:41.398 TEST_HEADER include/spdk/ftl.h 00:04:41.398 TEST_HEADER include/spdk/hexlify.h 00:04:41.398 TEST_HEADER include/spdk/histogram_data.h 00:04:41.398 TEST_HEADER include/spdk/idxd.h 00:04:41.398 TEST_HEADER include/spdk/idxd_spec.h 00:04:41.398 TEST_HEADER include/spdk/gpt_spec.h 00:04:41.398 TEST_HEADER include/spdk/init.h 00:04:41.398 TEST_HEADER 
include/spdk/ioat.h 00:04:41.398 TEST_HEADER include/spdk/iscsi_spec.h 00:04:41.398 TEST_HEADER include/spdk/jsonrpc.h 00:04:41.398 TEST_HEADER include/spdk/ioat_spec.h 00:04:41.398 TEST_HEADER include/spdk/keyring.h 00:04:41.398 TEST_HEADER include/spdk/keyring_module.h 00:04:41.398 TEST_HEADER include/spdk/json.h 00:04:41.398 CC app/nvmf_tgt/nvmf_main.o 00:04:41.398 TEST_HEADER include/spdk/likely.h 00:04:41.398 TEST_HEADER include/spdk/log.h 00:04:41.398 TEST_HEADER include/spdk/lvol.h 00:04:41.398 TEST_HEADER include/spdk/memory.h 00:04:41.398 TEST_HEADER include/spdk/mmio.h 00:04:41.398 TEST_HEADER include/spdk/nbd.h 00:04:41.398 TEST_HEADER include/spdk/notify.h 00:04:41.398 TEST_HEADER include/spdk/nvme.h 00:04:41.398 TEST_HEADER include/spdk/nvme_intel.h 00:04:41.398 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:41.398 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:41.398 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:41.398 TEST_HEADER include/spdk/nvme_spec.h 00:04:41.398 TEST_HEADER include/spdk/nvme_zns.h 00:04:41.398 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:41.398 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:41.398 TEST_HEADER include/spdk/nvmf.h 00:04:41.398 TEST_HEADER include/spdk/nvmf_spec.h 00:04:41.398 TEST_HEADER include/spdk/nvmf_transport.h 00:04:41.398 TEST_HEADER include/spdk/opal.h 00:04:41.398 TEST_HEADER include/spdk/opal_spec.h 00:04:41.398 TEST_HEADER include/spdk/pci_ids.h 00:04:41.398 TEST_HEADER include/spdk/pipe.h 00:04:41.398 TEST_HEADER include/spdk/reduce.h 00:04:41.398 TEST_HEADER include/spdk/queue.h 00:04:41.398 TEST_HEADER include/spdk/rpc.h 00:04:41.398 TEST_HEADER include/spdk/scheduler.h 00:04:41.398 TEST_HEADER include/spdk/scsi.h 00:04:41.398 TEST_HEADER include/spdk/scsi_spec.h 00:04:41.398 TEST_HEADER include/spdk/sock.h 00:04:41.398 TEST_HEADER include/spdk/stdinc.h 00:04:41.398 CC app/spdk_dd/spdk_dd.o 00:04:41.398 TEST_HEADER include/spdk/string.h 00:04:41.398 TEST_HEADER include/spdk/thread.h 00:04:41.398 TEST_HEADER include/spdk/trace.h 00:04:41.398 TEST_HEADER include/spdk/trace_parser.h 00:04:41.398 TEST_HEADER include/spdk/ublk.h 00:04:41.398 TEST_HEADER include/spdk/util.h 00:04:41.398 TEST_HEADER include/spdk/uuid.h 00:04:41.398 TEST_HEADER include/spdk/version.h 00:04:41.398 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:41.398 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:41.398 TEST_HEADER include/spdk/tree.h 00:04:41.398 TEST_HEADER include/spdk/vhost.h 00:04:41.398 TEST_HEADER include/spdk/vmd.h 00:04:41.398 CC app/iscsi_tgt/iscsi_tgt.o 00:04:41.398 TEST_HEADER include/spdk/zipf.h 00:04:41.398 TEST_HEADER include/spdk/xor.h 00:04:41.398 CXX test/cpp_headers/accel.o 00:04:41.398 CXX test/cpp_headers/accel_module.o 00:04:41.398 CXX test/cpp_headers/assert.o 00:04:41.398 CXX test/cpp_headers/barrier.o 00:04:41.398 CXX test/cpp_headers/base64.o 00:04:41.398 CXX test/cpp_headers/bdev_zone.o 00:04:41.399 CXX test/cpp_headers/bdev_module.o 00:04:41.399 CXX test/cpp_headers/bdev.o 00:04:41.399 CXX test/cpp_headers/bit_array.o 00:04:41.399 CXX test/cpp_headers/blob_bdev.o 00:04:41.399 CXX test/cpp_headers/bit_pool.o 00:04:41.399 CXX test/cpp_headers/blobfs_bdev.o 00:04:41.399 CXX test/cpp_headers/blob.o 00:04:41.399 CXX test/cpp_headers/blobfs.o 00:04:41.399 CXX test/cpp_headers/cpuset.o 00:04:41.399 CXX test/cpp_headers/config.o 00:04:41.399 CXX test/cpp_headers/conf.o 00:04:41.399 CXX test/cpp_headers/crc64.o 00:04:41.399 CXX test/cpp_headers/crc16.o 00:04:41.399 CXX test/cpp_headers/crc32.o 00:04:41.399 CXX test/cpp_headers/dif.o 
00:04:41.399 CXX test/cpp_headers/endian.o 00:04:41.399 CXX test/cpp_headers/dma.o 00:04:41.399 CXX test/cpp_headers/env_dpdk.o 00:04:41.399 CXX test/cpp_headers/env.o 00:04:41.399 CXX test/cpp_headers/event.o 00:04:41.399 CXX test/cpp_headers/fd_group.o 00:04:41.399 CXX test/cpp_headers/fd.o 00:04:41.399 CXX test/cpp_headers/file.o 00:04:41.399 CXX test/cpp_headers/ftl.o 00:04:41.399 CXX test/cpp_headers/gpt_spec.o 00:04:41.399 CXX test/cpp_headers/hexlify.o 00:04:41.399 CXX test/cpp_headers/idxd.o 00:04:41.399 CXX test/cpp_headers/init.o 00:04:41.399 CXX test/cpp_headers/histogram_data.o 00:04:41.399 CXX test/cpp_headers/ioat_spec.o 00:04:41.399 CXX test/cpp_headers/idxd_spec.o 00:04:41.399 CXX test/cpp_headers/ioat.o 00:04:41.399 CXX test/cpp_headers/iscsi_spec.o 00:04:41.399 CXX test/cpp_headers/json.o 00:04:41.399 CXX test/cpp_headers/jsonrpc.o 00:04:41.399 CXX test/cpp_headers/keyring.o 00:04:41.399 CC app/spdk_tgt/spdk_tgt.o 00:04:41.399 CC test/thread/poller_perf/poller_perf.o 00:04:41.399 CXX test/cpp_headers/keyring_module.o 00:04:41.399 CC examples/util/zipf/zipf.o 00:04:41.399 CC test/app/histogram_perf/histogram_perf.o 00:04:41.399 CC test/env/pci/pci_ut.o 00:04:41.399 CC test/app/stub/stub.o 00:04:41.399 CC test/env/vtophys/vtophys.o 00:04:41.399 CC examples/ioat/verify/verify.o 00:04:41.399 CC test/app/jsoncat/jsoncat.o 00:04:41.399 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:41.399 CC examples/ioat/perf/perf.o 00:04:41.399 CC test/env/memory/memory_ut.o 00:04:41.661 CC app/fio/nvme/fio_plugin.o 00:04:41.661 CC test/app/bdev_svc/bdev_svc.o 00:04:41.661 CC test/dma/test_dma/test_dma.o 00:04:41.661 LINK spdk_lspci 00:04:41.661 CC app/fio/bdev/fio_plugin.o 00:04:41.661 LINK spdk_nvme_discover 00:04:41.661 LINK rpc_client_test 00:04:41.926 LINK spdk_trace_record 00:04:41.926 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:41.926 LINK nvmf_tgt 00:04:41.926 CC test/env/mem_callbacks/mem_callbacks.o 00:04:41.926 CXX test/cpp_headers/likely.o 00:04:41.926 LINK zipf 00:04:41.926 CXX test/cpp_headers/log.o 00:04:41.926 LINK vtophys 00:04:41.926 LINK interrupt_tgt 00:04:41.926 LINK histogram_perf 00:04:41.926 CXX test/cpp_headers/lvol.o 00:04:41.926 CXX test/cpp_headers/memory.o 00:04:41.926 LINK poller_perf 00:04:41.926 CXX test/cpp_headers/mmio.o 00:04:41.926 CXX test/cpp_headers/nbd.o 00:04:41.926 CXX test/cpp_headers/notify.o 00:04:41.926 CXX test/cpp_headers/nvme.o 00:04:41.926 CXX test/cpp_headers/nvme_intel.o 00:04:41.926 CXX test/cpp_headers/nvme_ocssd.o 00:04:41.926 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:41.926 CXX test/cpp_headers/nvme_spec.o 00:04:41.926 CXX test/cpp_headers/nvme_zns.o 00:04:41.926 CXX test/cpp_headers/nvmf_cmd.o 00:04:41.926 LINK iscsi_tgt 00:04:41.926 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:41.926 LINK jsoncat 00:04:41.926 CXX test/cpp_headers/nvmf.o 00:04:41.926 LINK spdk_tgt 00:04:41.926 CXX test/cpp_headers/nvmf_spec.o 00:04:41.926 CXX test/cpp_headers/nvmf_transport.o 00:04:41.926 CXX test/cpp_headers/opal.o 00:04:41.926 LINK env_dpdk_post_init 00:04:41.926 CXX test/cpp_headers/opal_spec.o 00:04:41.926 CXX test/cpp_headers/pci_ids.o 00:04:42.194 LINK stub 00:04:42.194 CXX test/cpp_headers/pipe.o 00:04:42.194 CXX test/cpp_headers/queue.o 00:04:42.194 CXX test/cpp_headers/reduce.o 00:04:42.194 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:42.194 CXX test/cpp_headers/rpc.o 00:04:42.194 LINK bdev_svc 00:04:42.194 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:42.194 CXX test/cpp_headers/scheduler.o 00:04:42.194 CXX 
test/cpp_headers/scsi.o 00:04:42.194 CXX test/cpp_headers/scsi_spec.o 00:04:42.194 CXX test/cpp_headers/sock.o 00:04:42.194 LINK verify 00:04:42.194 CXX test/cpp_headers/stdinc.o 00:04:42.194 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:42.194 CXX test/cpp_headers/string.o 00:04:42.194 CXX test/cpp_headers/thread.o 00:04:42.195 CXX test/cpp_headers/trace.o 00:04:42.195 CXX test/cpp_headers/trace_parser.o 00:04:42.195 CXX test/cpp_headers/tree.o 00:04:42.195 CXX test/cpp_headers/ublk.o 00:04:42.195 CXX test/cpp_headers/util.o 00:04:42.195 LINK ioat_perf 00:04:42.195 CXX test/cpp_headers/uuid.o 00:04:42.195 CXX test/cpp_headers/version.o 00:04:42.195 CXX test/cpp_headers/vfio_user_pci.o 00:04:42.195 LINK spdk_trace 00:04:42.195 CXX test/cpp_headers/vfio_user_spec.o 00:04:42.195 CXX test/cpp_headers/vhost.o 00:04:42.195 CXX test/cpp_headers/vmd.o 00:04:42.195 CXX test/cpp_headers/xor.o 00:04:42.457 CXX test/cpp_headers/zipf.o 00:04:42.457 LINK mem_callbacks 00:04:42.457 LINK spdk_dd 00:04:42.457 LINK test_dma 00:04:42.457 LINK pci_ut 00:04:42.715 LINK spdk_nvme_identify 00:04:42.715 LINK nvme_fuzz 00:04:42.715 LINK spdk_bdev 00:04:42.715 CC examples/sock/hello_world/hello_sock.o 00:04:42.715 CC examples/vmd/lsvmd/lsvmd.o 00:04:42.715 CC examples/idxd/perf/perf.o 00:04:42.715 LINK spdk_nvme 00:04:42.715 CC examples/vmd/led/led.o 00:04:42.715 CC test/event/reactor_perf/reactor_perf.o 00:04:42.715 CC test/event/reactor/reactor.o 00:04:42.715 CC test/event/event_perf/event_perf.o 00:04:42.716 CC test/event/app_repeat/app_repeat.o 00:04:42.716 CC test/event/scheduler/scheduler.o 00:04:42.716 CC examples/thread/thread/thread_ex.o 00:04:42.716 CC app/vhost/vhost.o 00:04:42.976 LINK vhost_fuzz 00:04:42.976 LINK lsvmd 00:04:42.976 LINK spdk_top 00:04:42.976 LINK reactor_perf 00:04:42.976 LINK led 00:04:42.976 LINK memory_ut 00:04:42.976 LINK event_perf 00:04:42.976 LINK spdk_nvme_perf 00:04:42.976 LINK hello_sock 00:04:42.976 LINK app_repeat 00:04:42.976 LINK scheduler 00:04:42.976 CC test/nvme/reset/reset.o 00:04:42.976 LINK vhost 00:04:42.976 CC test/nvme/e2edp/nvme_dp.o 00:04:42.976 CC test/nvme/startup/startup.o 00:04:42.976 CC test/nvme/sgl/sgl.o 00:04:42.976 CC test/nvme/err_injection/err_injection.o 00:04:42.976 CC test/nvme/reserve/reserve.o 00:04:42.976 LINK reactor 00:04:42.976 CC test/nvme/fdp/fdp.o 00:04:42.976 CC test/nvme/connect_stress/connect_stress.o 00:04:42.976 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:42.976 CC test/nvme/boot_partition/boot_partition.o 00:04:42.976 CC test/nvme/aer/aer.o 00:04:42.976 LINK idxd_perf 00:04:42.976 CC test/nvme/fused_ordering/fused_ordering.o 00:04:42.976 CC test/nvme/cuse/cuse.o 00:04:42.976 CC test/nvme/overhead/overhead.o 00:04:42.976 CC test/nvme/compliance/nvme_compliance.o 00:04:43.235 LINK thread 00:04:43.235 CC test/nvme/simple_copy/simple_copy.o 00:04:43.235 CC test/blobfs/mkfs/mkfs.o 00:04:43.235 CC test/accel/dif/dif.o 00:04:43.235 CC test/lvol/esnap/esnap.o 00:04:43.235 LINK startup 00:04:43.235 LINK boot_partition 00:04:43.235 LINK connect_stress 00:04:43.235 LINK doorbell_aers 00:04:43.235 LINK err_injection 00:04:43.235 LINK reserve 00:04:43.494 LINK nvme_dp 00:04:43.494 LINK sgl 00:04:43.494 LINK fdp 00:04:43.494 LINK fused_ordering 00:04:43.494 LINK simple_copy 00:04:43.494 LINK aer 00:04:43.494 LINK mkfs 00:04:43.494 LINK reset 00:04:43.494 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:43.494 CC examples/nvme/hotplug/hotplug.o 00:04:43.494 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:43.494 CC 
examples/nvme/reconnect/reconnect.o 00:04:43.494 CC examples/nvme/hello_world/hello_world.o 00:04:43.494 CC examples/nvme/abort/abort.o 00:04:43.494 CC examples/nvme/arbitration/arbitration.o 00:04:43.494 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:43.753 LINK dif 00:04:43.753 CC examples/accel/perf/accel_perf.o 00:04:43.753 CC examples/blob/hello_world/hello_blob.o 00:04:43.753 CC examples/blob/cli/blobcli.o 00:04:43.753 LINK pmr_persistence 00:04:43.753 LINK overhead 00:04:43.753 LINK cmb_copy 00:04:43.753 LINK hello_world 00:04:43.753 LINK nvme_compliance 00:04:43.753 LINK abort 00:04:44.013 LINK arbitration 00:04:44.013 LINK reconnect 00:04:44.013 LINK hello_blob 00:04:44.013 LINK nvme_manage 00:04:44.013 LINK hotplug 00:04:44.013 LINK iscsi_fuzz 00:04:44.272 LINK cuse 00:04:44.272 LINK blobcli 00:04:44.272 CC test/bdev/bdevio/bdevio.o 00:04:44.841 LINK accel_perf 00:04:44.841 LINK bdevio 00:04:45.409 CC examples/bdev/hello_world/hello_bdev.o 00:04:45.409 CC examples/bdev/bdevperf/bdevperf.o 00:04:45.668 LINK hello_bdev 00:04:46.237 LINK bdevperf 00:04:46.805 CC examples/nvmf/nvmf/nvmf.o 00:04:47.064 LINK nvmf 00:04:48.443 LINK esnap 00:04:48.703 00:04:48.703 real 0m48.616s 00:04:48.703 user 15m14.926s 00:04:48.703 sys 3m5.772s 00:04:48.703 02:10:39 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:48.703 02:10:39 make -- common/autotest_common.sh@10 -- $ set +x 00:04:48.703 ************************************ 00:04:48.703 END TEST make 00:04:48.703 ************************************ 00:04:48.703 02:10:39 -- common/autotest_common.sh@1142 -- $ return 0 00:04:48.703 02:10:39 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:48.703 02:10:39 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:48.703 02:10:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:48.703 02:10:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:48.703 02:10:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:48.703 02:10:39 -- pm/common@44 -- $ pid=1683332 00:04:48.703 02:10:39 -- pm/common@50 -- $ kill -TERM 1683332 00:04:48.703 02:10:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:48.703 02:10:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:48.703 02:10:39 -- pm/common@44 -- $ pid=1683334 00:04:48.703 02:10:39 -- pm/common@50 -- $ kill -TERM 1683334 00:04:48.703 02:10:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:48.703 02:10:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:48.703 02:10:39 -- pm/common@44 -- $ pid=1683336 00:04:48.703 02:10:39 -- pm/common@50 -- $ kill -TERM 1683336 00:04:48.703 02:10:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:48.703 02:10:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:48.703 02:10:39 -- pm/common@44 -- $ pid=1683358 00:04:48.703 02:10:39 -- pm/common@50 -- $ sudo -E kill -TERM 1683358 00:04:48.963 02:10:39 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:04:48.963 02:10:39 -- nvmf/common.sh@7 -- # uname -s 00:04:48.963 02:10:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:48.963 02:10:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:48.963 02:10:39 -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:04:48.963 02:10:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:48.963 02:10:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:48.963 02:10:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:48.963 02:10:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:48.963 02:10:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:48.963 02:10:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:48.963 02:10:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:48.963 02:10:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:04:48.963 02:10:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:04:48.963 02:10:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:48.963 02:10:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:48.963 02:10:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:48.963 02:10:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:48.963 02:10:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:04:48.963 02:10:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:48.963 02:10:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:48.963 02:10:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:48.963 02:10:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.963 02:10:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.963 02:10:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.963 02:10:39 -- paths/export.sh@5 -- # export PATH 00:04:48.963 02:10:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.963 02:10:39 -- nvmf/common.sh@47 -- # : 0 00:04:48.963 02:10:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:48.963 02:10:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:48.963 02:10:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:48.963 02:10:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:48.963 02:10:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:48.963 02:10:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:48.963 02:10:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:48.963 02:10:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:48.963 02:10:39 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:48.963 02:10:39 -- spdk/autotest.sh@32 -- # uname -s 
00:04:48.963 02:10:39 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:48.963 02:10:39 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:48.963 02:10:39 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:04:48.963 02:10:39 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:48.963 02:10:39 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:04:48.963 02:10:39 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:48.963 02:10:39 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:48.963 02:10:39 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:48.963 02:10:39 -- spdk/autotest.sh@48 -- # udevadm_pid=1788534 00:04:48.963 02:10:39 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:48.963 02:10:39 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:48.963 02:10:39 -- pm/common@17 -- # local monitor 00:04:48.963 02:10:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:48.963 02:10:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:48.963 02:10:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:48.963 02:10:39 -- pm/common@21 -- # date +%s 00:04:48.963 02:10:39 -- pm/common@21 -- # date +%s 00:04:48.963 02:10:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:48.963 02:10:39 -- pm/common@25 -- # sleep 1 00:04:48.963 02:10:39 -- pm/common@21 -- # date +%s 00:04:48.963 02:10:39 -- pm/common@21 -- # date +%s 00:04:48.963 02:10:39 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720656639 00:04:48.963 02:10:39 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720656639 00:04:48.963 02:10:39 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720656639 00:04:48.963 02:10:39 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720656639 00:04:48.963 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720656639_collect-vmstat.pm.log 00:04:48.963 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720656639_collect-cpu-load.pm.log 00:04:48.963 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720656639_collect-cpu-temp.pm.log 00:04:48.963 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720656639_collect-bmc-pm.bmc.pm.log 00:04:49.902 02:10:40 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:49.902 02:10:40 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:49.902 02:10:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:49.902 02:10:40 -- common/autotest_common.sh@10 -- # set +x 00:04:49.902 02:10:40 -- spdk/autotest.sh@59 -- # 
create_test_list 00:04:49.902 02:10:40 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:49.902 02:10:40 -- common/autotest_common.sh@10 -- # set +x 00:04:50.162 02:10:40 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/autotest.sh 00:04:50.162 02:10:40 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk 00:04:50.162 02:10:40 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:04:50.162 02:10:40 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:04:50.162 02:10:40 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/crypto-phy-autotest/spdk 00:04:50.162 02:10:40 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:50.162 02:10:40 -- common/autotest_common.sh@1455 -- # uname 00:04:50.162 02:10:40 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:50.162 02:10:40 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:50.162 02:10:40 -- common/autotest_common.sh@1475 -- # uname 00:04:50.162 02:10:40 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:50.162 02:10:40 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:50.162 02:10:40 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:50.162 02:10:40 -- spdk/autotest.sh@72 -- # hash lcov 00:04:50.162 02:10:40 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:50.162 02:10:40 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:50.162 --rc lcov_branch_coverage=1 00:04:50.162 --rc lcov_function_coverage=1 00:04:50.162 --rc genhtml_branch_coverage=1 00:04:50.162 --rc genhtml_function_coverage=1 00:04:50.162 --rc genhtml_legend=1 00:04:50.162 --rc geninfo_all_blocks=1 00:04:50.162 ' 00:04:50.162 02:10:40 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:50.162 --rc lcov_branch_coverage=1 00:04:50.162 --rc lcov_function_coverage=1 00:04:50.162 --rc genhtml_branch_coverage=1 00:04:50.162 --rc genhtml_function_coverage=1 00:04:50.162 --rc genhtml_legend=1 00:04:50.162 --rc geninfo_all_blocks=1 00:04:50.162 ' 00:04:50.162 02:10:40 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:50.162 --rc lcov_branch_coverage=1 00:04:50.162 --rc lcov_function_coverage=1 00:04:50.162 --rc genhtml_branch_coverage=1 00:04:50.162 --rc genhtml_function_coverage=1 00:04:50.162 --rc genhtml_legend=1 00:04:50.162 --rc geninfo_all_blocks=1 00:04:50.163 --no-external' 00:04:50.163 02:10:40 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:50.163 --rc lcov_branch_coverage=1 00:04:50.163 --rc lcov_function_coverage=1 00:04:50.163 --rc genhtml_branch_coverage=1 00:04:50.163 --rc genhtml_function_coverage=1 00:04:50.163 --rc genhtml_legend=1 00:04:50.163 --rc geninfo_all_blocks=1 00:04:50.163 --no-external' 00:04:50.163 02:10:40 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:50.163 lcov: LCOV version 1.14 00:04:50.163 02:10:40 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/crypto-phy-autotest/spdk -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info 00:05:08.257 /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 
00:05:08.257 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:26.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:26.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:26.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:26.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:26.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:26.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:26.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:26.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:05:26.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:26.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:26.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:26.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:26.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 
00:05:26.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:26.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:26.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:26.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:26.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:26.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:26.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:26.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:26.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:26.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:05:26.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:26.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:26.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:05:26.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:05:26.431 geninfo: WARNING: GCOV did not produce any 
data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno 00:05:26.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:26.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:26.432 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:26.432 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:26.433 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:05:26.433 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:26.433 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:05:26.433 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:26.433 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:05:26.433 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:26.433 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:05:33.014 02:11:22 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:33.014 02:11:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.014 02:11:22 -- common/autotest_common.sh@10 -- # set +x 00:05:33.014 02:11:22 -- spdk/autotest.sh@91 -- # rm -f 00:05:33.014 02:11:22 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:05:36.312 0000:1a:00.0 (8086 0a54): Already using the nvme driver 00:05:36.312 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:05:36.312 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:05:36.312 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:05:36.312 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:05:36.312 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:05:36.571 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:05:36.571 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:05:36.571 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:05:36.571 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:05:36.571 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:05:36.571 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:05:36.571 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:05:36.571 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:05:36.571 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:05:36.830 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:05:36.830 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:05:38.732 02:11:29 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:38.732 02:11:29 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:38.732 02:11:29 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:38.732 02:11:29 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:38.732 02:11:29 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:38.732 02:11:29 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:38.732 02:11:29 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:38.732 02:11:29 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:38.732 02:11:29 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:38.732 02:11:29 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:38.732 02:11:29 -- spdk/autotest.sh@110 -- # for dev in 
/dev/nvme*n!(*p*) 00:05:38.732 02:11:29 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:38.732 02:11:29 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:38.732 02:11:29 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:38.732 02:11:29 -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:38.732 No valid GPT data, bailing 00:05:38.732 02:11:29 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:38.732 02:11:29 -- scripts/common.sh@391 -- # pt= 00:05:38.732 02:11:29 -- scripts/common.sh@392 -- # return 1 00:05:38.732 02:11:29 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:38.732 1+0 records in 00:05:38.732 1+0 records out 00:05:38.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00608977 s, 172 MB/s 00:05:38.732 02:11:29 -- spdk/autotest.sh@118 -- # sync 00:05:38.732 02:11:29 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:38.732 02:11:29 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:38.732 02:11:29 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:44.044 02:11:34 -- spdk/autotest.sh@124 -- # uname -s 00:05:44.044 02:11:34 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:44.044 02:11:34 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh 00:05:44.044 02:11:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.044 02:11:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.044 02:11:34 -- common/autotest_common.sh@10 -- # set +x 00:05:44.044 ************************************ 00:05:44.044 START TEST setup.sh 00:05:44.044 ************************************ 00:05:44.044 02:11:34 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh 00:05:44.044 * Looking for test storage... 00:05:44.044 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:05:44.044 02:11:34 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:44.044 02:11:34 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:44.044 02:11:34 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh 00:05:44.044 02:11:34 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.044 02:11:34 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.044 02:11:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:44.044 ************************************ 00:05:44.044 START TEST acl 00:05:44.044 ************************************ 00:05:44.044 02:11:34 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh 00:05:44.044 * Looking for test storage... 
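Before wiping test devices, the pre_cleanup pass above classifies each NVMe namespace: a device counts as zoned when /sys/block/<dev>/queue/zoned exists and holds anything other than "none". Here it reads "none", so nvme0n1 is treated as a conventional namespace, the GPT probe bails out ("No valid GPT data"), and dd zeroes the first MiB. A minimal standalone sketch of that check, assuming the same sysfs layout:

    #!/usr/bin/env bash
    # Collect zoned NVMe namespaces the way the get_zoned_devs xtrace does:
    # /sys/block/<dev>/queue/zoned reports "none" for conventional drives.
    declare -A zoned_devs
    for nvme in /sys/block/nvme*n*; do
        dev=${nvme##*/}
        if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
            zoned_devs[$dev]=1
        fi
    done
    echo "zoned namespaces found: ${#zoned_devs[@]}"

The acl suite starting here reruns this same zoned-device check before it begins probing controllers.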
00:05:44.044 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:05:44.044 02:11:34 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:44.044 02:11:34 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:44.044 02:11:34 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:44.044 02:11:34 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:44.044 02:11:34 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:44.044 02:11:34 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:44.044 02:11:34 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:44.044 02:11:34 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:44.044 02:11:34 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:44.044 02:11:34 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:44.044 02:11:34 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:44.044 02:11:34 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:44.044 02:11:34 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:44.044 02:11:34 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:44.044 02:11:34 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:44.044 02:11:34 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:05:50.616 02:11:40 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:50.616 02:11:40 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:50.616 02:11:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:50.616 02:11:40 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:50.616 02:11:40 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:50.616 02:11:40 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:05:54.808 Hugepages 00:05:54.808 node hugesize free / total 00:05:54.808 02:11:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:54.808 02:11:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:54.808 02:11:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:54.808 02:11:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:54.808 02:11:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:54.808 02:11:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:54.808 02:11:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:54.808 02:11:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:54.808 02:11:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:54.808 00:05:54.808 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:54.808 02:11:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:54.808 02:11:45 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:54.808 02:11:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:54.808 02:11:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:05:54.808 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:54.808 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:54.808 02:11:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:54.808 02:11:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 
0000:00:04.1 == *:*:*.* ]] 00:05:54.808 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:54.808 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:54.808 02:11:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:54.808 02:11:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:05:54.808 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:54.808 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:54.808 02:11:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:54.808 02:11:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:05:54.808 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:54.808 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:1a:00.0 == *:*:*.* ]] 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\1\a\:\0\0\.\0* ]] 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 
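The collect_setup_devs loop driving the read/continue xtrace above and below parses the setup.sh status table one row at a time: hugepage and header rows (anything that is not a BDF like 0000:1a:00.0) are skipped, ioatdma channels are skipped, and only nvme-bound controllers land in the devs/drivers arrays. A minimal sketch of that parse, with SETUP as an assumed path variable:

    # Rebuild devs/drivers from a "setup.sh status" table; the columns are
    # Type BDF Vendor Device NUMA Driver ..., hence the placeholder reads.
    SETUP=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
    devs=(); declare -A drivers
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue     # skip hugepage/header rows
        [[ $driver == nvme ]] || continue     # ignore ioatdma channels
        devs+=("$dev"); drivers["$dev"]=$driver
    done < <("$SETUP" status)
    echo "nvme controllers: ${devs[*]}"

The denied/allowed tests that follow drive this same scan with PCI_BLOCKED and then PCI_ALLOWED set to 0000:1a:00.0, checking that the one NVMe controller is first skipped and then picked up again.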
00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:54.809 02:11:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:55.068 02:11:45 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:05:55.068 02:11:45 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:55.068 02:11:45 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.068 02:11:45 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.068 02:11:45 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:55.068 ************************************ 00:05:55.068 START TEST denied 00:05:55.068 ************************************ 00:05:55.068 02:11:45 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:05:55.068 02:11:45 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:1a:00.0' 00:05:55.068 02:11:45 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:55.068 02:11:45 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:1a:00.0' 00:05:55.068 02:11:45 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:55.068 02:11:45 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:06:01.640 0000:1a:00.0 (8086 0a54): Skipping denied controller at 0000:1a:00.0 00:06:01.640 02:11:51 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:1a:00.0 00:06:01.640 02:11:51 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:06:01.640 02:11:51 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:06:01.640 02:11:51 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:1a:00.0 ]] 00:06:01.640 02:11:51 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:1a:00.0/driver 00:06:01.640 02:11:51 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:06:01.640 02:11:51 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:06:01.640 02:11:51 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:06:01.640 02:11:51 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:01.640 02:11:51 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:06:09.871 00:06:09.871 real 0m13.722s 00:06:09.871 user 0m4.302s 00:06:09.871 sys 0m8.674s 00:06:09.871 02:11:58 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.871 02:11:58 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:06:09.871 ************************************ 00:06:09.871 END TEST denied 00:06:09.871 ************************************ 00:06:09.871 02:11:59 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:06:09.871 02:11:59 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:06:09.871 02:11:59 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.871 02:11:59 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.871 02:11:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:09.871 ************************************ 00:06:09.871 START TEST allowed 00:06:09.871 ************************************ 00:06:09.871 02:11:59 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:06:09.871 02:11:59 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:1a:00.0 00:06:09.871 02:11:59 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:1a:00.0 .*: nvme -> .*' 00:06:09.871 02:11:59 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:06:09.871 02:11:59 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:06:09.871 02:11:59 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:06:19.852 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:06:19.852 02:12:08 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:06:19.852 02:12:08 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:06:19.852 02:12:08 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:06:19.852 02:12:08 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:19.852 02:12:08 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:06:25.129 00:06:25.129 real 0m15.965s 00:06:25.129 user 0m4.215s 00:06:25.129 sys 0m8.622s 00:06:25.129 02:12:15 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.129 02:12:15 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:06:25.129 ************************************ 00:06:25.129 END TEST allowed 00:06:25.129 ************************************ 00:06:25.129 02:12:15 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:06:25.129 00:06:25.129 real 0m40.756s 00:06:25.129 user 0m12.304s 00:06:25.129 sys 0m24.865s 00:06:25.129 02:12:15 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.129 02:12:15 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:25.129 ************************************ 00:06:25.129 END TEST acl 00:06:25.129 ************************************ 00:06:25.129 02:12:15 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:06:25.129 02:12:15 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh 00:06:25.129 02:12:15 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.129 02:12:15 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.129 02:12:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:25.129 ************************************ 00:06:25.129 START TEST hugepages 00:06:25.129 ************************************ 00:06:25.129 02:12:15 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh 00:06:25.129 * Looking for test storage... 00:06:25.129 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 67077392 kB' 'MemAvailable: 71999876 kB' 'Buffers: 9896 kB' 'Cached: 17432020 kB' 'SwapCached: 0 kB' 'Active: 13289012 kB' 'Inactive: 4702844 kB' 'Active(anon): 12776424 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553248 kB' 'Mapped: 177288 kB' 'Shmem: 12226484 kB' 'KReclaimable: 543020 kB' 'Slab: 964852 kB' 'SReclaimable: 543020 kB' 'SUnreclaim: 421832 kB' 'KernelStack: 16480 kB' 'PageTables: 8644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52438216 kB' 'Committed_AS: 14197660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214200 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB' 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.129 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.130 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.131 02:12:15 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:25.131 
00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:06:25.131 02:12:15 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:06:25.131 02:12:15 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:25.131 02:12:15 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:25.131 02:12:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:25.131 ************************************
00:06:25.131 START TEST default_setup
00:06:25.131 ************************************
00:06:25.131 02:12:15 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:06:25.131 02:12:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:06:25.131 02:12:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:06:25.131 02:12:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:06:25.131 02:12:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:06:25.131 02:12:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:06:25.131 02:12:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:06:25.131 02:12:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:25.131 02:12:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:06:25.131 02:12:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:06:25.131 02:12:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:06:25.131 02:12:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:06:25.131 02:12:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:06:25.131 02:12:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:06:25.131 02:12:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:25.131 02:12:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:25.131 02:12:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:06:25.131 02:12:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:06:25.131 02:12:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:06:25.131 02:12:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:06:25.131 02:12:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:06:25.131 02:12:15 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:06:25.131 02:12:15 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
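The page count traced above is one division: size / Hugepagesize = 2097152 kB / 2048 kB = 1024 pages, requested on node 0, after clear_hp has zeroed every per-node pool. A hedged sketch of both steps, using the sysfs paths shown in the trace (run as root; variable names are illustrative, not the harness's exact code):

    # Sketch: derive the page count the way the trace does, then clear
    # any per-node hugepage reservations before the test allocates its own.
    size_kb=2097152          # requested pool size (kB), from the trace
    default_hugepages=2048   # Hugepagesize from /proc/meminfo (kB)
    nr_hugepages=$(( size_kb / default_hugepages ))   # 1024

    for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"   # the echo 0 / CLEAR_HUGE=yes records
      done
    done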
00:06:29.330 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:06:29.330 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:06:29.330 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:06:29.330 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:06:29.330 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:06:29.330 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:06:29.330 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:06:29.330 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:06:29.330 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:06:29.330 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:06:29.330 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:06:29.330 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:06:29.330 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:06:29.330 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:06:29.330 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:06:29.330 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:06:32.620 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci
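Each "ioatdma -> vfio-pci" line is scripts/setup.sh detaching a device from its kernel driver and handing it to vfio-pci for userspace access. The generic sysfs mechanism looks roughly like this sketch; setup.sh's exact sequence may differ, and the BDF below is the NVMe device from the log:

    bdf=0000:1a:00.0   # NVMe device rebound in the log

    # Detach from the current kernel driver, if any.
    if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
      echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
    fi

    # Steer the next probe to vfio-pci, then trigger it.
    echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe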
00:06:34.527 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:06:34.527 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:06:34.527 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:06:34.527 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:06:34.527 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:06:34.527 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:06:34.527 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:06:34.527 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:34.527 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:34.527 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:34.527 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:06:34.527 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:06:34.527 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:06:34.527 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:34.527 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:34.527 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:34.527 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:06:34.527 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:34.528 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69206740 kB' 'MemAvailable: 74129120 kB' 'Buffers: 9896 kB' 'Cached: 17432180 kB' 'SwapCached: 0 kB' 'Active: 13307680 kB' 'Inactive: 4702844 kB' 'Active(anon): 12795092 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571708 kB' 'Mapped: 177272 kB' 'Shmem: 12226644 kB' 'KReclaimable: 542916 kB' 'Slab: 963080 kB' 'SReclaimable: 542916 kB' 'SUnreclaim: 420164 kB' 'KernelStack: 16416 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14218468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214376 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB'
00:06:34.529 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:34.529 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:06:34.529 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:06:34.529 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
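The "local node=" and "[[ -e /sys/devices/system/node/node/meminfo ]]" records show why this lookup reads the system-wide file: with an empty node id the per-node path does not exist, so mem_f stays /proc/meminfo. With a node id, get_meminfo would read the per-NUMA view instead, roughly as in this sketch (illustrative, not the harness's exact code):

    node=$1   # empty -> system-wide view; 0, 1, ... -> one NUMA node
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node lines carry a "Node N " prefix; the harness strips it with
    # the extglob expansion mem=("${mem[@]#Node +([0-9]) }") seen above.
    cat "$mem_f"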
00:06:34.529 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:34.529 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:34.529 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:06:34.529 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:06:34.529 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:06:34.529 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:34.529 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:34.529 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:34.529 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:06:34.529 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:34.529 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69207816 kB' 'MemAvailable: 74130196 kB' 'Buffers: 9896 kB' 'Cached: 17432184 kB' 'SwapCached: 0 kB' 'Active: 13307608 kB' 'Inactive: 4702844 kB' 'Active(anon): 12795020 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571720 kB' 'Mapped: 177272 kB' 'Shmem: 12226648 kB' 'KReclaimable: 542916 kB' 'Slab: 963064 kB' 'SReclaimable: 542916 kB' 'SUnreclaim: 420148 kB' 'KernelStack: 16448 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14216184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214296 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB'
00:06:34.531 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:34.531 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:06:34.531 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:06:34.531 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
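The dumps carry the verification inputs: HugePages_Total is the pool size, HugePages_Free the untouched pages, HugePages_Rsvd pages promised to mappings but not yet faulted in, and HugePages_Surp pages allocated beyond the configured pool. An illustrative consistency check over those fields, reusing the lookup sketched earlier; the expected values are the ones in the dumps above:

    total=$(get_meminfo_field HugePages_Total)   # 1024 in the dumps
    free=$(get_meminfo_field HugePages_Free)     # 1024
    rsvd=$(get_meminfo_field HugePages_Rsvd)     # 0
    surp=$(get_meminfo_field HugePages_Surp)     # 0

    # Before the test maps anything, the whole requested pool should be
    # present, free, and neither reserved nor surplus.
    if (( total == 1024 && free == total && rsvd == 0 && surp == 0 )); then
      echo "hugepage pool matches the requested 1024 x 2048 kB"
    fi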
00:06:34.531 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:34.531 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:34.531 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:06:34.531 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:06:34.531 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:06:34.531 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:34.531 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:34.531 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:34.531 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:06:34.531 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:34.531 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69208324 kB' 'MemAvailable: 74130704 kB' 'Buffers: 9896 kB' 'Cached: 17432184 kB' 'SwapCached: 0 kB' 'Active: 13307232 kB' 'Inactive: 4702844 kB' 'Active(anon): 12794644 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571304 kB' 'Mapped: 177272 kB' 'Shmem: 12226648 kB' 'KReclaimable: 542916 kB' 'Slab: 963064 kB' 'SReclaimable: 542916 kB' 'SUnreclaim: 420148 kB' 'KernelStack: 16464 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14216204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214296 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB'
00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read
-r var val _ 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:34.532 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:34.533 nr_hugepages=1024 00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:34.533 resv_hugepages=0 00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:34.533 surplus_hugepages=0 00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:34.533 anon_hugepages=0 00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- 
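[editor's note: the elided loops above are easier to read as code. The sketch below is a reconstruction of what setup/common.sh's get_meminfo appears to do, based only on this trace; the function body, the extglob prefix strip, and the for-loop form are assumptions, not the verbatim SPDK source.]

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below (assumption)

    # Echo the value of one /proc/meminfo (or per-node meminfo) field.
    get_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        # With a node argument, read the per-node stats from sysfs instead;
        # every line there carries a "Node N " prefix that must be stripped.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue   # the long 'continue' trains in the trace
            echo "$val"   # e.g. 1024 for HugePages_Total, 0 for HugePages_Rsvd
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Total     # whole system
    get_meminfo HugePages_Surp 0    # NUMA node 0 only

[editor's note: the caller at setup/hugepages.sh@107-110 above then checks the identity HugePages_Total == nr_hugepages + surplus + reserved before moving on to the per-node counts.]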
00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69209280 kB' 'MemAvailable: 74131660 kB' 'Buffers: 9896 kB' 'Cached: 17432240 kB' 'SwapCached: 0 kB' 'Active: 13307836 kB' 'Inactive: 4702844 kB' 'Active(anon): 12795248 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571868 kB' 'Mapped: 177272 kB' 'Shmem: 12226704 kB' 'KReclaimable: 542916 kB' 'Slab: 963064 kB' 'SReclaimable: 542916 kB' 'SUnreclaim: 420148 kB' 'KernelStack: 16480 kB' 'PageTables: 8888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14254768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214280 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB'
00:06:34.533 02:12:24 setup.sh.hugepages.default_setup -- [xtrace elided: the same setup/common.sh@31-32 field scan, this time continuing past every field that is not HugePages_Total]
00:06:34.534 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:34.534 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:06:34.534 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:06:34.534 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:34.534 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:06:34.534 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:06:34.534 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:34.534 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:06:34.534 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:34.534 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:06:34.534 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:06:34.534 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:34.534 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:34.534 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:34.534 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:34.534 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:34.534 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:06:34.534 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:34.534 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:34.534 02:12:24 setup.sh.hugepages.default_setup -- [xtrace elided: rest of the setup/common.sh@19-31 prologue: mapfile -t mem, the leading 'Node 0 ' prefixes stripped, then the IFS=': ' read loop starts over the node0 dump below]
00:06:34.535 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069912 kB' 'MemFree: 32482216 kB' 'MemUsed: 15587696 kB' 'SwapCached: 0 kB' 'Active: 8312428 kB' 'Inactive: 3441204 kB' 'Active(anon): 7947236 kB' 'Inactive(anon): 0 kB' 'Active(file): 365192 kB' 'Inactive(file): 3441204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11420456 kB' 'Mapped: 124652 kB' 'AnonPages: 336352 kB' 'Shmem: 7614060 kB' 'KernelStack: 9688 kB' 'PageTables: 5608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 180068 kB' 'Slab: 404520 kB' 'SReclaimable: 180068 kB' 'SUnreclaim: 224452 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:06:34.535 02:12:24 setup.sh.hugepages.default_setup -- [xtrace elided: setup/common.sh@31-32 field scan over the node0 dump above, continuing past every field that is not HugePages_Surp]
00:06:34.536 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:34.536 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:06:34.536 02:12:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:06:34.536 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:34.536 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:34.536 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:34.536 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:34.536 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:06:34.536 node0=1024 expecting 1024
00:06:34.536 02:12:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:06:34.536
00:06:34.536 real	0m9.544s
00:06:34.536 user	0m2.231s
00:06:34.536 sys	0m4.290s
00:06:34.536 02:12:24 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:34.536 02:12:24 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:06:34.536 ************************************
00:06:34.536 END TEST default_setup
00:06:34.536 ************************************
00:06:34.795 02:12:24 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:06:34.795 02:12:24 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:06:34.795 02:12:24 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:34.795 02:12:24 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:34.795 02:12:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:34.795 ************************************
00:06:34.795 START TEST per_node_1G_alloc
00:06:34.795 ************************************
00:06:34.795 02:12:25 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:06:34.795 02:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:06:34.795 02:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:06:34.795 02:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:06:34.795 02:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
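[editor's note: the get_test_nr_hugepages 1048576 0 1 call above resolves, in the trace that follows, to nr_hugepages=512 with nodes_test[0]=512 and nodes_test[1]=512. Below is a minimal sketch of that arithmetic, assuming the size argument is in kB and using the 'Hugepagesize: 2048 kB' visible in the meminfo dumps; the variable names are illustrative, not the SPDK source.]

    #!/usr/bin/env bash
    size_kb=1048576            # requested size: 1 GiB, as passed at hugepages.sh@145
    hugepagesize_kb=2048       # from 'Hugepagesize: 2048 kB' in the dumps above
    nr_hugepages=$((size_kb / hugepagesize_kb))
    echo "nr_hugepages=$nr_hugepages"          # 512, matching hugepages.sh@57

    # Each node named in the call (0 and 1) gets the full per-node count,
    # mirroring nodes_test[_no_nodes]=512 at hugepages.sh@71 below.
    nodes_test=()
    for node in 0 1; do
        nodes_test[node]=$nr_hugepages
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"

[editor's note: setup.sh is then invoked with NRHUGE=512 HUGENODE=0,1, and once it has run, hugepages.sh@147 below reads back nr_hugepages=1024, i.e. 512 pages on each of the two nodes.]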
00:06:34.795 02:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:06:34.795 02:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:06:34.795 02:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:06:34.795 02:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:34.795 02:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:06:34.795 02:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:06:34.795 02:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:06:34.795 02:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:06:34.795 02:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:06:34.795 02:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:06:34.795 02:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:34.795 02:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:34.795 02:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:06:34.795 02:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:06:34.795 02:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:06:34.795 02:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:06:34.795 02:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:06:34.795 02:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:06:34.795 02:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:06:34.795 02:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:06:34.795 02:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:06:34.795 02:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:34.795 02:12:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:06:38.983 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:06:38.984 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:06:38.984 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:06:38.984 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:06:38.984 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:06:38.984 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:06:38.984 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:06:38.984 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:06:38.984 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:06:38.984 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:06:38.984 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:06:38.984 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:06:38.984 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:06:38.984 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:06:38.984 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:38.984 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:38.984 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69230956 kB' 'MemAvailable: 74153336 kB' 'Buffers: 9896 kB' 'Cached: 17432352 kB' 'SwapCached: 0 kB' 'Active: 13308940 kB' 'Inactive: 4702844 kB' 'Active(anon): 12796352 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572560 kB' 'Mapped: 177644 kB' 'Shmem: 12226816 kB' 'KReclaimable: 542916 kB' 'Slab: 963588 kB' 'SReclaimable: 542916 kB' 'SUnreclaim: 420672 kB' 'KernelStack: 16640 kB' 'PageTables: 8796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14241356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214520 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
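The sizing steps traced above reduce the requested 1048576 kB per node to 512 default-size hugepages (1048576 kB / 2048 kB = 512), which the test exports as NRHUGE=512 and HUGENODE=0,1 before invoking setup.sh. A minimal sketch of that arithmetic and of the standard per-node sysfs knob such a setup ultimately drives; the sysfs write is the usual kernel interface and is an assumption here, not something shown in this trace:

  # Sketch only: reproduce the NRHUGE/HUGENODE sizing seen in the xtrace.
  # Assumes the 2048 kB default hugepage size reported in the meminfo
  # snapshots below; the sysfs write requires root.
  size_kb=1048576                              # requested size per node (1 GiB, in kB)
  hugepagesize_kb=2048                         # default hugepage size
  nr_hugepages=$((size_kb / hugepagesize_kb))  # 1048576 / 2048 = 512 pages per node
  for node in 0 1; do
      echo "$nr_hugepages" \
          > "/sys/devices/system/node/node${node}/hugepages/hugepages-${hugepagesize_kb}kB/nr_hugepages"
  done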
00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69230956 kB' 'MemAvailable: 74153336 kB' 'Buffers: 9896 kB' 'Cached: 17432352 kB' 'SwapCached: 0 kB' 'Active: 13308940 kB' 'Inactive: 4702844 kB' 'Active(anon): 12796352 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572560 kB' 'Mapped: 177644 kB' 'Shmem: 12226816 kB' 'KReclaimable: 542916 kB' 'Slab: 963588 kB' 'SReclaimable: 542916 kB' 'SUnreclaim: 420672 kB' 'KernelStack: 16640 kB' 'PageTables: 8796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14241356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214520 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB'
00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:41.044 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue [trace condensed: the @31 IFS=': '/read and @32 test/continue records repeat verbatim for every field of the snapshot above, from MemTotal through HardwareCorrupted, until the requested field matches]
00:06:41.045 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:41.045 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:06:41.045 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:06:41.045 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
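The get_meminfo cycle above is fully visible in the xtrace: mem_f=/proc/meminfo, a mapfile read, the "Node +([0-9]) " prefix strip for per-node files, then an IFS=': ' read loop that continues field by field until the requested key matches and its value is echoed. A condensed reconstruction of that pattern, offered as a sketch rather than the verbatim SPDK helper:

  #!/usr/bin/env bash
  shopt -s extglob  # needed for the +([0-9]) pattern used below

  # Sketch of the get_meminfo pattern traced above (not the verbatim
  # setup/common.sh function): print the value of one meminfo field,
  # optionally for a single NUMA node.
  get_meminfo() {
      local get=$1 node=${2:-}
      local mem var val _ line
      local mem_f=/proc/meminfo
      # Per-node queries read the node-local meminfo when it exists.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Node files prefix every line with "Node <id> "; strip it.
      mem=("${mem[@]#Node +([0-9]) }")
      local IFS=': '
      for line in "${mem[@]}"; do
          read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done
      return 1
  }

  get_meminfo AnonHugePages   # prints 0 on this box, matching anon=0 above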
00:06:41.045 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:41.045 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:41.045 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:06:41.045 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:06:41.045 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:41.045 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:41.045 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:41.045 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:41.045 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:41.045 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:41.045 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:41.045 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:41.045 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69231928 kB' 'MemAvailable: 74154308 kB' 'Buffers: 9896 kB' 'Cached: 17432352 kB' 'SwapCached: 0 kB' 'Active: 13308708 kB' 'Inactive: 4702844 kB' 'Active(anon): 12796120 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572836 kB' 'Mapped: 177652 kB' 'Shmem: 12226816 kB' 'KReclaimable: 542916 kB' 'Slab: 963600 kB' 'SReclaimable: 542916 kB' 'SUnreclaim: 420684 kB' 'KernelStack: 16672 kB' 'PageTables: 9148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14242868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214584 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB'
00:06:41.045 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:41.046 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue [trace condensed: the @31 IFS=': '/read and @32 test/continue records repeat verbatim for every field of the snapshot above, from MemTotal through HugePages_Rsvd, until the requested field matches]
00:06:41.047 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:41.047 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:06:41.047 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:06:41.047 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
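With anon=0 and surp=0 established, the next query below fetches HugePages_Rsvd. For reference, under the documented /proc/meminfo semantics HugePages_Surp counts overcommit pages beyond the static pool and HugePages_Rsvd counts pages committed to mappings but not yet faulted in, so the snapshots above (Total 1024, Free 1024, Rsvd 0, Surp 0) describe a fully free, fully static 1024-page pool. A small sketch of that accounting, assuming those documented semantics:

  # Sketch: derive pool figures from the hugepage counters in /proc/meminfo.
  eval "$(awk '/^HugePages_(Total|Free|Rsvd|Surp):/ { gsub(/:/, "="); print $1 $2 }' /proc/meminfo)"
  echo "static pool:      $((HugePages_Total - HugePages_Surp)) pages"
  echo "usable right now: $((HugePages_Free - HugePages_Rsvd)) pages"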
00:06:41.047 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:41.047 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:41.047 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:06:41.047 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:06:41.047 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:41.047 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:41.047 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:41.047 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:41.047 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:41.047 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:41.047 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:41.047 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:41.047 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69230756 kB' 'MemAvailable: 74153136 kB' 'Buffers: 9896 kB' 'Cached: 17432356 kB' 'SwapCached: 0 kB' 'Active: 13308688 kB' 'Inactive: 4702844 kB' 'Active(anon): 12796100 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572744 kB' 'Mapped: 177644 kB' 'Shmem: 12226820 kB' 'KReclaimable: 542916 kB' 'Slab: 963600 kB' 'SReclaimable: 542916 kB' 'SUnreclaim: 420684 kB' 'KernelStack: 16592 kB' 'PageTables: 9144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14243020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214536 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB'
00:06:41.047 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:41.048 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue [trace condensed: the @31 IFS=': '/read and @32 test/continue records repeat verbatim for each field of the snapshot above, from MemTotal through ShmemHugePages]
00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:41.049
02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:41.049 
00:06:41.049 nr_hugepages=1024
00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:41.049 resv_hugepages=0
00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:41.049 surplus_hugepages=0
00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:41.049 anon_hugepages=0
00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69229580 kB' 'MemAvailable: 74151960 kB' 'Buffers: 9896 kB' 'Cached: 17432376 kB' 'SwapCached: 0 kB' 'Active: 13308900 kB' 'Inactive: 4702844 kB' 'Active(anon): 12796312 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572908 kB' 'Mapped: 177644 kB' 'Shmem: 12226840 kB' 'KReclaimable: 542916 kB' 'Slab: 963600 kB' 'SReclaimable: 542916 kB' 'SUnreclaim: 420684 kB' 'KernelStack: 16672 kB' 'PageTables: 8732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14243040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214552 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB'
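Taken together, the common.sh trace above is one pass of a small meminfo parser: pick the global /proc/meminfo or a per-node sysfs meminfo file, strip the "Node N " prefix the per-node files carry, then scan key/value pairs until the requested field matches. The following is a minimal Bash sketch reconstructed from this trace alone; names and line behavior are taken from the xtrace, and the real setup/common.sh may differ in detail.

shopt -s extglob                       # needed for the +([0-9]) pattern below
get_meminfo() {                        # sketch, reconstructed from the xtrace
    local get=$1 node=$2               # e.g. get=HugePages_Surp node=0
    local var val _
    local mem_f=/proc/meminfo mem
    # a per-node query reads that node's own meminfo instead of the global one
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
    while IFS=': ' read -r var val _; do
        # every non-matching key falls through to "continue" -- the bulk of this trace
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
}

The per-key compare/continue pattern is why a single get_meminfo call emits dozens of near-identical trace lines: xtrace logs one compare and one continue for each of the fifty-odd meminfo fields until the match.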
00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:41.049 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
(the compare / continue trace repeats identically for every field from MemFree through Unaccepted; none match HugePages_Total)
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
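The @110 check just traced is the global accounting identity: the kernel-reported HugePages_Total must equal the requested nr_hugepages plus surplus and reserved pages, which with the values echoed above reduces to 1024 == 1024 + 0 + 0. As a hypothetical standalone re-check using the get_meminfo sketch earlier:

nr_hugepages=1024 surp=0 resv=0                 # values echoed by the test above
total=$(get_meminfo HugePages_Total)            # 1024 on this machine
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2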
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069912 kB' 'MemFree: 33538116 kB' 'MemUsed: 14531796 kB' 'SwapCached: 0 kB' 'Active: 8313872 kB' 'Inactive: 3441204 kB' 'Active(anon): 7948680 kB' 'Inactive(anon): 0 kB' 'Active(file): 365192 kB' 'Inactive(file): 3441204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11420532 kB' 'Mapped: 124868 kB' 'AnonPages: 337340 kB' 'Shmem: 7614136 kB' 'KernelStack: 9736 kB' 'PageTables: 5616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 180068 kB' 'Slab: 405112 kB' 'SReclaimable: 180068 kB' 'SUnreclaim: 225044 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
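get_nodes (hugepages.sh@27-33 above) discovers the NUMA topology by globbing sysfs and records the expected per-node allocation; with 1024 pages over this box's two nodes the test plans 512 each. A sketch of that enumeration under the same sysfs layout (the 512 is the per-node figure from this trace, not a general rule):

shopt -s extglob                                # needed for the +([0-9]) glob
declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=512               # expected pages on this node
done
no_nodes=${#nodes_sys[@]}                       # 2 on this system
(( no_nodes > 0 )) || exit 1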
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:41.051 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
(the compare / continue trace repeats identically for every node0 field from MemFree through HugePages_Free; none match HugePages_Surp)
00:06:41.053 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:41.053 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:06:41.053 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:06:41.053 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:41.053 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:41.053 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:41.053 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:06:41.053 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:41.053 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:06:41.053 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:06:41.053 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:41.053 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:41.053 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:06:41.053 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:06:41.053 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:41.053 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:41.053 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:41.053 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:41.053 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44223616 kB' 'MemFree: 35691220 kB' 'MemUsed: 8532396 kB' 'SwapCached: 0 kB' 'Active: 4994804 kB' 'Inactive: 1261640 kB' 'Active(anon): 4847408 kB' 'Inactive(anon): 0 kB' 'Active(file): 147396 kB' 'Inactive(file): 1261640 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6021740 kB' 'Mapped: 52776 kB' 'AnonPages: 234840 kB' 'Shmem: 4612704 kB' 'KernelStack: 6776 kB' 'PageTables: 2876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 362848 kB' 'Slab: 558488 kB' 'SReclaimable: 362848 kB' 'SUnreclaim: 195640 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
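Both node snapshots report 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0', so the per-node loop at hugepages.sh@115-117 only folds zeros into the expected counts here. A condensed sketch of that accumulation, reusing the get_meminfo sketch from earlier (nodes_test holds the expected split taken from this trace):

declare -a nodes_test=([0]=512 [1]=512)         # expected per-node split
resv=0
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))              # reserved pages, 0 here
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # surplus, 0 here
done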
00:06:41.053 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:41.053 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
(the compare / continue trace repeats identically for the node1 fields MemFree through WritebackTmp; none match HugePages_Surp)
00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.054 
02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:41.054 node0=512 expecting 512 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:06:41.054 node1=512 expecting 512 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:06:41.054 00:06:41.054 real 0m6.267s 00:06:41.054 user 0m2.263s 00:06:41.054 sys 0m4.089s 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.054 02:12:31 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:41.054 ************************************ 00:06:41.054 END TEST per_node_1G_alloc 00:06:41.054 ************************************ 00:06:41.054 02:12:31 
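
The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" followed by "continue" above are bash xtrace from the get_meminfo helper in setup/common.sh: it scans every meminfo field and skips each one until the requested key turns up. A minimal sketch of that helper, reconstructed from the trace alone (the canonical SPDK source may differ in detail):

shopt -s extglob   # needed for the +([0-9]) pattern below

# get_meminfo <field> [node] -- approximation of setup/common.sh:get_meminfo
get_meminfo() {
    local get=$1      # field to look up, e.g. HugePages_Surp
    local node=$2     # optional NUMA node; empty means system-wide
    local var val
    local mem_f=/proc/meminfo mem

    # Per-node lookups read the node's own meminfo; with an empty $node
    # the path does not exist and /proc/meminfo is used instead.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")

    # Field-by-field scan; every non-matching key is skipped, which is
    # what produces the repetitive "continue" lines in the trace.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Surp 1   # prints 0 for node 1 in the run above
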
setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:06:41.054 02:12:31 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:06:41.054 02:12:31 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:41.054 02:12:31 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.054 02:12:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:41.054 ************************************ 00:06:41.054 START TEST even_2G_alloc 00:06:41.054 ************************************ 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:41.054 02:12:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # 
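
The even_2G_alloc prologue above works out how many default-size hugepages a 2 GB request amounts to and splits them evenly across the two NUMA nodes. A sketch of that arithmetic with the names from the hugepages.sh trace (the even division is inferred from the observed 512/512 split; the script itself may derive it differently):

size=2097152              # requested size in kB (2 GB)
default_hugepages=2048    # default hugepage size in kB on this machine
nr_hugepages=$((size / default_hugepages))   # 2097152 / 2048 = 1024

_no_nodes=2
per_node=$((nr_hugepages / _no_nodes))       # 512 pages per node
nodes_test=()
while ((_no_nodes > 0)); do
    nodes_test[_no_nodes - 1]=$per_node
    ((_no_nodes--))
done
printf 'node%s=%s\n' 0 "${nodes_test[0]}" 1 "${nodes_test[1]}"
# node0=512
# node1=512
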
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:06:45.245 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:45.245 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:45.245 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:45.245 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:45.245 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:45.245 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:45.245 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:45.245 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:45.245 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:45.245 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:45.245 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:45.245 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:45.245 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:06:45.245 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:45.245 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:45.245 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:45.245 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69230248 kB' 'MemAvailable: 74152628 kB' 'Buffers: 9896 kB' 
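
After NRHUGE=1024 is applied and setup.sh reports the devices already bound to vfio-pci, verify_nr_hugepages starts by checking whether transparent hugepages are disabled: the bracketed token in the kernel's THP mode string marks the active mode, "always [madvise] never" in this run. A sketch of that check (the sysfs path is the standard kernel location; the trace only shows the string comparison), reusing the get_meminfo helper sketched earlier:

# Only an active "[never]" mode lets the test skip the AnonHugePages
# sample; anything else means THP could inflate the hugepage figures.
thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)   # comes back 0 kB in this run
else
    anon=0
fi
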
'Cached: 17432544 kB' 'SwapCached: 0 kB' 'Active: 13308112 kB' 'Inactive: 4702844 kB' 'Active(anon): 12795524 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571780 kB' 'Mapped: 176700 kB' 'Shmem: 12227008 kB' 'KReclaimable: 542916 kB' 'Slab: 964136 kB' 'SReclaimable: 542916 kB' 'SUnreclaim: 421220 kB' 'KernelStack: 16384 kB' 'PageTables: 8408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14207320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214344 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB' 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.154 
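
The /proc/meminfo snapshot above is internally consistent on the hugepage side: 1024 pages of 2048 kB each account exactly for the reported Hugetlb figure. A quick way to cross-check that arithmetic against the live file:

# HugePages_Total * Hugepagesize = 1024 * 2048 kB = 2097152 kB,
# matching the "Hugetlb: 2097152 kB" field in the snapshot.
awk '/^HugePages_Total/ {n=$2} /^Hugepagesize/ {sz=$2}
     END {print n * sz " kB"}' /proc/meminfo
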
02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.154 02:12:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.154 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.155 02:12:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.155 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69230992 kB' 'MemAvailable: 74153372 kB' 'Buffers: 9896 kB' 'Cached: 17432548 kB' 'SwapCached: 0 kB' 'Active: 13308452 kB' 'Inactive: 4702844 kB' 'Active(anon): 12795864 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572176 kB' 'Mapped: 176684 kB' 'Shmem: 12227012 kB' 'KReclaimable: 542916 kB' 'Slab: 964136 kB' 'SReclaimable: 542916 kB' 'SUnreclaim: 421220 kB' 'KernelStack: 16432 kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14207336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214312 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB' 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.156 
02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.156 02:12:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.156 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.156 02:12:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... 00:06:47.156-157 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32: per-key scan of /proc/meminfo continues; every key from PageTables through HugePages_Rsvd is read and skipped (continue), none matching HugePages_Surp ...]
00:06:47.157 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:47.157 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:06:47.157 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:06:47.157 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:06:47.157 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:47.157 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:47.157 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:06:47.157 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:47.157 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:47.157 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:47.157 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:47.157 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:47.157 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:47.157 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:47.157 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:47.157 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:47.157 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69230768 kB' 'MemAvailable: 74153148 kB' 'Buffers: 9896 kB' 'Cached: 17432552 kB' 'SwapCached: 0 kB' 'Active: 13308148 kB' 'Inactive: 4702844 kB' 'Active(anon): 12795560 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571856 kB' 'Mapped: 176684 kB' 'Shmem: 12227016 kB' 'KReclaimable: 542916 kB' 'Slab: 964136 kB' 'SReclaimable: 542916 kB' 'SUnreclaim: 421220 kB' 'KernelStack: 16432 kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14207360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214312 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB'
[... 00:06:47.157-159 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32: per-key scan; every key from MemTotal through HugePages_Free is read and skipped (continue), none matching HugePages_Rsvd ...]
00:06:47.159 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:47.159 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:06:47.159 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:06:47.159 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:47.159 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:06:47.159 nr_hugepages=1024
00:06:47.159 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:47.159 resv_hugepages=0
00:06:47.159 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:47.159 surplus_hugepages=0
00:06:47.159 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:47.159 anon_hugepages=0
00:06:47.159 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:47.159 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
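[Editor's note: the trace above shows setup/common.sh's get_meminfo helper resolving HugePages_Surp and HugePages_Rsvd to 0 by scanning a meminfo file key by key. Below is a minimal sketch of that lookup pattern, reconstructed from the traced @17-@33 steps; it is a simplification under assumptions, not the verbatim SPDK source, and get_meminfo_sketch is a hypothetical name.]

    get_meminfo_sketch() {   # hypothetical name; the traced helper is get_meminfo
        shopt -s extglob
        local get=$1 node=$2 var val _ mem_f=/proc/meminfo mem
        # with a node argument, prefer the per-NUMA-node meminfo file
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N "
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long key-by-key scan seen in the trace
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Total -> 1024; get_meminfo_sketch HugePages_Surp 0 -> 0

[The trailing fields of a value line ('kB', etc.) are absorbed by the throwaway _ variable, which is why the trace reads three fields per line.]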
00:06:47.159 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:47.159 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:47.159 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:06:47.159 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:47.159 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:47.159 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:47.159 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:47.159 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:47.159 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:47.159 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:47.159 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69230768 kB' 'MemAvailable: 74153148 kB' 'Buffers: 9896 kB' 'Cached: 17432608 kB' 'SwapCached: 0 kB' 'Active: 13308188 kB' 'Inactive: 4702844 kB' 'Active(anon): 12795600 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571776 kB' 'Mapped: 176684 kB' 'Shmem: 12227072 kB' 'KReclaimable: 542916 kB' 'Slab: 964136 kB' 'SReclaimable: 542916 kB' 'SUnreclaim: 421220 kB' 'KernelStack: 16416 kB' 'PageTables: 8496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14207380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214312 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB'
00:06:47.159 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:47.159 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[... 00:06:47.159-161 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32: per-key scan; every key from MemTotal through Unaccepted is read and skipped (continue), none matching HugePages_Total ...]
00:06:47.161 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:47.161 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:06:47.161 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:06:47.161 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:47.161 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:47.161 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:06:47.161 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:47.161 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:06:47.161 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:47.161 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:06:47.161 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:06:47.161 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:47.161 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:47.161 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:47.161 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:47.161 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:47.161 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:06:47.161 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:47.161 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:47.161 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:47.161 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:47.161 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:47.161 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:47.161 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:47.161 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:47.161 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069912 kB' 'MemFree: 33536488 kB' 'MemUsed: 14533424 kB' 'SwapCached: 0 kB' 'Active: 8311904 kB' 'Inactive: 3441204 kB' 'Active(anon): 7946712 kB' 'Inactive(anon): 0 kB' 'Active(file): 365192 kB' 'Inactive(file): 3441204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11420664 kB' 'Mapped: 124020 kB' 'AnonPages: 335560 kB' 'Shmem: 7614268 kB' 'KernelStack: 9608 kB' 'PageTables: 5380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 180068 kB' 'Slab: 405028 kB' 'SReclaimable: 180068 kB' 'SUnreclaim: 224960 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
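[Editor's note: the node-0 snapshot above closes the accounting that even_2G_alloc has traced so far: with nr_hugepages=1024, surp=0 and resv=0, hugepages.sh@110's check (( 1024 == nr_hugepages + surp + resv )) passes, and with no_nodes=2 the even split is 512 pages per node, matching node0's 'HugePages_Total: 512'. A stand-alone sketch of that arithmetic, with variable names mirroring the trace and values taken from the dumps above:]

    nr_hugepages=1024 surp=0 resv=0 no_nodes=2
    (( 1024 == nr_hugepages + surp + resv )) && echo accounting-ok
    echo $(( nr_hugepages / no_nodes ))   # -> 512 pages per node
    echo $(( nr_hugepages * 2048 ))       # -> 2097152 kB (2 GiB) at Hugepagesize 2048 kB, matching 'Hugetlb: 2097152 kB'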
[... 00:06:47.161-423 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32: per-key scan of /sys/devices/system/node/node0/meminfo in progress; every key from MemTotal through FilePmdMapped is read and skipped (continue), none matching HugePages_Surp ...]
00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44223616 kB' 'MemFree: 35693952 kB' 'MemUsed: 8529664 kB' 'SwapCached: 0 kB' 'Active: 4996528 kB' 'Inactive: 1261640 kB' 'Active(anon): 4849132 kB' 
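For reference, the get_meminfo trace above reduces to a small helper: given a field name and an optional NUMA node, it reads /proc/meminfo (or the per-node meminfo file when it exists), strips the "Node N " prefix, splits each line on ': ', and echoes the requested value. This is a minimal sketch reconstructed from the xtrace, not the verbatim setup/common.sh source.

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern below

    # get_meminfo FIELD [NODE]: echo FIELD's value from /proc/meminfo, or from
    # /sys/devices/system/node/node$NODE/meminfo when NODE is given and present.
    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local mem var val _ line
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node <N> "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue   # the loop the trace walks field by field
            echo "$val"
            return 0
        done
    }

    get_meminfo HugePages_Surp 1   # prints 0 for node1 in the run above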
00:06:47.424 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44223616 kB' 'MemFree: 35693952 kB' 'MemUsed: 8529664 kB' 'SwapCached: 0 kB' 'Active: 4996528 kB' 'Inactive: 1261640 kB' 'Active(anon): 4849132 kB' 'Inactive(anon): 0 kB' 'Active(file): 147396 kB' 'Inactive(file): 1261640 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6021864 kB' 'Mapped: 52664 kB' 'AnonPages: 236488 kB' 'Shmem: 4612828 kB' 'KernelStack: 6792 kB' 'PageTables: 3116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 362848 kB' 'Slab: 559108 kB' 'SReclaimable: 362848 kB' 'SUnreclaim: 196260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... field-by-field xtrace elided: each node1 meminfo field is compared against HugePages_Surp and skipped with continue until the match below ...]
00:06:47.425 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:47.425 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:06:47.425 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:06:47.425 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:47.425 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:47.425 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:47.425 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:47.425 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:06:47.425 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:47.425 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:47.425 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:47.425 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
node1=512 expecting 512
00:06:47.425 02:12:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:06:47.425
00:06:47.425 real 0m6.242s
00:06:47.425 user 0m2.130s
00:06:47.425 sys 0m4.187s
00:06:47.425 02:12:37 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:47.425 02:12:37 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:06:47.425 ************************************
00:06:47.425 END TEST even_2G_alloc
00:06:47.426 ************************************
00:06:47.426 02:12:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
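even_2G_alloc passed: both NUMA nodes report the 512 pages the test expected. The verification loop the trace just walked boils down to the sketch below. Variable names follow the trace; the resv adjustment at hugepages.sh@116 is analogous, both adjustments were 0 in this run, and the real hugepages.sh carries more bookkeeping, so treat this as a hedged reconstruction using the get_meminfo helper sketched earlier.

    # Per-node check: expected page counts, adjusted by each node's surplus.
    nodes_test=([0]=512 [1]=512)              # expected pages per node
    for node in "${!nodes_test[@]}"; do
        surp=$(get_meminfo HugePages_Surp "$node")
        (( nodes_test[node] += surp ))        # 0 on both nodes in this run
        echo "node$node=${nodes_test[node]} expecting 512"
    done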
00:06:47.426 02:12:37 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:06:47.426 02:12:37 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:47.426 02:12:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:47.426 02:12:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:47.426 ************************************
00:06:47.426 START TEST odd_alloc
00:06:47.426 ************************************
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:47.426 02:12:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
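The sizing arithmetic in that trace: 2098176 kB at the default 2048 kB hugepage size is 1024.5 pages, rounded up to an odd 1025, and the per-node loop then splits them 512/513 (node1 is assigned first, the extra page landing on node0, matching nodes_test[_no_nodes - 1]=512 then =513 above). A small sketch of that split, reconstructed from the trace rather than copied from hugepages.sh:

    # Distribute an odd page count over the nodes the way the trace suggests:
    # each iteration gives node (_no_nodes - 1) its integer share of what is left.
    size_kb=2098176
    default_hugepages=2048                                    # kB per 2M huge page
    nr_hugepages=$(( (size_kb + default_hugepages - 1) / default_hugepages ))  # 1025
    _nr_hugepages=$nr_hugepages _no_nodes=2
    nodes_test=()
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))   # ": 513" in the trace
        : $(( --_no_nodes ))                                  # ": 1" in the trace
    done
    echo "${nodes_test[@]}"   # 513 512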
00:06:51.616 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:06:51.616 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:06:51.616 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:06:51.616 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:06:51.616 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:06:51.616 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:06:51.616 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:06:51.616 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:06:51.616 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:06:51.616 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:06:51.616 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:06:51.616 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:06:51.616 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:06:51.616 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:06:51.616 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:06:51.616 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:06:51.616 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:06:53.522 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:06:53.522 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:06:53.522 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:53.522 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:53.522 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:53.522 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:53.522 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:53.522 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:53.522 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:53.522 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:53.522 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:06:53.522 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:53.522 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:53.522 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:53.522 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:53.523 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:53.523 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:53.523 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
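Before counting pages, verify_nr_hugepages checks that transparent hugepages are not forced off: the string being matched above, "always [madvise] never", has the format of /sys/kernel/mm/transparent_hugepage/enabled, where the bracketed word is the active mode (here madvise). A sketch of that guard, assuming the value does come from that usual sysfs knob:

    # If "[never]" is the active THP mode, anonymous huge pages cannot
    # inflate the AnonHugePages counter this test is about to sample.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        echo "THP not disabled (mode string: $thp); AnonHugePages will be sampled"
    fi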
00:06:53.523 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69236308 kB' 'MemAvailable: 74158656 kB' 'Buffers: 9896 kB' 'Cached: 17432744 kB' 'SwapCached: 0 kB' 'Active: 13305608 kB' 'Inactive: 4702844 kB' 'Active(anon): 12793020 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569036 kB' 'Mapped: 176756 kB' 'Shmem: 12227208 kB' 'KReclaimable: 542884 kB' 'Slab: 964152 kB' 'SReclaimable: 542884 kB' 'SUnreclaim: 421268 kB' 'KernelStack: 16464 kB' 'PageTables: 8572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 14208156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214280 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB'
[... field-by-field xtrace elided: each /proc/meminfo field is compared against AnonHugePages and skipped with continue until the match below ...]
00:06:53.524 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:53.524 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:06:53.524 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:53.524 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:06:53.524 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:53.524 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:53.524 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:06:53.524 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:53.524 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:53.524 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:53.524 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:53.524 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
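With no node argument, the /sys/devices/system/node/node/meminfo probe above fails by construction and get_meminfo falls back to /proc/meminfo, so these two queries are system-wide: AnonHugePages (0 kB here, so THP is not skewing the counters) and then HugePages_Surp, the pages the kernel allocated beyond nr_hugepages under overcommit. Reusing the helper sketched earlier:

    # System-wide samples matching the trace: anon THP usage and surplus pages.
    anon=$(get_meminfo AnonHugePages)    # 0 here; nonzero would skew expectations
    surp=$(get_meminfo HugePages_Surp)   # pages beyond nr_hugepages; 0 here
    echo "anon=${anon:-0} surp=${surp:-0}"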
00:06:53.524 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:53.524 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:53.524 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69235460 kB' 'MemAvailable: 74157808 kB' 'Buffers: 9896 kB' 'Cached: 17432752 kB' 'SwapCached: 0 kB' 'Active: 13306056 kB' 'Inactive: 4702844 kB' 'Active(anon): 12793468 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569576 kB' 'Mapped: 176756 kB' 'Shmem: 12227216 kB' 'KReclaimable: 542884 kB' 'Slab: 964152 kB' 'SReclaimable: 542884 kB' 'SUnreclaim: 421268 kB' 'KernelStack: 16448 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 14208172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214264 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB'
[... field-by-field xtrace elided: each /proc/meminfo field is compared against HugePages_Surp; the log is truncated mid-loop ...]
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:53.524 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:53.524 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:53.524 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:53.524 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:53.524 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:53.524 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:53.524 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:53.524 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:53.524 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69235412 kB' 'MemAvailable: 74157760 kB' 'Buffers: 9896 kB' 'Cached: 17432760 kB' 'SwapCached: 0 kB' 'Active: 13306084 kB' 'Inactive: 4702844 kB' 'Active(anon): 12793496 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569568 kB' 'Mapped: 176756 kB' 'Shmem: 12227224 kB' 'KReclaimable: 542884 kB' 'Slab: 964152 kB' 'SReclaimable: 542884 kB' 'SUnreclaim: 421268 kB' 'KernelStack: 16448 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 14208192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214264 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB' 00:06:53.525 02:12:43 
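What the trace above is exercising: get_meminfo in setup/common.sh scans a meminfo file key by key and prints the value of the one requested field (here HugePages_Surp, which came back 0). A minimal sketch of that helper, inferred from the trace rather than copied from the repository, so treat the body as an approximation:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern seen in the trace
    get_meminfo() {    # usage: get_meminfo <field> [numa-node]
        local get=$1 node=$2 var val line
        local mem_f=/proc/meminfo mem
        # With a node argument, read the per-node sysfs copy instead
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines carry a "Node N" prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the long continue run in the log
            echo "$val"                        # value only, e.g. "0" for HugePages_Surp
            return 0
        done
        return 1
    }

    surp=$(get_meminfo HugePages_Surp)   # system-wide; pass a node number for sysfs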
00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69235412 kB' 'MemAvailable: 74157760 kB' 'Buffers: 9896 kB' 'Cached: 17432760 kB' 'SwapCached: 0 kB' 'Active: 13306084 kB' 'Inactive: 4702844 kB' 'Active(anon): 12793496 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569568 kB' 'Mapped: 176756 kB' 'Shmem: 12227224 kB' 'KReclaimable: 542884 kB' 'Slab: 964152 kB' 'SReclaimable: 542884 kB' 'SUnreclaim: 421268 kB' 'KernelStack: 16448 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 14208192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214264 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB'
00:06:53.525 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # (key scan: MemTotal through HugePages_Free tested against HugePages_Rsvd and skipped via "continue"; the repeated IFS=': ' / read -r / continue trace is elided)
00:06:53.526 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:53.526 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:53.526 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:53.526 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:53.526 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:06:53.526 nr_hugepages=1025 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:53.526 resv_hugepages=0 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:53.526 surplus_hugepages=0 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:53.526 anon_hugepages=0
00:06:53.526 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:53.526 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
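The four values just echoed are the whole point of the odd_alloc check: the kernel must report exactly the odd page count that was requested, with nothing reserved or surplus. The two (( ... )) guards above, restated with the traced values (same arithmetic as hugepages.sh@107 and @109 in the log):

    nr_hugepages=1025 surp=0 resv=0
    (( 1025 == nr_hugepages + surp + resv ))   # 1025 == 1025 + 0 + 0: holds
    (( 1025 == nr_hugepages )) && echo "odd allocation intact"   # so HugePages_Total must also read 1025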
00:06:53.526 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:53.526 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:53.526 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:53.526 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:53.526 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:53.526 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:53.526 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:53.526 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:53.527 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:53.527 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:53.527 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:53.527 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:53.527 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69236452 kB' 'MemAvailable: 74158800 kB' 'Buffers: 9896 kB' 'Cached: 17432784 kB' 'SwapCached: 0 kB' 'Active: 13306040 kB' 'Inactive: 4702844 kB' 'Active(anon): 12793452 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569468 kB' 'Mapped: 176756 kB' 'Shmem: 12227248 kB' 'KReclaimable: 542884 kB' 'Slab: 964152 kB' 'SReclaimable: 542884 kB' 'SUnreclaim: 421268 kB' 'KernelStack: 16432 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 14208212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214264 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB'
00:06:53.527 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # (key scan: MemTotal through Unaccepted tested against HugePages_Total and skipped via "continue"; the repeated IFS=': ' / read -r / continue trace is elided)
00:06:53.528 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:53.528 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:06:53.528 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:53.528 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:06:53.528 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:53.528 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27-33 -- [get_nodes walks /sys/devices/system/node/node+([0-9]): nodes_sys[0]=512, nodes_sys[1]=513, no_nodes=2]
00:06:53.528 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115-117 -- # (( nodes_test[node] += resv )); get_meminfo HugePages_Surp 0
00:06:53.528 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17-29 -- [get=HugePages_Surp, node=0, mem_f=/sys/devices/system/node/node0/meminfo; mapfile -t mem, "Node 0 " prefix stripped]
00:06:53.528 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069912 kB' 'MemFree: 33526288 kB' 'MemUsed: 14543624 kB' 'SwapCached: 0 kB' 'Active: 8312696 kB' 'Inactive: 3441204 kB' 'Active(anon): 7947504 kB' 'Inactive(anon): 0 kB' 'Active(file): 365192 kB' 'Inactive(file): 3441204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11420752 kB' 'Mapped: 124020 kB' 'AnonPages: 336372 kB' 'Shmem: 7614356 kB' 'KernelStack: 9720 kB' 'PageTables: 5620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 180052 kB' 'Slab: 405520 kB' 'SReclaimable: 180052 kB' 'SUnreclaim: 225468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:06:53.528 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- [node-0 field scan toward HugePages_Surp: MemTotal through Inactive(anon) fail the match and continue]
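
The IFS=': ' / read -r var val _ / continue churn in this trace is the whole of get_meminfo's field scan: setup/common.sh walks the meminfo lines one at a time until the requested key matches, then echoes its value. A minimal sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source (the function name below is illustrative):

#!/usr/bin/env bash
# Sketch: scan "Key: value ..." lines and print the value of the requested key.
# Mirrors the trace: split on ': ', skip non-matching keys, echo the match.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"        # e.g. 1025 for HugePages_Total in this run
            return 0
        fi
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch HugePages_Total
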
00:06:53.528 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- [node-0 field scan continues: Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted and HugePages_Total fail the match and continue]
00:06:53.529 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- [HugePages_Free fails the match and continues]
00:06:53.529 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:53.529 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:06:53.529 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:53.529 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:53.529 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115-117 -- # (( nodes_test[node] += resv )); get_meminfo HugePages_Surp 1
00:06:53.529 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17-29 -- [get=HugePages_Surp, node=1, mem_f=/sys/devices/system/node/node1/meminfo; mapfile -t mem, "Node 1 " prefix stripped]
00:06:53.529 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44223616 kB' 'MemFree: 35709156 kB' 'MemUsed: 8514460 kB' 'SwapCached: 0 kB' 'Active: 4993344 kB' 'Inactive: 1261640 kB' 'Active(anon): 4845948 kB' 'Inactive(anon): 0 kB' 'Active(file): 147396 kB' 'Inactive(file): 1261640 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6021928 kB' 'Mapped: 52736 kB' 'AnonPages: 233096 kB' 'Shmem: 4612892 kB' 'KernelStack: 6712 kB' 'PageTables: 2888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 362832 kB' 'Slab: 558632 kB' 'SReclaimable: 362832 kB' 'SUnreclaim: 195800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:06:53.529 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- [node-1 field scan toward HugePages_Surp: MemTotal fails the match and continues]
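
The node-1 lookup repeats the source-selection step the trace shows for node 0: per-node counters come from sysfs rather than /proc, and the "Node N " prefix is stripped before parsing so both sources look alike. A sketch of that step as the trace shows it; extglob is assumed to be enabled, since the +([0-9]) pattern requires it:

#!/usr/bin/env bash
# Sketch: prefer the per-node meminfo file when a node is given, then strip
# the "Node N " prefix so lines parse like /proc/meminfo's "Key: value" form.
shopt -s extglob
node=1
mem_f=/proc/meminfo
[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")   # "Node 1 MemTotal: ..." -> "MemTotal: ..."
printf '%s\n' "${mem[@]:0:3}"      # first few fields, now prefix-free
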
00:06:53.529 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- [node-1 field scan continues: MemFree through HugePages_Free fail the match and continue]
00:06:53.789 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:53.789 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
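
With both surplus lookups returning 0, odd_alloc's bookkeeping reduces to the arithmetic below: the global total must equal requested plus surplus plus reserved pages (1025 == 1025 + 0 + 0), and the per-node counts are compared as sets keyed through numerically indexed arrays, so a 512/513 vs 513/512 swap between nodes still passes. A sketch with this run's values; which array holds the expected versus the observed counts is inferred from the trace:

#!/usr/bin/env bash
# Sketch: odd_alloc's final checks with the values seen in this run.
nr_hugepages=1025 surp=0 resv=0
(( 1025 == nr_hugepages + surp + resv )) || echo 'total mismatch'

nodes_sys=(512 513)    # per-node HugePages_Total read back from sysfs
nodes_test=(513 512)   # per-node counts the test asked for (+ resv + surp)
sorted_t=() sorted_s=()
for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1   # numeric keys turn the array into a sorted set
    sorted_s[nodes_sys[node]]=1
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
done
# indexed-array keys enumerate in ascending order, so "512 513" == "512 513"
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo 'odd_alloc OK'
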
00:06:53.789 02:12:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:53.789 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:53.790 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126-127 -- # sorted_t[nodes_test[node]]=1; sorted_s[nodes_sys[node]]=1
00:06:53.790 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
node0=512 expecting 513
00:06:53.790 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126-127 -- # sorted_t[nodes_test[node]]=1; sorted_s[nodes_sys[node]]=1
00:06:53.790 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
node1=513 expecting 512
00:06:53.790 02:12:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:06:53.790 real 0m6.241s
00:06:53.790 user 0m2.218s
00:06:53.790 sys 0m4.103s
00:06:53.790 02:12:43 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:53.790 02:12:43 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:06:53.790 ************************************
00:06:53.790 END TEST odd_alloc
00:06:53.790 ************************************
00:06:53.790 02:12:43 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:06:53.790 02:12:43 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:06:53.790 02:12:43 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:53.790 02:12:43 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:53.790 02:12:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:53.790 ************************************
00:06:53.790 START TEST custom_alloc
00:06:53.790 ************************************
00:06:53.790 02:12:44 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:06:53.790 02:12:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167-172 -- [custom_alloc locals initialized: IFS=',', node, nodes_hp=(), nr_hugepages=0, _nr_hugepages=0]
00:06:53.790 02:12:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:06:53.790 02:12:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:06:53.790 02:12:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:53.790 02:12:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:06:53.790 02:12:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:06:53.790 02:12:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62-84 -- [no user nodes given: _nr_hugepages=512, _no_nodes=2; the default split assigns nodes_test[1]=256, then nodes_test[0]=256]
00:06:53.790 02:12:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:06:53.790 02:12:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:06:53.790 02:12:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:06:53.790 02:12:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49-57 -- # local size=2097152; nr_hugepages=1024
00:06:53.790 02:12:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58-75 -- [second get_test_nr_hugepages_per_node: _nr_hugepages=1024, _no_nodes=2, distribution driven by the populated nodes_hp indices]
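
The sizing arithmetic is visible in the trace: get_test_nr_hugepages 1048576 yields nr_hugepages=512 and 2097152 yields 1024, consistent with the 'Hugepagesize: 2048 kB' reported later, i.e. a plain kB-to-page-count division. A sketch of that inferred conversion; the division below is an assumption fitted to these numbers, not the verbatim SPDK code:

#!/usr/bin/env bash
# Sketch: convert a requested size in kB into a hugepage count using the
# system's default hugepage size (2048 kB on this rig).
default_hugepages=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)
for size in 1048576 2097152; do
    (( size >= default_hugepages )) || continue
    echo "size=${size} kB -> nr_hugepages=$(( size / default_hugepages ))"
done
# prints 512 and 1024, the values seen in the trace
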
00:06:53.790 02:12:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76-78 -- # nodes_test[_no_nodes]=512; return 0
00:06:53.790 02:12:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:06:53.790 02:12:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181-183 -- [for each node: HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}"); (( _nr_hugepages += nodes_hp[node] ))]
00:06:53.790 02:12:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:06:53.790 02:12:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62-78 -- [nodes_test[0]=512, nodes_test[1]=1024]
00:06:53.790 02:12:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:06:53.790 02:12:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:06:53.790 02:12:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:53.790 02:12:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:06:57.979 [setup.sh reports 0000:1a:00.0 (8086 0a54) and the sixteen 0000:00:04.0-7 / 0000:80:04.0-7 (8086 2021) devices already using the vfio-pci driver]
00:06:59.888 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:06:59.888 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:06:59.888 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89-94 -- [locals: node, sorted_t, sorted_s, surp, resv, anon]
00:06:59.888 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:59.888 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:59.888 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-29 -- [get=AnonHugePages, node='', mem_f=/proc/meminfo; mapfile -t mem]
00:06:59.889 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 68192316 kB' 'MemAvailable: 73114664 kB' 'Buffers: 9896 kB' 'Cached: 17432932 kB' 'SwapCached: 0 kB' 'Active: 13307604 kB' 'Inactive: 4702844 kB' 'Active(anon): 12795016 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571196 kB' 'Mapped: 176916 kB' 'Shmem: 12227396 kB' 'KReclaimable: 542884 kB' 'Slab: 964008 kB' 'SReclaimable: 542884 kB' 'SUnreclaim: 421124 kB' 'KernelStack: 16400 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 14211396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214392 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB'
00:06:59.889 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- [system-wide field scan toward AnonHugePages: MemTotal through Inactive fail the match and continue]
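
The HUGENODE string handed to setup.sh above is assembled from the nodes_hp array, one nodes_hp[N]=count entry per node, joined with commas through the function-local IFS. A sketch reconstructed from the trace; how setup.sh ultimately consumes the string is not shown here:

#!/usr/bin/env bash
# Sketch: build HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' as the trace shows.
IFS=,                       # makes "${HUGENODE[*]}" join its entries with commas
nodes_hp=([0]=512 [1]=1024)
HUGENODE=() _nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( _nr_hugepages += nodes_hp[node] ))
done
echo "HUGENODE=${HUGENODE[*]}"      # nodes_hp[0]=512,nodes_hp[1]=1024
echo "nr_hugepages=$_nr_hugepages"  # 1536, matching HugePages_Total in the dump above
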
00:06:59.889 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- [system-wide field scan continues: Active(anon) through CommitLimit fail the match and continue]
continue 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:59.890 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 
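For orientation, the @17-@33 trace above is one call of the get_meminfo helper in setup/common.sh: it picks /proc/meminfo (or a per-NUMA-node meminfo file when a node argument is given), reads it into an array, and scans key by key until the requested field matches, echoing its value. Below is a minimal bash sketch reconstructed from the trace; the flow mirrors the trace lines, but the exact upstream source may differ line for line.

shopt -s extglob   # needed for the "Node <n> " prefix strip below

get_meminfo() {
	local get=$1 node=$2
	local var val line
	local mem_f mem
	mem_f=/proc/meminfo
	# With an empty $node this probes /sys/devices/system/node/node/meminfo,
	# which does not exist, so the global file is kept -- exactly what the
	# "[[ -e ... ]]" and "[[ -n '' ]]" checks in the trace show.
	[[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
	mapfile -t mem < "$mem_f"
	# Per-node meminfo rows carry a "Node <n> " prefix; strip it (extglob pattern).
	mem=("${mem[@]#Node +([0-9]) }")
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] && { echo "$val"; return 0; }
	done
	return 1   # assumption: missing key is a failure; the trace only shows the match path
}

get_meminfo AnonHugePages   # on this box prints 0, which hugepages.sh stores as anon=0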
02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:59.890
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:59.890
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:59.890
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:59.890
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:59.890
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:59.890
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:59.890
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:59.890
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:59.890
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:59.890
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.890
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.890
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 68193252 kB' 'MemAvailable: 73115600 kB' 'Buffers: 9896 kB' 'Cached: 17432932 kB' 'SwapCached: 0 kB' 'Active: 13308016 kB' 'Inactive: 4702844 kB' 'Active(anon): 12795428 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571568 kB' 'Mapped: 176916 kB' 'Shmem: 12227396 kB' 'KReclaimable: 542884 kB' 'Slab: 964012 kB' 'SReclaimable: 542884 kB' 'SUnreclaim: 421128 kB' 'KernelStack: 16656 kB' 'PageTables: 8876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 14211176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214408 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB' 00:06:59.890
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace trimmed: every key from MemTotal through HugePages_Rsvd is read and compared against \H\u\g\e\P\a\g\e\s\_\S\u\r\p, then skipped via continue] 00:06:59.890-00:06:59.892
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:59.892
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:59.892
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:59.892
02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:59.892
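The snapshot that printf dumps above is internally consistent on the hugepages side: HugePages_Total: 1536 together with Hugepagesize: 2048 kB implies Hugetlb: 3145728 kB. A quick arithmetic check (all values copied from the snapshot, nothing assumed):

total=1536    # HugePages_Total
page_kb=2048  # Hugepagesize in kB
echo $(( total * page_kb ))          # 3145728 kB -- matches the Hugetlb field
echo $(( total * page_kb / 1024 ))   # 3072 MiB pinned in the hugepage pool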
02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:59.892
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:59.892
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:59.892
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:59.892
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:59.892
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:59.892
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:59.892
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:59.892
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:59.892
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:59.892
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.892
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.892
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 68192660 kB' 'MemAvailable: 73115008 kB' 'Buffers: 9896 kB' 'Cached: 17432952 kB' 'SwapCached: 0 kB' 'Active: 13307204 kB' 'Inactive: 4702844 kB' 'Active(anon): 12794616 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 570656 kB' 'Mapped: 176760 kB' 'Shmem: 12227416 kB' 'KReclaimable: 542884 kB' 'Slab: 963968 kB' 'SReclaimable: 542884 kB' 'SUnreclaim: 421084 kB' 'KernelStack: 16432 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 14211436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214424 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB' 00:06:59.892
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace trimmed: every key from MemTotal through HugePages_Free is read and compared against \H\u\g\e\P\a\g\e\s\_\R\s\v\d, then skipped via continue] 00:06:59.892-00:06:59.894
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:59.894
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:59.894
02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:59.894
02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:59.894
02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:06:59.894
nr_hugepages=1536
02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:59.894
resv_hugepages=0
02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:59.894
surplus_hugepages=0
02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:59.894
anon_hugepages=0
02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:06:59.894
02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:06:59.894
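The @97-@109 sequence above is the accounting step of setup/hugepages.sh: it collects the three counters via get_meminfo, echoes them into the log, then asserts the pool adds up. A hedged sketch of that step (variable names follow the trace; the exact upstream wording may differ):

nr_hugepages=1536                     # pool size this custom_alloc test configured
anon=$(get_meminfo AnonHugePages)     # -> 0: no transparent hugepages skewing the count
surp=$(get_meminfo HugePages_Surp)    # -> 0: nothing allocated beyond the configured pool
resv=$(get_meminfo HugePages_Rsvd)    # -> 0: no pages reserved but not yet faulted in
echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"
# Both assertions seen in the trace: the pool must add up, and it must match the request.
(( 1536 == nr_hugepages + surp + resv ))
(( 1536 == nr_hugepages ))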
'Committed_AS: 14209968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214376 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB'
00:06:59.894 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:59.894 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... repeated "continue" / IFS=': ' / read -r trace for the remaining /proc/meminfo keys elided ...]
00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
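The block just traced is common.sh's get_meminfo helper doing its job: snapshot a meminfo file, strip any per-node prefix, then walk the key/value pairs with IFS=': ' until the requested key matches, and echo its value. A minimal standalone sketch of the same pattern in bash (the function name and layout are illustrative, not the exact common.sh source):

    # get_meminfo_sketch KEY [NODE] - print one meminfo value, system-wide or per NUMA node
    get_meminfo_sketch() {
        local get=$1 node=$2 line var val _
        local mem_f=/proc/meminfo
        # per-node stats live under sysfs; every line there carries a "Node <n> " prefix
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node "$node" }              # drop the per-node prefix when present
            IFS=': ' read -r var val _ <<< "$line"  # split "Key: value kB" as in the trace
            if [[ $var == "$get" ]]; then           # e.g. HugePages_Total
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }

Run against the dump above, get_meminfo_sketch HugePages_Total would print 1536, matching the echo 1536 / return 0 pair in the trace.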
00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069912 kB' 'MemFree: 33531672 kB' 'MemUsed: 14538240 kB' 'SwapCached: 0 kB' 'Active: 8312028 kB' 'Inactive: 3441204 kB' 'Active(anon): 7946836 kB' 'Inactive(anon): 0 kB' 'Active(file): 365192 kB' 'Inactive(file): 3441204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11420892 kB' 'Mapped: 123944 kB' 'AnonPages: 335440 kB' 'Shmem: 7614496 kB' 'KernelStack: 9832 kB' 'PageTables: 5880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 180052 kB' 'Slab: 404980 kB' 'SReclaimable: 180052 kB' 'SUnreclaim: 224928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
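The node0 dump above (and the node1 dump that follows) is scanned the same way, this time for HugePages_Surp. The per-node split itself is also visible directly in sysfs without any parsing; a quick hypothetical spot-check using the standard kernel paths, with the values this run should produce:

    # confirm how the 1536-page pool is distributed across the two NUMA nodes
    for n in /sys/devices/system/node/node[0-9]*; do
        echo "${n##*/}: $(cat "$n"/hugepages/hugepages-2048kB/nr_hugepages) x 2048 kB pages"
    done
    # expected here: node0 reports 512 and node1 reports 1024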
00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:59.896 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... repeated "continue" / IFS=': ' / read -r trace for the remaining node0 meminfo keys elided ...]
00:06:59.897 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:59.897 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:59.897 02:12:50
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:59.897 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:59.897 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:59.897 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:59.897 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:06:59.897 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:59.897 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:06:59.897 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:59.897 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:59.897 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:59.897 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:06:59.897 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:06:59.897 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:59.897 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:59.897 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.897 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.897 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44223616 kB' 'MemFree: 34659216 kB' 'MemUsed: 9564400 kB' 'SwapCached: 0 kB' 'Active: 4995828 kB' 'Inactive: 1261640 kB' 'Active(anon): 4848432 kB' 'Inactive(anon): 0 kB' 'Active(file): 147396 kB' 'Inactive(file): 1261640 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6022024 kB' 'Mapped: 52816 kB' 'AnonPages: 235760 kB' 'Shmem: 4612988 kB' 'KernelStack: 6744 kB' 'PageTables: 2980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 362832 kB' 'Slab: 558992 kB' 'SReclaimable: 362832 kB' 'SUnreclaim: 196160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:59.897 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:59.897 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:59.897 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.897 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.897 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:59.897 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:59.897 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:59.897 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:59.897 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:59.897 02:12:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... repeated "continue" / IFS=': ' / read -r trace for the remaining node1 meminfo keys elided ...]
00:06:59.898 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:59.898 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:59.898 02:12:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:59.898 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 )) 00:06:59.898 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:59.898 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:59.898 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:59.898 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:59.898 node0=512 expecting 512 00:06:59.898 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:59.898 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:59.898 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:59.898 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:06:59.898 node1=1024 expecting 1024 00:06:59.898 02:12:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:06:59.898 00:06:59.898 real 0m6.223s 00:06:59.898 user 0m2.121s 00:06:59.898 sys 0m4.176s 00:06:59.898 02:12:50 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.899 02:12:50 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:59.899 ************************************ 00:06:59.899 END TEST custom_alloc 00:06:59.899 ************************************ 00:06:59.899 02:12:50 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:06:59.899 02:12:50 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:06:59.899 02:12:50 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.899 02:12:50 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.899 02:12:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:00.158 ************************************ 00:07:00.158 START TEST no_shrink_alloc 00:07:00.158 ************************************ 00:07:00.158 02:12:50 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:07:00.158 02:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:07:00.158 02:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:07:00.158 02:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:07:00.158 02:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:07:00.158 02:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:07:00.158 02:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:07:00.158 02:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:00.158 02:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:00.158 02:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:07:00.158 02:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:07:00.158 02:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:00.158 02:12:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:00.158 02:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:07:00.158 02:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:00.158 02:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:00.158 02:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:07:00.158 02:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:00.158 02:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:07:00.158 02:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:07:00.158 02:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:07:00.158 02:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:00.158 02:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:07:04.347 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:07:04.347 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:07:04.348 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:07:04.348 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:07:04.348 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:07:04.348 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:07:04.348 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:07:04.348 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:07:04.348 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:07:04.348 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:07:04.348 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:07:04.348 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:07:04.348 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:07:04.348 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:07:04.348 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:07:04.348 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:07:04.348 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 
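The no_shrink_alloc prologue above turns a size budget into a page count and pins it to the requested node list: 2097152 divided by the 2048 kB default hugepage size yields the nr_hugepages=1024 seen in the trace, all assigned to node 0. A sketch of that arithmetic (kB units are an assumption consistent with the meminfo dumps; names are simplified from the traced script):

    # size / default hugepage size -> page count, assigned to each requested node
    size=2097152                # requested pool in kB (2 GiB)
    default_hugepages=2048      # Hugepagesize from meminfo, in kB
    user_nodes=(0)              # node list passed to get_test_nr_hugepages
    nodes_test=()
    (( nr_hugepages = size / default_hugepages ))   # -> 1024
    for id in "${user_nodes[@]}"; do nodes_test[id]=$nr_hugepages; done
    echo "nr_hugepages=$nr_hugepages on node(s) ${user_nodes[*]}"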
00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69229224 kB' 'MemAvailable: 74151572 kB' 'Buffers: 9896 kB' 'Cached: 17433120 kB' 'SwapCached: 0 kB' 'Active: 13308476 kB' 'Inactive: 4702844 kB' 'Active(anon): 12795888 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571612 kB' 'Mapped: 176912 kB' 'Shmem: 12227584 kB' 'KReclaimable: 542884 kB' 'Slab: 963972 kB' 'SReclaimable: 542884 kB' 'SUnreclaim: 421088 kB' 'KernelStack: 16448 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14210180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214424 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB' 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.256 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.257 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69229076 kB' 'MemAvailable: 74151424 kB' 'Buffers: 9896 kB' 'Cached: 17433136 kB' 'SwapCached: 0 kB' 'Active: 13307784 kB' 'Inactive: 4702844 kB' 'Active(anon): 12795196 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 570932 kB' 'Mapped: 176904 kB' 
'Shmem: 12227600 kB' 'KReclaimable: 542884 kB' 'Slab: 964024 kB' 'SReclaimable: 542884 kB' 'SUnreclaim: 421140 kB' 'KernelStack: 16448 kB' 'PageTables: 8492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14210200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214392 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB' 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.258 02:12:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
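This HugePages_Surp scan, the AnonHugePages pass above, and the HugePages_Rsvd pass below are the same get_meminfo helper walking /proc/meminfo line by line. A minimal sketch of the parser, reconstructed from the setup/common.sh xtrace — the trace shows the real steps (mapfile, the extglob "Node N " strip, the IFS=': ' read loop), but the exact upstream source may differ:

    # Sketch reconstructed from the setup/common.sh xtrace above.
    shopt -s extglob                       # needed for the +([0-9]) pattern
    get_meminfo() {
      local get=$1 node=${2:-}
      local var val _ line
      local mem_f=/proc/meminfo mem
      # With no node given, the probed path is the literal
      # /sys/devices/system/node/node/meminfo (node is empty), so the -e
      # test fails and the system-wide /proc/meminfo is kept -- exactly
      # what common.sh@23 shows above.
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")     # strip per-node "Node N " prefix
      for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # "HugePages_Surp: 0" etc.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
    }

Called as get_meminfo HugePages_Surp (system-wide) or with a node number to read the per-node sysfs meminfo; on a match it echoes the value and returns 0, the "echo 0 / return 0" seen at common.sh@33, and verify_nr_hugepages stores the results in anon, surp, and resv at hugepages.sh@97/@99/@100.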
00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.258 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.259 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69228824 kB' 'MemAvailable: 74151172 kB' 'Buffers: 9896 kB' 'Cached: 17433140 kB' 'SwapCached: 0 kB' 'Active: 13308180 kB' 'Inactive: 4702844 kB' 'Active(anon): 12795592 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571348 kB' 'Mapped: 176904 kB' 'Shmem: 12227604 kB' 'KReclaimable: 542884 kB' 'Slab: 964024 kB' 'SReclaimable: 542884 kB' 'SUnreclaim: 421140 kB' 'KernelStack: 16464 kB' 'PageTables: 8544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 
'Committed_AS: 14210220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214392 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB' 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:06.260 02:12:56 
00:07:06.260 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 [xtrace condensed: get_meminfo scans /proc/meminfo with IFS=': ' read -r var val _, hitting 'continue' for every key that is not HugePages_Rsvd -- Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total]
00:07:06.262 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:06.262 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:07:06.262 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:06.262 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:07:06.262 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:07:06.262 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:07:06.262 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:07:06.262 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:07:06.262 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:07:06.262 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:07:06.262 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:06.262 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:07:06.262 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:07:06.262 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 [xtrace condensed: get=HugePages_Total, node='', mem_f=/proc/meminfo, mapfile -t mem, strip 'Node N ' prefixes, IFS=': ']
00:07:06.262 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69229272 kB' 'MemAvailable: 74151620 kB' 'Buffers: 9896 kB' 'Cached: 17433180 kB' 'SwapCached: 0 kB' 'Active: 13307828 kB' 'Inactive: 4702844 kB' 'Active(anon): 12795240 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 570932 kB' 'Mapped: 176904 kB' 'Shmem: 12227644 kB' 'KReclaimable: 542884 kB' 'Slab: 964016 kB' 'SReclaimable: 542884 kB' 'SUnreclaim: 421132 kB' 'KernelStack: 16448 kB' 'PageTables: 8492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14210244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214392 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB'
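The four summary values echoed above (nr_hugepages, resv_hugepages, surplus_hugepages, anon_hugepages) each come from a single get_meminfo lookup against a snapshot like the one just printed: the HugePages_Rsvd lookup is the one traced above, and HugePages_Total, HugePages_Surp and AnonHugePages follow the same pattern. For a quick spot check outside the harness, the same fields can be read straight from the kernel:

  grep -E 'HugePages_(Total|Free|Rsvd|Surp)|AnonHugePages|Hugepagesize' /proc/meminfo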
00:07:06.262 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 [xtrace condensed: same key scan, now against HugePages_Total; every key from MemTotal through Unaccepted hits 'continue']
00:07:06.264 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:07:06.264 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:07:06.264 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
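Stripped of the xtrace noise, the helper being traced here is a plain shell scan over 'key: value' lines. A minimal standalone sketch of the same pattern (simplified to the global /proc/meminfo only; the real common.sh also handles the per-node files shown next):

  get_meminfo() {
      # Usage: get_meminfo HugePages_Total  -> prints e.g. 1024
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # Every non-matching key falls through to the next read; in the
          # xtrace above each such iteration shows up as a 'continue' line.
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }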
00:07:06.264 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:06.264 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:07:06.264 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27-33 [xtrace condensed: for node in /sys/devices/system/node/node+([0-9]) sets nodes_sys[0]=1024 and nodes_sys[1]=0; no_nodes=2; (( no_nodes > 0 ))]
00:07:06.264 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:07:06.264 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:07:06.264 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:07:06.264 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 [xtrace condensed: get=HugePages_Surp, node=0, mem_f=/sys/devices/system/node/node0/meminfo, mapfile -t mem, strip 'Node 0 ' prefixes, IFS=': ']
00:07:06.264 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069912 kB' 'MemFree: 32496116 kB' 'MemUsed: 15573796 kB' 'SwapCached: 0 kB' 'Active: 8311416 kB' 'Inactive: 3441204 kB' 'Active(anon): 7946224 kB' 'Inactive(anon): 0 kB' 'Active(file): 365192 kB' 'Inactive(file): 3441204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11420888 kB' 'Mapped: 124024 kB' 'AnonPages: 334812 kB' 'Shmem: 7614492 kB' 'KernelStack: 9656 kB' 'PageTables: 5384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 180052 kB' 'Slab: 405292 kB' 'SReclaimable: 180052 kB' 'SUnreclaim: 225240 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
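The per-node lookup above switches mem_f to /sys/devices/system/node/node0/meminfo, whose lines carry a 'Node 0 ' prefix (e.g. 'Node 0 HugePages_Surp: 0'). common.sh@28-29 removes that prefix with an extglob parameter expansion so the same IFS=': ' scan works unchanged; isolated, that step is:

  shopt -s extglob
  # Load the node file and drop the leading 'Node <N> ' from every element,
  # so each line parses like a plain /proc/meminfo line.
  mapfile -t mem < /sys/devices/system/node/node0/meminfo
  mem=("${mem[@]#Node +([0-9]) }")
  printf '%s\n' "${mem[@]}" | grep HugePages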
00:07:06.264 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 [xtrace condensed: key scan against HugePages_Surp over the node0 stats -- MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active/Inactive(anon), Active/Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted all hit 'continue']
00:07:06.266 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:06.266 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:07:06.266 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:06.266 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:07:06.266 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:06.266 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:07:06.266 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:07:06.266 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:07:06.266 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:07:06.266 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:07:06.266 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:07:06.266 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:07:06.266 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:07:06.266 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:07:06.266 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:07:06.266 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:07:06.266 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:07:06.266 02:12:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:07:10.471 [condensed: 17 PCI devices report 'Already using the vfio-pci driver' -- 0000:00:04.0-7 and 0000:80:04.0-7 (8086 2021), plus 0000:1a:00.0 (8086 0a54)]
00:07:12.383 INFO: Requested 512 hugepages but 1024 already allocated on node0
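The INFO line and the suite name (no_shrink_alloc) indicate that setup.sh, invoked here with NRHUGE=512 and CLEAR_HUGE=no, leaves a larger existing pool alone rather than shrinking it. The script's internals are not shown in this excerpt, so the following is only an illustrative sketch of grow-only behavior against the real sysfs knob, not SPDK's actual code:

  # Illustrative only: request NRHUGE 2 MiB pages on node0, never shrinking.
  NRHUGE=512
  nr=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
  current=$(cat "$nr")
  if (( current >= NRHUGE )); then
      echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node0"
  else
      echo "$NRHUGE" > "$nr"   # needs root
  fi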
00:07:12.383 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:07:12.383 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89-94 [xtrace condensed: local node sorted_t sorted_s surp resv anon]
00:07:12.383 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:07:12.383 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:07:12.383 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 [xtrace condensed: get=AnonHugePages, node='', mem_f=/proc/meminfo, mapfile -t mem, IFS=': ']
00:07:12.383 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69260348 kB' 'MemAvailable: 74182688 kB' 'Buffers: 9896 kB' 'Cached: 17433292 kB' 'SwapCached: 0 kB' 'Active: 13309616 kB' 'Inactive: 4702844 kB' 'Active(anon): 12797028 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572596 kB' 'Mapped: 176968 kB' 'Shmem: 12227756 kB' 'KReclaimable: 542876 kB' 'Slab: 962840 kB' 'SReclaimable: 542876 kB' 'SUnreclaim: 419964 kB' 'KernelStack: 16560 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14213720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214568 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB'
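hugepages.sh@96 gates the AnonHugePages lookup on transparent hugepages not being disabled: /sys/kernel/mm/transparent_hugepage/enabled reports the active mode in brackets ('always [madvise] never' on this host), and the glob test only passes when the bracketed mode is not [never]. Standalone, with the same glob:

  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. 'always [madvise] never'
  if [[ $thp != *"[never]"* ]]; then
      # THP is enabled in some mode, so anonymous huge pages may exist.
      get_meminfo AnonHugePages   # helper sketched earlier
  fi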
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB' 00:07:12.383 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.383 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.383 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.383 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.383 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.383 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.383 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.383 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.383 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.383 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.383 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.383 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.383 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.383 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.383 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.383 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.383 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.383 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.383 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.384 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.384 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.384 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.384 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.384 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.384 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.384 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.384 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.384 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.384 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.384 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.384 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.384 02:13:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _
[trace condensed: setup/common.sh@31-32 repeat the same IFS=': ' / read -r / continue steps for each remaining /proc/meminfo key from the snapshot above until the requested key is reached]
00:07:12.385 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.385 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:12.385 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:12.385 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:12.385 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:12.385 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:12.385 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:12.385 02:13:02
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:12.385 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:12.385 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:12.385 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:12.385 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:12.385 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:12.385 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:12.385 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.385 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.385 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69264748 kB' 'MemAvailable: 74187088 kB' 'Buffers: 9896 kB' 'Cached: 17433292 kB' 'SwapCached: 0 kB' 'Active: 13310496 kB' 'Inactive: 4702844 kB' 'Active(anon): 12797908 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573368 kB' 'Mapped: 176968 kB' 'Shmem: 12227756 kB' 'KReclaimable: 542876 kB' 'Slab: 962984 kB' 'SReclaimable: 542876 kB' 'SUnreclaim: 420108 kB' 'KernelStack: 16720 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14213736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214616 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB' 00:07:12.385 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.385 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.385 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.385 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.385 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.385 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.385 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.385 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.385 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.385 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.385 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.385 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
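The trace above is the get_meminfo helper from setup/common.sh scanning /proc/meminfo one key at a time. A minimal sketch of that helper, reconstructed from the xtrace (names and @-line references follow the trace; the actual SPDK script may differ in detail):

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern used below

get_meminfo() {
    # Print the value of one /proc/meminfo field, optionally scoped to a
    # NUMA node, e.g.: get_meminfo HugePages_Total 0
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _

    # Per-node counters live under /sys; with $node empty (as in this trace)
    # the path does not exist and the global file is used instead.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Node-scoped files prefix each line with "Node <n> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        # "HugePages_Surp:    0" splits on ': ' into key and value; skip
        # every line whose key does not match, as the @32 "continue" traces show.
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

verify_nr_hugepages then captures anon=$(get_meminfo AnonHugePages), surp=$(get_meminfo HugePages_Surp) and resv=$(get_meminfo HugePages_Rsvd), each 0 in this run, as the trace resumes below.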
[trace condensed: the same per-key scan repeats for each remaining /proc/meminfo key until HugePages_Surp is reached]
00:07:12.386 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.386 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:12.386 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:12.386 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:12.386 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:12.386 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:12.386 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:12.386 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:12.386 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:12.386 02:13:02 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:12.386 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:12.386 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:12.386 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:12.387 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:12.387 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.387 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.387 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69264880 kB' 'MemAvailable: 74187188 kB' 'Buffers: 9896 kB' 'Cached: 17433312 kB' 'SwapCached: 0 kB' 'Active: 13309748 kB' 'Inactive: 4702844 kB' 'Active(anon): 12797160 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572648 kB' 'Mapped: 176960 kB' 'Shmem: 12227776 kB' 'KReclaimable: 542844 kB' 'Slab: 962888 kB' 'SReclaimable: 542844 kB' 'SUnreclaim: 420044 kB' 'KernelStack: 16544 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14213392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214520 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB' 00:07:12.387 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.387 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.387 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.387 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.387 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.387 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.387 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.387 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.387 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.387 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.387 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.387 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.387 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.387 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.387 
02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.387 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[trace condensed: the same per-key scan repeats for each remaining /proc/meminfo key until HugePages_Rsvd is reached]
00:07:12.388 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.388 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:12.388 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:12.388 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:12.388 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:12.388 nr_hugepages=1024 00:07:12.388 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:12.388 resv_hugepages=0 00:07:12.388 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:12.388 surplus_hugepages=0 00:07:12.388 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:12.388 anon_hugepages=0 00:07:12.388 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:12.388 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 --
# local mem_f mem 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69264180 kB' 'MemAvailable: 74186488 kB' 'Buffers: 9896 kB' 'Cached: 17433312 kB' 'SwapCached: 0 kB' 'Active: 13309540 kB' 'Inactive: 4702844 kB' 'Active(anon): 12796952 kB' 'Inactive(anon): 0 kB' 'Active(file): 512588 kB' 'Inactive(file): 4702844 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572440 kB' 'Mapped: 176960 kB' 'Shmem: 12227776 kB' 'KReclaimable: 542844 kB' 'Slab: 962888 kB' 'SReclaimable: 542844 kB' 'SUnreclaim: 420044 kB' 'KernelStack: 16624 kB' 'PageTables: 9184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14213784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214584 kB' 'VmallocChunk: 0 kB' 'Percpu: 74560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1514944 kB' 'DirectMap2M: 24375296 kB' 'DirectMap1G: 75497472 kB' 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.389 02:13:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.389 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.390 02:13:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069912 kB' 'MemFree: 32501092 kB' 'MemUsed: 15568820 kB' 'SwapCached: 0 kB' 'Active: 8312304 kB' 'Inactive: 3441204 kB' 'Active(anon): 7947112 kB' 'Inactive(anon): 0 kB' 'Active(file): 365192 kB' 'Inactive(file): 3441204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11420960 kB' 'Mapped: 124020 kB' 'AnonPages: 335756 kB' 'Shmem: 7614564 kB' 'KernelStack: 9704 kB' 'PageTables: 5524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 180012 kB' 'Slab: 404740 kB' 'SReclaimable: 180012 kB' 'SUnreclaim: 224728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.390 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 
02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.391 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.392 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.392 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.392 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.392 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.392 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.392 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.392 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.392 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:12.392 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.392 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.392 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.392 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:12.392 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:12.392 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:12.392 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:12.392 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:12.392 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:12.392 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:12.392 node0=1024 expecting 1024 00:07:12.392 02:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:12.392 00:07:12.392 real 0m12.212s 00:07:12.392 user 0m4.208s 00:07:12.392 sys 0m8.157s 00:07:12.392 
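Every pass above is the same get_meminfo pattern from setup/common.sh: slurp /proc/meminfo (or a per-node /sys/devices/system/node/nodeN/meminfo, whose lines carry a "Node N " prefix) into an array, strip that prefix, then split each line on ': ' and skip fields until the requested one matches. A minimal standalone sketch of that pattern, assuming bash with extglob; the wrapper function itself is illustrative, only the mechanics mirror the trace:

shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-} line var val _ mem
    local mem_f=/proc/meminfo
    # Per-node meminfo lives in sysfs and prefixes every line with "Node N ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the per-node prefix (extglob)
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Skip every field until the requested one, then print its value.
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done
    return 1
}

# e.g. get_meminfo HugePages_Total  -> 1024 in the run above
#      get_meminfo HugePages_Surp 0 -> 0 for node 0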
02:13:02 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.392 02:13:02 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:12.392 ************************************ 00:07:12.392 END TEST no_shrink_alloc 00:07:12.392 ************************************ 00:07:12.392 02:13:02 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:07:12.392 02:13:02 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:07:12.392 02:13:02 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:07:12.392 02:13:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:12.392 02:13:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:12.392 02:13:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:12.392 02:13:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:12.392 02:13:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:12.392 02:13:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:12.392 02:13:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:12.392 02:13:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:12.392 02:13:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:12.392 02:13:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:12.392 02:13:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:07:12.392 02:13:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:07:12.392 00:07:12.392 real 0m47.435s 00:07:12.392 user 0m15.438s 00:07:12.392 sys 0m29.497s 00:07:12.392 02:13:02 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.392 02:13:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:12.392 ************************************ 00:07:12.392 END TEST hugepages 00:07:12.392 ************************************ 00:07:12.392 02:13:02 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:07:12.392 02:13:02 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:07:12.392 02:13:02 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:12.392 02:13:02 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.392 02:13:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:12.392 ************************************ 00:07:12.392 START TEST driver 00:07:12.392 ************************************ 00:07:12.392 02:13:02 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:07:12.392 * Looking for test storage... 
00:07:12.651 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:07:12.651 02:13:02 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:07:12.651 02:13:02 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:12.651 02:13:02 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:07:20.770 02:13:10 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:07:20.770 02:13:10 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:20.770 02:13:10 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.770 02:13:10 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:07:20.770 ************************************ 00:07:20.770 START TEST guess_driver 00:07:20.770 ************************************ 00:07:20.770 02:13:10 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:07:20.770 02:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:07:20.770 02:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:07:20.770 02:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:07:20.770 02:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:07:20.770 02:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups 00:07:20.770 02:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:07:20.770 02:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:07:20.770 02:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:07:20.770 02:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:07:20.770 02:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 238 > 0 )) 00:07:20.770 02:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:07:20.770 02:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:07:20.770 02:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:07:20.770 02:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:07:20.770 02:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:07:20.770 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:07:20.770 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:07:20.770 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:07:20.770 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:07:20.770 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:07:20.770 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:07:20.770 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:07:20.770 02:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:07:20.770 02:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:07:20.770 02:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:07:20.770 02:13:10 setup.sh.driver.guess_driver --
setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:07:20.770 02:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:07:20.770 Looking for driver=vfio-pci 00:07:20.770 02:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:20.770 02:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:07:20.770 02:13:10 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:07:20.770 02:13:10 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:07:24.054 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:24.054 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:24.054 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:24.054 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:24.054 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:24.054 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:24.054 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:24.054 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:24.054 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:24.054 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:24.054 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:24.054 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:24.054 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:24.054 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:24.054 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:24.054 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:24.054 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:24.054 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:24.054 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:24.054 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:24.055 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:24.055 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:24.055 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:24.055 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:24.055 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:24.055 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:24.055 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:24.314 02:13:14 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:24.314 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:24.314 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:24.314 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:24.314 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:24.314 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:24.314 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:24.314 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:24.314 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:24.314 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:24.314 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:24.314 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:24.314 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:24.314 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:24.314 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:24.314 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:24.314 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:24.314 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:24.314 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:24.314 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:24.314 02:13:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:27.603 02:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:27.603 02:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:27.603 02:13:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:29.513 02:13:19 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:07:29.513 02:13:19 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:07:29.513 02:13:19 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:29.513 02:13:19 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:07:37.760 00:07:37.760 real 0m16.825s 00:07:37.760 user 0m4.269s 00:07:37.760 sys 0m8.716s 00:07:37.760 02:13:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.760 02:13:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:07:37.760 ************************************ 00:07:37.760 END TEST guess_driver 00:07:37.760 ************************************ 00:07:37.760 02:13:27 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:07:37.760 00:07:37.760 real 0m24.485s 00:07:37.760 user 0m6.517s 00:07:37.760 sys 0m13.314s 00:07:37.760 02:13:27 
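The guess_driver pass above reduces to one decision in pick_driver/vfio: choose vfio-pci when the host has populated IOMMU groups (238 here) or unsafe no-IOMMU mode switched on, and modprobe --show-depends can resolve vfio_pci to a chain of real .ko modules. A condensed sketch of that check, assuming bash with nullglob for the group count; the wrapper name is illustrative and the uio_pci_generic fallback of the full script is omitted:

pick_vfio_driver() {
    local unsafe_vfio=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    shopt -s nullglob                  # empty glob -> empty array, not a literal
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    # Need active IOMMU groups, or explicitly enabled unsafe no-IOMMU mode.
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == [Yy]* ]]; then
        # modprobe --show-depends prints the insmod chain only if it resolves.
        if [[ $(modprobe --show-depends vfio_pci 2>/dev/null) == *.ko* ]]; then
            echo vfio-pci
            return 0
        fi
    fi
    echo 'No valid driver found'
    return 1
}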
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.760 02:13:27 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:07:37.760 ************************************ 00:07:37.760 END TEST driver 00:07:37.760 ************************************ 00:07:37.760 02:13:27 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:07:37.760 02:13:27 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:07:37.760 02:13:27 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:37.760 02:13:27 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.760 02:13:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:37.760 ************************************ 00:07:37.760 START TEST devices 00:07:37.760 ************************************ 00:07:37.760 02:13:27 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:07:37.760 * Looking for test storage... 00:07:37.760 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:07:37.760 02:13:27 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:07:37.760 02:13:27 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:07:37.760 02:13:27 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:37.760 02:13:27 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:07:44.338 02:13:33 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:07:44.338 02:13:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:07:44.338 02:13:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:07:44.338 02:13:33 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:07:44.338 02:13:33 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:44.338 02:13:33 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:07:44.338 02:13:33 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:07:44.338 02:13:33 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:44.338 02:13:33 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:44.338 02:13:33 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:07:44.338 02:13:33 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:07:44.338 02:13:33 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:07:44.338 02:13:33 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:07:44.338 02:13:33 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:07:44.338 02:13:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:44.338 02:13:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:07:44.338 02:13:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:07:44.338 02:13:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:1a:00.0 00:07:44.338 02:13:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\1\a\:\0\0\.\0* ]] 00:07:44.338 02:13:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:07:44.338 02:13:33 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:07:44.338 02:13:33 
setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:07:44.338 No valid GPT data, bailing 00:07:44.338 02:13:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:44.338 02:13:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:44.338 02:13:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:44.338 02:13:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:07:44.338 02:13:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:44.338 02:13:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:44.338 02:13:33 setup.sh.devices -- setup/common.sh@80 -- # echo 4000787030016 00:07:44.338 02:13:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 4000787030016 >= min_disk_size )) 00:07:44.338 02:13:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:44.338 02:13:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:1a:00.0 00:07:44.338 02:13:33 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:07:44.338 02:13:33 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:07:44.338 02:13:33 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:07:44.338 02:13:33 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:44.338 02:13:33 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.338 02:13:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:44.338 ************************************ 00:07:44.338 START TEST nvme_mount 00:07:44.338 ************************************ 00:07:44.338 02:13:33 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:07:44.338 02:13:33 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:07:44.338 02:13:33 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:07:44.338 02:13:33 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:44.338 02:13:33 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:44.338 02:13:33 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:07:44.338 02:13:33 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:44.338 02:13:33 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:07:44.338 02:13:33 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:07:44.338 02:13:33 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:44.338 02:13:33 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:07:44.338 02:13:33 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:07:44.338 02:13:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:07:44.338 02:13:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:44.338 02:13:33 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:44.338 02:13:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:44.338 02:13:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= 
part_no )) 00:07:44.338 02:13:33 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:07:44.338 02:13:33 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:44.338 02:13:33 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:07:44.597 Creating new GPT entries in memory. 00:07:44.597 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:44.597 other utilities. 00:07:44.597 02:13:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:07:44.597 02:13:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:44.597 02:13:34 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:44.597 02:13:34 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:44.597 02:13:34 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:07:45.979 Creating new GPT entries in memory. 00:07:45.979 The operation has completed successfully. 00:07:45.979 02:13:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:45.979 02:13:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:45.979 02:13:35 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1829740 00:07:45.979 02:13:36 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:45.979 02:13:36 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size= 00:07:45.979 02:13:36 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:45.979 02:13:36 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:07:45.979 02:13:36 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:07:45.979 02:13:36 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:45.979 02:13:36 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:1a:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:45.979 02:13:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:07:45.979 02:13:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:07:45.979 02:13:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:45.979 02:13:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:45.979 02:13:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:45.979 02:13:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:45.979 02:13:36 setup.sh.devices.nvme_mount -- setup/devices.sh@56 
-- # : 00:07:45.979 02:13:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:45.979 02:13:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.979 02:13:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:07:45.979 02:13:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:45.979 02:13:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:45.979 02:13:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:07:50.176 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:50.176 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:07:50.176 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:50.176 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.176 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:50.176 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.176 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:50.176 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.176 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:50.176 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.176 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:50.176 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.176 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:50.176 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.176 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:50.176 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.177 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:50.177 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.177 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:50.177 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.177 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:50.177 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.177 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:50.177 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.177 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == 
\0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:50.177 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.177 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:50.177 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.177 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:50.177 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.177 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:50.177 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.177 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:50.177 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:50.177 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:50.177 02:13:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:52.083 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:52.083 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:07:52.083 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:07:52.083 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:52.083 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 
mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:1a:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:52.083 02:13:42 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:07:56.279 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:56.279 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:07:56.279 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:56.279 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.279 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:56.279 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.279 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:56.279 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.280 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:56.280 02:13:46 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.280 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:56.280 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.280 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:56.280 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.280 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:56.280 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.280 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:56.280 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.280 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:56.280 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.280 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:56.280 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.280 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:56.280 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.280 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:56.280 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.280 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:56.280 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.280 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:56.280 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.280 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:56.280 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.280 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:56.280 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:56.280 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:56.280 02:13:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:58.187 02:13:48 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:58.187 02:13:48 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:07:58.187 02:13:48 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:58.187 02:13:48 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:58.187 02:13:48 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:58.187 02:13:48 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:58.187 02:13:48 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:1a:00.0 data@nvme0n1 '' '' 00:07:58.187 02:13:48 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:07:58.187 02:13:48 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:07:58.187 02:13:48 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:07:58.187 02:13:48 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:07:58.187 02:13:48 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:58.187 02:13:48 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:58.187 02:13:48 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:58.187 02:13:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:58.187 02:13:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:07:58.187 02:13:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:58.187 02:13:48 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:58.187 02:13:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:02.382 02:13:52 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:02.382 02:13:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.285 02:13:54 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:04.285 02:13:54 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:08:04.285 02:13:54 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:08:04.285 02:13:54 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:08:04.285 02:13:54 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:08:04.285 02:13:54 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:04.285 02:13:54 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:04.285 02:13:54 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:04.285 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:08:04.285 00:08:04.285 real 0m20.444s 00:08:04.285 user 0m5.874s 00:08:04.285 sys 0m12.423s 00:08:04.285 02:13:54 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.285 02:13:54 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 
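The nvme_mount test wrapping up here exercises one repeated cycle: make an ext4 filesystem on the target block device, mount it under test/setup/nvme_mount, drop a test_nvme marker file, verify the mount and marker through setup.sh config, then unmount and wipe the signatures. A minimal standalone sketch of that cycle follows; the device path, mount point, and marker name are illustrative assumptions, not the exact values hard-coded in setup/devices.sh.

    #!/usr/bin/env bash
    # Sketch of the mkfs/mount/verify/cleanup cycle exercised by nvme_mount.
    # DEV, MNT and MARKER are assumed placeholders -- substitute real values.
    set -euo pipefail

    DEV=/dev/nvme0n1p1        # block device under test (assumption)
    MNT=/tmp/nvme_mount_test  # scratch mount point (assumption)
    MARKER=$MNT/test_nvme     # marker file proving the mount is live

    mkfs.ext4 -qF "$DEV"      # quiet, forced ext4 format, as the log shows
    mkdir -p "$MNT"
    mount "$DEV" "$MNT"
    : > "$MARKER"             # create the marker on the new filesystem

    mountpoint -q "$MNT"      # verification mirrors the checks traced above
    [[ -e $MARKER ]]          # marker must be visible through the mount

    rm "$MARKER"              # cleanup mirrors cleanup_nvme in the log
    umount "$MNT"
    wipefs --all "$DEV"       # erase the ext4 signature again

Run as root against a disposable device; the wipefs step is destructive by design, which is why the harness dedicates a test disk to it.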
00:08:04.285 ************************************ 00:08:04.285 END TEST nvme_mount 00:08:04.285 ************************************ 00:08:04.285 02:13:54 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:08:04.285 02:13:54 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:08:04.285 02:13:54 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:04.285 02:13:54 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.285 02:13:54 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:08:04.285 ************************************ 00:08:04.285 START TEST dm_mount 00:08:04.285 ************************************ 00:08:04.285 02:13:54 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:08:04.285 02:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:08:04.285 02:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:08:04.285 02:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:08:04.285 02:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:08:04.285 02:13:54 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:08:04.285 02:13:54 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:08:04.285 02:13:54 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:08:04.285 02:13:54 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:08:04.285 02:13:54 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:08:04.285 02:13:54 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:08:04.285 02:13:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:08:04.285 02:13:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:04.285 02:13:54 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:08:04.285 02:13:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:08:04.285 02:13:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:04.285 02:13:54 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:08:04.285 02:13:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:08:04.285 02:13:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:04.285 02:13:54 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:08:04.285 02:13:54 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:08:04.285 02:13:54 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:08:05.219 Creating new GPT entries in memory. 00:08:05.219 GPT data structures destroyed! You may now partition the disk using fdisk or 00:08:05.219 other utilities. 00:08:05.219 02:13:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:08:05.219 02:13:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:05.219 02:13:55 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:08:05.219 02:13:55 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:08:05.219 02:13:55 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:08:06.156 Creating new GPT entries in memory. 00:08:06.156 The operation has completed successfully. 00:08:06.156 02:13:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:08:06.156 02:13:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:06.156 02:13:56 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:08:06.156 02:13:56 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:08:06.156 02:13:56 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:08:07.536 The operation has completed successfully. 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1835093 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount size= 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/common.sh@72 -- 
# mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:1a:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:08:07.537 02:13:57 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:08:11.736 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:11.736 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:08:11.736 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:08:11.736 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:11.736 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:11.736 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:11.736 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:11.736 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:11.736 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:11.736 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:11.736 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:11.736 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:11.736 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:11.736 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:08:11.736 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:11.736 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:11.736 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:11.736 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:11.736 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:11.736 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:11.737 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:11.737 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:11.737 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:11.737 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:11.737 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:11.737 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:11.737 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:11.737 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:11.737 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:11.737 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:11.737 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:11.737 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:11.737 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:11.737 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:11.737 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:11.737 02:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:13.645 02:14:03 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:13.645 02:14:03 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount ]] 00:08:13.645 02:14:03 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:08:13.645 02:14:03 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:08:13.645 02:14:03 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:08:13.645 02:14:03 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:08:13.645 02:14:03 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:1a:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:08:13.645 02:14:03 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local 
dev=0000:1a:00.0 00:08:13.645 02:14:03 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:08:13.645 02:14:03 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:08:13.645 02:14:03 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:08:13.645 02:14:03 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:08:13.645 02:14:03 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:08:13.645 02:14:03 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:08:13.645 02:14:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:13.645 02:14:03 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:08:13.645 02:14:03 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:08:13.645 02:14:03 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:08:13.645 02:14:03 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:08:17.919 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:17.919 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:08:17.919 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:08:17.919 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:17.919 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:17.919 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:17.919 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:17.919 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:17.919 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:17.919 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:17.919 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:17.919 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:17.919 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:17.919 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:17.919 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:17.919 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:17.919 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:17.919 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:17.919 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:17.919 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:17.919 02:14:07 
setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:17.919 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:17.919 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:17.919 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:17.919 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:17.919 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:17.919 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:17.919 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:17.919 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:17.920 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:17.920 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:17.920 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:17.920 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:17.920 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:17.920 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:08:17.920 02:14:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:19.300 02:14:09 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:19.300 02:14:09 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:08:19.300 02:14:09 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:08:19.300 02:14:09 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:08:19.300 02:14:09 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:08:19.300 02:14:09 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:08:19.300 02:14:09 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:08:19.560 02:14:09 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:19.560 02:14:09 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:08:19.560 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:08:19.560 02:14:09 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:08:19.560 02:14:09 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:08:19.560 00:08:19.560 real 0m15.290s 00:08:19.560 user 0m4.063s 00:08:19.560 sys 0m8.267s 00:08:19.560 02:14:09 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.560 02:14:09 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:08:19.560 ************************************ 00:08:19.560 END TEST dm_mount 00:08:19.560 ************************************ 00:08:19.560 02:14:09 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:08:19.560 02:14:09 setup.sh.devices -- setup/devices.sh@1 -- # 
cleanup 00:08:19.560 02:14:09 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:08:19.560 02:14:09 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:08:19.560 02:14:09 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:19.560 02:14:09 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:08:19.560 02:14:09 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:19.560 02:14:09 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:19.821 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:08:19.821 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:08:19.821 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:19.821 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:19.821 02:14:10 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:08:19.821 02:14:10 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:08:19.821 02:14:10 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:08:19.821 02:14:10 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:19.821 02:14:10 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:08:19.821 02:14:10 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:08:19.821 02:14:10 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:08:19.821 00:08:19.821 real 0m42.830s 00:08:19.821 user 0m12.265s 00:08:19.821 sys 0m25.359s 00:08:19.821 02:14:10 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.821 02:14:10 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:08:19.821 ************************************ 00:08:19.821 END TEST devices 00:08:19.821 ************************************ 00:08:19.821 02:14:10 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:08:19.821 00:08:19.821 real 2m35.961s 00:08:19.821 user 0m46.686s 00:08:19.821 sys 1m33.365s 00:08:19.821 02:14:10 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.821 02:14:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:08:19.821 ************************************ 00:08:19.821 END TEST setup.sh 00:08:19.821 ************************************ 00:08:19.821 02:14:10 -- common/autotest_common.sh@1142 -- # return 0 00:08:19.821 02:14:10 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status
00:08:24.023 Hugepages
00:08:24.023 node hugesize free / total
00:08:24.023 node0 1048576kB 0 / 0
00:08:24.023 node0 2048kB 1024 / 1024
00:08:24.023 node1 1048576kB 0 / 0
00:08:24.023 node1 2048kB 1024 / 1024
00:08:24.023
00:08:24.023 Type BDF Vendor Device NUMA Driver Device Block devices
00:08:24.023 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:08:24.023 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:08:24.023 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:08:24.023 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:08:24.023 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:08:24.023 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:08:24.023 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:08:24.023 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:08:24.023 NVMe 0000:1a:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:08:24.023 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:08:24.023 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:08:24.023 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:08:24.023 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:08:24.023 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:08:24.023 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:08:24.023 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:08:24.023 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:08:24.283 02:14:14 -- spdk/autotest.sh@130 -- # uname -s 00:08:24.283 02:14:14 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:08:24.283 02:14:14 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:08:24.283 02:14:14 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:08:28.479 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:28.479 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:28.479 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:28.479 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:28.479 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:28.479 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:28.479 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:28.479 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:28.479 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:28.479 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:28.479 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:28.479 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:28.479 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:28.479 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:28.479 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:28.479 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:31.771 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:08:33.679 02:14:23 -- common/autotest_common.sh@1532 -- # sleep 1 00:08:34.615 02:14:24 -- common/autotest_common.sh@1533 -- # bdfs=() 00:08:34.615 02:14:24 -- common/autotest_common.sh@1533 -- # local bdfs 00:08:34.615 02:14:24 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:08:34.615 02:14:24 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:08:34.615 02:14:24 -- common/autotest_common.sh@1513 -- # bdfs=() 00:08:34.615 02:14:24 -- common/autotest_common.sh@1513 -- # local bdfs 00:08:34.615 02:14:24 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:34.615 02:14:24 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:34.615 02:14:24 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:08:34.615 02:14:24 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:08:34.615 02:14:24 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:1a:00.0 00:08:34.615 02:14:24 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:08:38.804 Waiting for block devices as requested 00:08:38.804 0000:1a:00.0 (8086 0a54): vfio-pci -> nvme 00:08:38.804 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:38.804 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:39.063 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:39.063 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:39.322 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:39.323 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:39.323 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:39.582 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:39.582 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:08:39.582 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:08:39.842 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:08:39.842 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:08:39.842 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:08:40.101 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:08:40.101 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:08:40.101 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:08:42.640 02:14:32 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}"
00:08:42.640 02:14:32 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:1a:00.0
00:08:42.640 02:14:32 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0
00:08:42.640 02:14:32 -- common/autotest_common.sh@1502 -- # grep 0000:1a:00.0/nvme/nvme
00:08:42.640 02:14:32 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0
00:08:42.640 02:14:32 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 ]]
00:08:42.640 02:14:32 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0
00:08:42.640 02:14:32 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0
00:08:42.640 02:14:32 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0
00:08:42.640 02:14:32 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]]
00:08:42.640 02:14:32 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0
00:08:42.640 02:14:32 -- common/autotest_common.sh@1545 -- # grep oacs
00:08:42.640 02:14:32 -- common/autotest_common.sh@1545 -- # cut -d: -f2
00:08:42.640 02:14:32 -- common/autotest_common.sh@1545 -- # oacs=' 0xe'
00:08:42.640 02:14:32 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8
00:08:42.640 02:14:32 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]]
00:08:42.640 02:14:32 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0
00:08:42.640 02:14:32 -- common/autotest_common.sh@1554 -- # grep unvmcap
00:08:42.640 02:14:32 -- common/autotest_common.sh@1554 -- # cut -d: -f2
00:08:42.640 02:14:32 -- common/autotest_common.sh@1554 -- # unvmcap=' 0'
00:08:42.640 02:14:32 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]]
00:08:42.640 02:14:32 -- common/autotest_common.sh@1557 -- # continue
00:08:42.641 02:14:32 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup
00:08:42.641 02:14:32 -- common/autotest_common.sh@728 -- # xtrace_disable
00:08:42.641 02:14:32 -- common/autotest_common.sh@10 -- # set +x
00:08:42.641 02:14:32 -- spdk/autotest.sh@138 -- # timing_enter afterboot
00:08:42.641 02:14:32 -- common/autotest_common.sh@722 -- # xtrace_disable
00:08:42.641 02:14:32 -- common/autotest_common.sh@10 -- # set +x
00:08:42.641 02:14:32 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:08:46.838 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:08:46.838 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:08:46.838 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:08:46.838 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:08:46.838 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:08:46.838 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:08:46.838 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:08:46.838 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:08:46.838 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:08:46.838 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:08:46.838 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:08:46.838 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:08:46.838 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:08:46.838 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:08:46.838 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:08:46.838 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:08:50.131 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci
00:08:51.517 02:14:41 -- spdk/autotest.sh@140 -- # timing_exit afterboot
00:08:51.517 02:14:41 -- common/autotest_common.sh@728 -- # xtrace_disable
00:08:51.517 02:14:41 -- common/autotest_common.sh@10 -- # set +x
00:08:51.777 02:14:41 -- spdk/autotest.sh@144 -- # opal_revert_cleanup
00:08:51.777 02:14:41 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs
00:08:51.777 02:14:41 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54
00:08:51.777 02:14:41 -- common/autotest_common.sh@1577 -- # bdfs=()
00:08:51.777 02:14:41 -- common/autotest_common.sh@1577 -- # local bdfs
00:08:51.777 02:14:41 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs
00:08:51.777 02:14:41 -- common/autotest_common.sh@1513 -- # bdfs=()
00:08:51.777 02:14:41 -- common/autotest_common.sh@1513 -- # local bdfs
00:08:51.777 02:14:41 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:08:51.777 02:14:41 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh
00:08:51.777 02:14:41 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr'
00:08:51.777 02:14:42 -- common/autotest_common.sh@1515 -- # (( 1 == 0 ))
00:08:51.777 02:14:42 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:1a:00.0
00:08:51.777 02:14:42 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs)
00:08:51.777 02:14:42 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:1a:00.0/device
00:08:51.777 02:14:42 -- common/autotest_common.sh@1580 -- # device=0x0a54
00:08:51.777 02:14:42 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:08:51.777 02:14:42 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf)
00:08:51.777 02:14:42 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:1a:00.0
00:08:51.777 02:14:42 -- common/autotest_common.sh@1592 -- # [[ -z 0000:1a:00.0 ]]
00:08:51.777 02:14:42 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1846299
00:08:51.777 02:14:42 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
00:08:51.777 02:14:42 -- common/autotest_common.sh@1598 -- # waitforlisten 1846299
00:08:51.777 02:14:42 -- common/autotest_common.sh@829 -- # '[' -z 1846299 ']'
00:08:51.777 02:14:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:51.777 02:14:42 -- common/autotest_common.sh@834 -- # local max_retries=100
00:08:51.777 02:14:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:51.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:51.777 02:14:42 -- common/autotest_common.sh@838 -- # xtrace_disable
00:08:51.777 02:14:42 -- common/autotest_common.sh@10 -- # set +x
00:08:51.777 [2024-07-11 02:14:42.140225] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
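The bdf discovery and controller probing traced above can be reproduced outside the harness. A minimal Bash sketch, assuming an SPDK checkout at the workspace path from this log plus nvme-cli and jq installed; the variable names here (ROOTDIR and friends) are illustrative, not part of autotest_common.sh:

#!/usr/bin/env bash
# Sketch only: mirrors the get_nvme_bdfs / get_nvme_ctrlr_from_bdf / OACS
# checks traced above. Adjust ROOTDIR for your checkout.
ROOTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk

# Collect NVMe PCI addresses the way autotest_common.sh does:
# gen_nvme.sh emits a bdev config; jq pulls each controller's traddr.
mapfile -t bdfs < <("$ROOTDIR/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }

for bdf in "${bdfs[@]}"; do
    # Resolve the kernel controller node behind this BDF (e.g. /dev/nvme0),
    # matching the /sys/devices/.../<bdf>/nvme/nvme0 path seen in the trace.
    name=$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")
    ctrlr=/dev/$name

    # OACS bit 3 (mask 0x8) advertises namespace management support,
    # which is what the oacs_ns_manage=8 step above is testing for.
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    echo "$bdf -> $ctrlr oacs=$oacs ns-mgmt=$(( oacs & 0x8 ? 1 : 0 ))"
done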
00:08:51.777 [2024-07-11 02:14:42.140291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1846299 ]
00:08:52.037 [2024-07-11 02:14:42.268412] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:52.037 [2024-07-11 02:14:42.320402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:52.297 02:14:42 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:52.297 02:14:42 -- common/autotest_common.sh@862 -- # return 0
00:08:52.297 02:14:42 -- common/autotest_common.sh@1600 -- # bdf_id=0
00:08:52.297 02:14:42 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}"
00:08:52.297 02:14:42 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:1a:00.0
00:08:55.591 nvme0n1
00:08:55.591 02:14:45 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:08:56.214 [2024-07-11 02:14:46.451635] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:08:56.214 request:
00:08:56.214 {
00:08:56.214 "nvme_ctrlr_name": "nvme0",
00:08:56.214 "password": "test",
00:08:56.214 "method": "bdev_nvme_opal_revert",
00:08:56.214 "req_id": 1
00:08:56.214 }
00:08:56.214 Got JSON-RPC error response
00:08:56.214 response:
00:08:56.214 {
00:08:56.214 "code": -32602,
00:08:56.214 "message": "Invalid parameters"
00:08:56.214 }
00:08:56.214 02:14:46 -- common/autotest_common.sh@1604 -- # true
00:08:56.214 02:14:46 -- common/autotest_common.sh@1605 -- # (( ++bdf_id ))
00:08:56.214 02:14:46 -- common/autotest_common.sh@1608 -- # killprocess 1846299
00:08:56.214 02:14:46 -- common/autotest_common.sh@948 -- # '[' -z 1846299 ']'
00:08:56.214 02:14:46 -- common/autotest_common.sh@952 -- # kill -0 1846299
00:08:56.214 02:14:46 -- common/autotest_common.sh@953 -- # uname
00:08:56.214 02:14:46 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:08:56.214 02:14:46 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1846299
00:08:56.214 02:14:46 -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:08:56.214 02:14:46 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:08:56.214 02:14:46 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1846299'
00:08:56.214 killing process with pid 1846299
00:08:56.214 02:14:46 -- common/autotest_common.sh@967 -- # kill 1846299
00:08:56.214 02:14:46 -- common/autotest_common.sh@972 -- # wait 1846299
00:08:56.214 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152
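The qat_setup.sh step below reports each c6xx physical function being set to 16 VFs. That is a standard SR-IOV enable through the sriov_numvfs sysfs attribute; the following Bash sketch shows the operation itself, with the BDF taken from the status listing below, and is not a claim about qat_setup.sh internals:

#!/usr/bin/env bash
# Sketch only: enable SR-IOV VFs on a QAT PF, matching the
# "set to 16 VFs" lines below. BDF is an assumption; take it from
# the "Checking status of all devices" output. Requires root.
BDF=0000:3d:00.0
VFS=16

dev=/sys/bus/pci/devices/$BDF
[[ -e $dev/sriov_numvfs ]] || { echo "$BDF has no SR-IOV capability" >&2; exit 1; }

# The attribute must pass through 0 before a new nonzero count is accepted.
echo 0 > "$dev/sriov_numvfs"
echo "$VFS" > "$dev/sriov_numvfs"
echo "$BDF set to $(cat "$dev/sriov_numvfs") VFs"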
cleared instead of 2097152 00:08:56.475 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:08:56.475 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:08:56.475 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:08:56.475 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:08:56.475 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:08:56.475 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:08:56.475 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:08:56.475 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:09:00.669 02:14:50 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:09:00.669 02:14:50 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:09:00.669 02:14:50 -- spdk/autotest.sh@155 -- # [[ 1 -eq 1 ]] 00:09:00.669 02:14:50 -- spdk/autotest.sh@156 -- # [[ 0 -eq 1 ]] 00:09:00.669 02:14:50 -- spdk/autotest.sh@159 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/qat_setup.sh 00:09:00.929 Restarting all devices. 00:09:05.125 lstat() error: No such file or directory 00:09:05.125 QAT Error: No GENERAL section found 00:09:05.125 Failed to configure qat_dev0 00:09:05.125 lstat() error: No such file or directory 00:09:05.125 QAT Error: No GENERAL section found 00:09:05.125 Failed to configure qat_dev1 00:09:05.125 lstat() error: No such file or directory 00:09:05.125 QAT Error: No GENERAL section found 00:09:05.125 Failed to configure qat_dev2 00:09:05.125 enable sriov 00:09:05.125 Checking status of all devices. 00:09:05.125 There is 3 QAT acceleration device(s) in the system: 00:09:05.125 qat_dev0 - type: c6xx, inst_id: 0, node_id: 0, bsf: 0000:3d:00.0, #accel: 5 #engines: 10 state: down 00:09:05.125 qat_dev1 - type: c6xx, inst_id: 1, node_id: 0, bsf: 0000:3f:00.0, #accel: 5 #engines: 10 state: down 00:09:05.125 qat_dev2 - type: c6xx, inst_id: 2, node_id: 1, bsf: 0000:da:00.0, #accel: 5 #engines: 10 state: down 00:09:05.125 0000:3d:00.0 set to 16 VFs 00:09:06.073 0000:3f:00.0 set to 16 VFs 00:09:06.639 0000:da:00.0 set to 16 VFs 00:09:08.543 Properly configured the qat device with driver uio_pci_generic. 00:09:08.543 02:14:58 -- spdk/autotest.sh@162 -- # timing_enter lib 00:09:08.543 02:14:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:08.543 02:14:58 -- common/autotest_common.sh@10 -- # set +x 00:09:08.543 02:14:58 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:09:08.543 02:14:58 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh 00:09:08.543 02:14:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:08.543 02:14:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.543 02:14:58 -- common/autotest_common.sh@10 -- # set +x 00:09:08.543 ************************************ 00:09:08.543 START TEST env 00:09:08.543 ************************************ 00:09:08.543 02:14:58 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh 00:09:08.543 * Looking for test storage... 
00:09:08.543 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env
00:09:08.543 02:14:58 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut
00:09:08.543 02:14:58 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:09:08.543 02:14:58 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:08.543 02:14:58 env -- common/autotest_common.sh@10 -- # set +x
00:09:08.543 ************************************
00:09:08.543 START TEST env_memory
00:09:08.543 ************************************
00:09:08.543 02:14:58 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut
00:09:08.543
00:09:08.543
00:09:08.543 CUnit - A unit testing framework for C - Version 2.1-3
00:09:08.543 http://cunit.sourceforge.net/
00:09:08.543
00:09:08.543
00:09:08.543 Suite: memory
00:09:08.543 Test: alloc and free memory map ...[2024-07-11 02:14:58.729410] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:09:08.543 passed
00:09:08.543 Test: mem map translation ...[2024-07-11 02:14:58.748641] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:09:08.543 [2024-07-11 02:14:58.748659] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:09:08.543 [2024-07-11 02:14:58.748695] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:09:08.543 [2024-07-11 02:14:58.748703] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:09:08.543 passed
00:09:08.543 Test: mem map registration ...[2024-07-11 02:14:58.784673] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234
00:09:08.543 [2024-07-11 02:14:58.784705] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152
00:09:08.543 passed
00:09:08.543 Test: mem map adjacent registrations ...passed
00:09:08.543
00:09:08.543 Run Summary: Type Total Ran Passed Failed Inactive
00:09:08.543 suites 1 1 n/a 0 0
00:09:08.543 tests 4 4 4 0 0
00:09:08.543 asserts 152 152 152 0 n/a
00:09:08.543
00:09:08.543 Elapsed time = 0.133 seconds
00:09:08.543
00:09:08.543 real 0m0.148s
00:09:08.543 user 0m0.137s
00:09:08.543 sys 0m0.011s
00:09:08.543 02:14:58 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:08.543 02:14:58 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:09:08.543 ************************************
00:09:08.543 END TEST env_memory
00:09:08.543 ************************************
00:09:08.543 02:14:58 env -- common/autotest_common.sh@1142 -- # return 0
00:09:08.543 02:14:58 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys
00:09:08.543 02:14:58 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:09:08.543 02:14:58 env --
common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.543 02:14:58 env -- common/autotest_common.sh@10 -- # set +x 00:09:08.543 ************************************ 00:09:08.543 START TEST env_vtophys 00:09:08.543 ************************************ 00:09:08.543 02:14:58 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:09:08.543 EAL: lib.eal log level changed from notice to debug 00:09:08.543 EAL: Detected lcore 0 as core 0 on socket 0 00:09:08.543 EAL: Detected lcore 1 as core 1 on socket 0 00:09:08.543 EAL: Detected lcore 2 as core 2 on socket 0 00:09:08.543 EAL: Detected lcore 3 as core 3 on socket 0 00:09:08.543 EAL: Detected lcore 4 as core 4 on socket 0 00:09:08.543 EAL: Detected lcore 5 as core 8 on socket 0 00:09:08.543 EAL: Detected lcore 6 as core 9 on socket 0 00:09:08.543 EAL: Detected lcore 7 as core 10 on socket 0 00:09:08.543 EAL: Detected lcore 8 as core 11 on socket 0 00:09:08.543 EAL: Detected lcore 9 as core 16 on socket 0 00:09:08.543 EAL: Detected lcore 10 as core 17 on socket 0 00:09:08.543 EAL: Detected lcore 11 as core 18 on socket 0 00:09:08.543 EAL: Detected lcore 12 as core 19 on socket 0 00:09:08.543 EAL: Detected lcore 13 as core 20 on socket 0 00:09:08.543 EAL: Detected lcore 14 as core 24 on socket 0 00:09:08.543 EAL: Detected lcore 15 as core 25 on socket 0 00:09:08.543 EAL: Detected lcore 16 as core 26 on socket 0 00:09:08.543 EAL: Detected lcore 17 as core 27 on socket 0 00:09:08.543 EAL: Detected lcore 18 as core 0 on socket 1 00:09:08.543 EAL: Detected lcore 19 as core 1 on socket 1 00:09:08.543 EAL: Detected lcore 20 as core 2 on socket 1 00:09:08.543 EAL: Detected lcore 21 as core 3 on socket 1 00:09:08.543 EAL: Detected lcore 22 as core 4 on socket 1 00:09:08.543 EAL: Detected lcore 23 as core 8 on socket 1 00:09:08.543 EAL: Detected lcore 24 as core 9 on socket 1 00:09:08.543 EAL: Detected lcore 25 as core 10 on socket 1 00:09:08.543 EAL: Detected lcore 26 as core 11 on socket 1 00:09:08.543 EAL: Detected lcore 27 as core 16 on socket 1 00:09:08.543 EAL: Detected lcore 28 as core 17 on socket 1 00:09:08.543 EAL: Detected lcore 29 as core 18 on socket 1 00:09:08.543 EAL: Detected lcore 30 as core 19 on socket 1 00:09:08.543 EAL: Detected lcore 31 as core 20 on socket 1 00:09:08.543 EAL: Detected lcore 32 as core 24 on socket 1 00:09:08.543 EAL: Detected lcore 33 as core 25 on socket 1 00:09:08.543 EAL: Detected lcore 34 as core 26 on socket 1 00:09:08.543 EAL: Detected lcore 35 as core 27 on socket 1 00:09:08.543 EAL: Detected lcore 36 as core 0 on socket 0 00:09:08.543 EAL: Detected lcore 37 as core 1 on socket 0 00:09:08.543 EAL: Detected lcore 38 as core 2 on socket 0 00:09:08.543 EAL: Detected lcore 39 as core 3 on socket 0 00:09:08.543 EAL: Detected lcore 40 as core 4 on socket 0 00:09:08.543 EAL: Detected lcore 41 as core 8 on socket 0 00:09:08.543 EAL: Detected lcore 42 as core 9 on socket 0 00:09:08.543 EAL: Detected lcore 43 as core 10 on socket 0 00:09:08.543 EAL: Detected lcore 44 as core 11 on socket 0 00:09:08.543 EAL: Detected lcore 45 as core 16 on socket 0 00:09:08.543 EAL: Detected lcore 46 as core 17 on socket 0 00:09:08.543 EAL: Detected lcore 47 as core 18 on socket 0 00:09:08.543 EAL: Detected lcore 48 as core 19 on socket 0 00:09:08.543 EAL: Detected lcore 49 as core 20 on socket 0 00:09:08.543 EAL: Detected lcore 50 as core 24 on socket 0 00:09:08.543 EAL: Detected lcore 51 as core 25 on socket 0 00:09:08.543 EAL: Detected lcore 52 as core 
26 on socket 0 00:09:08.543 EAL: Detected lcore 53 as core 27 on socket 0 00:09:08.543 EAL: Detected lcore 54 as core 0 on socket 1 00:09:08.543 EAL: Detected lcore 55 as core 1 on socket 1 00:09:08.543 EAL: Detected lcore 56 as core 2 on socket 1 00:09:08.543 EAL: Detected lcore 57 as core 3 on socket 1 00:09:08.543 EAL: Detected lcore 58 as core 4 on socket 1 00:09:08.543 EAL: Detected lcore 59 as core 8 on socket 1 00:09:08.543 EAL: Detected lcore 60 as core 9 on socket 1 00:09:08.543 EAL: Detected lcore 61 as core 10 on socket 1 00:09:08.543 EAL: Detected lcore 62 as core 11 on socket 1 00:09:08.543 EAL: Detected lcore 63 as core 16 on socket 1 00:09:08.543 EAL: Detected lcore 64 as core 17 on socket 1 00:09:08.543 EAL: Detected lcore 65 as core 18 on socket 1 00:09:08.543 EAL: Detected lcore 66 as core 19 on socket 1 00:09:08.543 EAL: Detected lcore 67 as core 20 on socket 1 00:09:08.543 EAL: Detected lcore 68 as core 24 on socket 1 00:09:08.543 EAL: Detected lcore 69 as core 25 on socket 1 00:09:08.543 EAL: Detected lcore 70 as core 26 on socket 1 00:09:08.543 EAL: Detected lcore 71 as core 27 on socket 1 00:09:08.543 EAL: Maximum logical cores by configuration: 128 00:09:08.543 EAL: Detected CPU lcores: 72 00:09:08.543 EAL: Detected NUMA nodes: 2 00:09:08.543 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:09:08.543 EAL: Detected shared linkage of DPDK 00:09:08.543 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_auxiliary.so.23.0 00:09:08.543 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:09:08.543 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:09:08.543 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_common_mlx5.so.23.0 00:09:08.543 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_common_qat.so.23.0 00:09:08.543 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:09:08.543 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:09:08.543 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:09:08.543 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:09:08.543 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_crypto_ipsec_mb.so.23.0 00:09:08.543 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_crypto_mlx5.so.23.0 00:09:08.543 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_compress_isal.so.23.0 00:09:08.543 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_compress_mlx5.so.23.0 00:09:08.543 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_auxiliary.so 00:09:08.543 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:09:08.544 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:09:08.544 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_common_mlx5.so 00:09:08.544 EAL: open 
shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_common_qat.so 00:09:08.544 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:09:08.544 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:09:08.544 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_crypto_ipsec_mb.so 00:09:08.544 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_crypto_mlx5.so 00:09:08.544 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_compress_isal.so 00:09:08.544 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_compress_mlx5.so 00:09:08.544 EAL: No shared files mode enabled, IPC will be disabled 00:09:08.805 EAL: No shared files mode enabled, IPC is disabled 00:09:08.805 EAL: PCI driver qat for device 0000:3d:01.0 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3d:01.1 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3d:01.2 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3d:01.3 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3d:01.4 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3d:01.5 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3d:01.6 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3d:01.7 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3d:02.0 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3d:02.1 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3d:02.2 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3d:02.3 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3d:02.4 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3d:02.5 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3d:02.6 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3d:02.7 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3f:01.0 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3f:01.1 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3f:01.2 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3f:01.3 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3f:01.4 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3f:01.5 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3f:01.6 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3f:01.7 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3f:02.0 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3f:02.1 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3f:02.2 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3f:02.3 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3f:02.4 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3f:02.5 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3f:02.6 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:3f:02.7 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:da:01.0 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver 
qat for device 0000:da:01.1 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:da:01.2 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:da:01.3 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:da:01.4 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:da:01.5 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:da:01.6 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:da:01.7 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:da:02.0 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:da:02.1 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:da:02.2 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:da:02.3 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:da:02.4 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:da:02.5 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:da:02.6 wants IOVA as 'PA' 00:09:08.805 EAL: PCI driver qat for device 0000:da:02.7 wants IOVA as 'PA' 00:09:08.805 EAL: Bus pci wants IOVA as 'PA' 00:09:08.805 EAL: Bus auxiliary wants IOVA as 'DC' 00:09:08.805 EAL: Bus vdev wants IOVA as 'DC' 00:09:08.805 EAL: Selected IOVA mode 'PA' 00:09:08.805 EAL: Probing VFIO support... 00:09:08.805 EAL: IOMMU type 1 (Type 1) is supported 00:09:08.805 EAL: IOMMU type 7 (sPAPR) is not supported 00:09:08.805 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:09:08.805 EAL: VFIO support initialized 00:09:08.805 EAL: Ask a virtual area of 0x2e000 bytes 00:09:08.805 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:08.805 EAL: Setting up physically contiguous memory... 00:09:08.805 EAL: Setting maximum number of open files to 524288 00:09:08.805 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:08.805 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:09:08.805 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:08.805 EAL: Ask a virtual area of 0x61000 bytes 00:09:08.805 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:08.805 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:08.805 EAL: Ask a virtual area of 0x400000000 bytes 00:09:08.805 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:08.805 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:08.805 EAL: Ask a virtual area of 0x61000 bytes 00:09:08.805 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:08.805 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:08.805 EAL: Ask a virtual area of 0x400000000 bytes 00:09:08.805 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:08.805 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:08.805 EAL: Ask a virtual area of 0x61000 bytes 00:09:08.805 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:08.805 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:08.805 EAL: Ask a virtual area of 0x400000000 bytes 00:09:08.805 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:08.805 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:08.805 EAL: Ask a virtual area of 0x61000 bytes 00:09:08.805 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:08.805 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:08.805 EAL: Ask a virtual area of 0x400000000 bytes 00:09:08.805 EAL: Virtual area 
found at 0x200c00800000 (size = 0x400000000) 00:09:08.805 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:08.805 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:09:08.805 EAL: Ask a virtual area of 0x61000 bytes 00:09:08.805 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:09:08.805 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:09:08.805 EAL: Ask a virtual area of 0x400000000 bytes 00:09:08.805 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:09:08.805 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:09:08.805 EAL: Ask a virtual area of 0x61000 bytes 00:09:08.805 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:09:08.805 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:09:08.805 EAL: Ask a virtual area of 0x400000000 bytes 00:09:08.805 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:09:08.805 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:09:08.805 EAL: Ask a virtual area of 0x61000 bytes 00:09:08.805 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:09:08.805 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:09:08.805 EAL: Ask a virtual area of 0x400000000 bytes 00:09:08.805 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:09:08.805 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:09:08.805 EAL: Ask a virtual area of 0x61000 bytes 00:09:08.805 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:09:08.805 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:09:08.805 EAL: Ask a virtual area of 0x400000000 bytes 00:09:08.805 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:09:08.805 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:09:08.805 EAL: Hugepages will be freed exactly as allocated. 00:09:08.805 EAL: No shared files mode enabled, IPC is disabled 00:09:08.805 EAL: No shared files mode enabled, IPC is disabled 00:09:08.805 EAL: TSC frequency is ~2300000 KHz 00:09:08.805 EAL: Main lcore 0 is ready (tid=7efd2d05cb00;cpuset=[0]) 00:09:08.805 EAL: Trying to obtain current memory policy. 
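Each memseg list above is backed by the per-node 2048kB hugepage pools that setup.sh status reported earlier (1024 pages on node0 and node1). A small Bash sketch for auditing those pools through sysfs before a run; the hugepage size and node layout are assumptions matching this particular machine:

#!/usr/bin/env bash
# Sketch only: report the per-NUMA-node 2048kB hugepage pools that back
# the EAL memseg lists above (compare "node0 2048kB 1024 / 1024").
for node in /sys/devices/system/node/node[0-9]*; do
    hp=$node/hugepages/hugepages-2048kB
    [[ -d $hp ]] || continue
    total=$(cat "$hp/nr_hugepages")
    free=$(cat "$hp/free_hugepages")
    echo "$(basename "$node") 2048kB $free / $total"
done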
00:09:08.805 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:08.805 EAL: Restoring previous memory policy: 0 00:09:08.805 EAL: request: mp_malloc_sync 00:09:08.805 EAL: No shared files mode enabled, IPC is disabled 00:09:08.805 EAL: Heap on socket 0 was expanded by 2MB 00:09:08.805 EAL: PCI device 0000:3d:01.0 on NUMA socket 0 00:09:08.805 EAL: probe driver: 8086:37c9 qat 00:09:08.805 EAL: PCI memory mapped at 0x202001000000 00:09:08.805 EAL: PCI memory mapped at 0x202001001000 00:09:08.805 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:09:08.805 EAL: PCI device 0000:3d:01.1 on NUMA socket 0 00:09:08.805 EAL: probe driver: 8086:37c9 qat 00:09:08.805 EAL: PCI memory mapped at 0x202001002000 00:09:08.805 EAL: PCI memory mapped at 0x202001003000 00:09:08.805 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:09:08.805 EAL: PCI device 0000:3d:01.2 on NUMA socket 0 00:09:08.805 EAL: probe driver: 8086:37c9 qat 00:09:08.805 EAL: PCI memory mapped at 0x202001004000 00:09:08.805 EAL: PCI memory mapped at 0x202001005000 00:09:08.805 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:09:08.805 EAL: PCI device 0000:3d:01.3 on NUMA socket 0 00:09:08.805 EAL: probe driver: 8086:37c9 qat 00:09:08.805 EAL: PCI memory mapped at 0x202001006000 00:09:08.805 EAL: PCI memory mapped at 0x202001007000 00:09:08.805 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:09:08.805 EAL: PCI device 0000:3d:01.4 on NUMA socket 0 00:09:08.805 EAL: probe driver: 8086:37c9 qat 00:09:08.805 EAL: PCI memory mapped at 0x202001008000 00:09:08.805 EAL: PCI memory mapped at 0x202001009000 00:09:08.805 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:09:08.805 EAL: PCI device 0000:3d:01.5 on NUMA socket 0 00:09:08.805 EAL: probe driver: 8086:37c9 qat 00:09:08.805 EAL: PCI memory mapped at 0x20200100a000 00:09:08.805 EAL: PCI memory mapped at 0x20200100b000 00:09:08.805 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:09:08.805 EAL: PCI device 0000:3d:01.6 on NUMA socket 0 00:09:08.805 EAL: probe driver: 8086:37c9 qat 00:09:08.805 EAL: PCI memory mapped at 0x20200100c000 00:09:08.805 EAL: PCI memory mapped at 0x20200100d000 00:09:08.805 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:09:08.805 EAL: PCI device 0000:3d:01.7 on NUMA socket 0 00:09:08.805 EAL: probe driver: 8086:37c9 qat 00:09:08.805 EAL: PCI memory mapped at 0x20200100e000 00:09:08.805 EAL: PCI memory mapped at 0x20200100f000 00:09:08.805 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:09:08.805 EAL: PCI device 0000:3d:02.0 on NUMA socket 0 00:09:08.805 EAL: probe driver: 8086:37c9 qat 00:09:08.805 EAL: PCI memory mapped at 0x202001010000 00:09:08.805 EAL: PCI memory mapped at 0x202001011000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:09:08.806 EAL: PCI device 0000:3d:02.1 on NUMA socket 0 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x202001012000 00:09:08.806 EAL: PCI memory mapped at 0x202001013000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:09:08.806 EAL: PCI device 0000:3d:02.2 on NUMA socket 0 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x202001014000 00:09:08.806 EAL: PCI memory mapped at 0x202001015000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 
00:09:08.806 EAL: PCI device 0000:3d:02.3 on NUMA socket 0 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x202001016000 00:09:08.806 EAL: PCI memory mapped at 0x202001017000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:09:08.806 EAL: PCI device 0000:3d:02.4 on NUMA socket 0 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x202001018000 00:09:08.806 EAL: PCI memory mapped at 0x202001019000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:09:08.806 EAL: PCI device 0000:3d:02.5 on NUMA socket 0 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x20200101a000 00:09:08.806 EAL: PCI memory mapped at 0x20200101b000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:09:08.806 EAL: PCI device 0000:3d:02.6 on NUMA socket 0 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x20200101c000 00:09:08.806 EAL: PCI memory mapped at 0x20200101d000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:09:08.806 EAL: PCI device 0000:3d:02.7 on NUMA socket 0 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x20200101e000 00:09:08.806 EAL: PCI memory mapped at 0x20200101f000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:09:08.806 EAL: PCI device 0000:3f:01.0 on NUMA socket 0 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x202001020000 00:09:08.806 EAL: PCI memory mapped at 0x202001021000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:09:08.806 EAL: PCI device 0000:3f:01.1 on NUMA socket 0 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x202001022000 00:09:08.806 EAL: PCI memory mapped at 0x202001023000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:09:08.806 EAL: PCI device 0000:3f:01.2 on NUMA socket 0 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x202001024000 00:09:08.806 EAL: PCI memory mapped at 0x202001025000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:09:08.806 EAL: PCI device 0000:3f:01.3 on NUMA socket 0 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x202001026000 00:09:08.806 EAL: PCI memory mapped at 0x202001027000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:09:08.806 EAL: PCI device 0000:3f:01.4 on NUMA socket 0 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x202001028000 00:09:08.806 EAL: PCI memory mapped at 0x202001029000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:09:08.806 EAL: PCI device 0000:3f:01.5 on NUMA socket 0 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x20200102a000 00:09:08.806 EAL: PCI memory mapped at 0x20200102b000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:09:08.806 EAL: PCI device 0000:3f:01.6 on NUMA socket 0 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x20200102c000 00:09:08.806 EAL: PCI memory mapped at 0x20200102d000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 
(socket 0) 00:09:08.806 EAL: PCI device 0000:3f:01.7 on NUMA socket 0 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x20200102e000 00:09:08.806 EAL: PCI memory mapped at 0x20200102f000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:09:08.806 EAL: PCI device 0000:3f:02.0 on NUMA socket 0 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x202001030000 00:09:08.806 EAL: PCI memory mapped at 0x202001031000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:09:08.806 EAL: PCI device 0000:3f:02.1 on NUMA socket 0 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x202001032000 00:09:08.806 EAL: PCI memory mapped at 0x202001033000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:09:08.806 EAL: PCI device 0000:3f:02.2 on NUMA socket 0 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x202001034000 00:09:08.806 EAL: PCI memory mapped at 0x202001035000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:09:08.806 EAL: PCI device 0000:3f:02.3 on NUMA socket 0 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x202001036000 00:09:08.806 EAL: PCI memory mapped at 0x202001037000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:09:08.806 EAL: PCI device 0000:3f:02.4 on NUMA socket 0 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x202001038000 00:09:08.806 EAL: PCI memory mapped at 0x202001039000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:09:08.806 EAL: PCI device 0000:3f:02.5 on NUMA socket 0 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x20200103a000 00:09:08.806 EAL: PCI memory mapped at 0x20200103b000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:09:08.806 EAL: PCI device 0000:3f:02.6 on NUMA socket 0 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x20200103c000 00:09:08.806 EAL: PCI memory mapped at 0x20200103d000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:09:08.806 EAL: PCI device 0000:3f:02.7 on NUMA socket 0 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x20200103e000 00:09:08.806 EAL: PCI memory mapped at 0x20200103f000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:09:08.806 EAL: PCI device 0000:41:00.0 on NUMA socket 0 00:09:08.806 EAL: probe driver: 8086:37d2 net_i40e 00:09:08.806 EAL: Not managed by a supported kernel driver, skipped 00:09:08.806 EAL: PCI device 0000:41:00.1 on NUMA socket 0 00:09:08.806 EAL: probe driver: 8086:37d2 net_i40e 00:09:08.806 EAL: Not managed by a supported kernel driver, skipped 00:09:08.806 EAL: PCI device 0000:da:01.0 on NUMA socket 1 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x202001040000 00:09:08.806 EAL: PCI memory mapped at 0x202001041000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.0 (socket 1) 00:09:08.806 EAL: Trying to obtain current memory policy. 
00:09:08.806 EAL: Setting policy MPOL_PREFERRED for socket 1 00:09:08.806 EAL: Restoring previous memory policy: 4 00:09:08.806 EAL: request: mp_malloc_sync 00:09:08.806 EAL: No shared files mode enabled, IPC is disabled 00:09:08.806 EAL: Heap on socket 1 was expanded by 2MB 00:09:08.806 EAL: PCI device 0000:da:01.1 on NUMA socket 1 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x202001042000 00:09:08.806 EAL: PCI memory mapped at 0x202001043000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.1 (socket 1) 00:09:08.806 EAL: PCI device 0000:da:01.2 on NUMA socket 1 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x202001044000 00:09:08.806 EAL: PCI memory mapped at 0x202001045000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.2 (socket 1) 00:09:08.806 EAL: PCI device 0000:da:01.3 on NUMA socket 1 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x202001046000 00:09:08.806 EAL: PCI memory mapped at 0x202001047000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.3 (socket 1) 00:09:08.806 EAL: PCI device 0000:da:01.4 on NUMA socket 1 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x202001048000 00:09:08.806 EAL: PCI memory mapped at 0x202001049000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.4 (socket 1) 00:09:08.806 EAL: PCI device 0000:da:01.5 on NUMA socket 1 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x20200104a000 00:09:08.806 EAL: PCI memory mapped at 0x20200104b000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.5 (socket 1) 00:09:08.806 EAL: PCI device 0000:da:01.6 on NUMA socket 1 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x20200104c000 00:09:08.806 EAL: PCI memory mapped at 0x20200104d000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.6 (socket 1) 00:09:08.806 EAL: PCI device 0000:da:01.7 on NUMA socket 1 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x20200104e000 00:09:08.806 EAL: PCI memory mapped at 0x20200104f000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.7 (socket 1) 00:09:08.806 EAL: PCI device 0000:da:02.0 on NUMA socket 1 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x202001050000 00:09:08.806 EAL: PCI memory mapped at 0x202001051000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.0 (socket 1) 00:09:08.806 EAL: PCI device 0000:da:02.1 on NUMA socket 1 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x202001052000 00:09:08.806 EAL: PCI memory mapped at 0x202001053000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.1 (socket 1) 00:09:08.806 EAL: PCI device 0000:da:02.2 on NUMA socket 1 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x202001054000 00:09:08.806 EAL: PCI memory mapped at 0x202001055000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.2 (socket 1) 00:09:08.806 EAL: PCI device 0000:da:02.3 on NUMA socket 1 00:09:08.806 EAL: probe driver: 8086:37c9 qat 00:09:08.806 EAL: PCI memory mapped at 0x202001056000 00:09:08.806 EAL: PCI memory mapped at 0x202001057000 00:09:08.806 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.3 (socket 1) 
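Note: the "Heap on socket 1 was expanded by 2MB" line above shows DPDK's per-NUMA-node heaps at work: the first allocation targeting socket 1 makes the EAL set MPOL_PREFERRED for that node and back the heap with a hugepage from it. A hedged sketch of an allocation that would trigger this (function name and sizes are illustrative, not from the test):

#include <rte_malloc.h>
#include <stdio.h>

/* Illustrative only: the first rte_malloc_socket() against socket 1 forces
 * the EAL to grow that socket's heap by at least one 2 MB hugepage,
 * producing the MPOL_PREFERRED / "Heap on socket 1 was expanded" lines. */
static void touch_socket1_heap(void)
{
    void *buf = rte_malloc_socket("example", 1024, 0, /*socket=*/1);
    if (buf == NULL) {
        fprintf(stderr, "no memory on socket 1\n");
        return;
    }
    rte_free(buf); /* freeing lets the heap shrink again later */
}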
00:09:08.807 EAL: PCI device 0000:da:02.4 on NUMA socket 1 00:09:08.807 EAL: probe driver: 8086:37c9 qat 00:09:08.807 EAL: PCI memory mapped at 0x202001058000 00:09:08.807 EAL: PCI memory mapped at 0x202001059000 00:09:08.807 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.4 (socket 1) 00:09:08.807 EAL: PCI device 0000:da:02.5 on NUMA socket 1 00:09:08.807 EAL: probe driver: 8086:37c9 qat 00:09:08.807 EAL: PCI memory mapped at 0x20200105a000 00:09:08.807 EAL: PCI memory mapped at 0x20200105b000 00:09:08.807 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.5 (socket 1) 00:09:08.807 EAL: PCI device 0000:da:02.6 on NUMA socket 1 00:09:08.807 EAL: probe driver: 8086:37c9 qat 00:09:08.807 EAL: PCI memory mapped at 0x20200105c000 00:09:08.807 EAL: PCI memory mapped at 0x20200105d000 00:09:08.807 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.6 (socket 1) 00:09:08.807 EAL: PCI device 0000:da:02.7 on NUMA socket 1 00:09:08.807 EAL: probe driver: 8086:37c9 qat 00:09:08.807 EAL: PCI memory mapped at 0x20200105e000 00:09:08.807 EAL: PCI memory mapped at 0x20200105f000 00:09:08.807 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.7 (socket 1) 00:09:08.807 EAL: No shared files mode enabled, IPC is disabled 00:09:08.807 EAL: No shared files mode enabled, IPC is disabled 00:09:08.807 EAL: No PCI address specified using 'addr=' in: bus=pci 00:09:08.807 EAL: Mem event callback 'spdk:(nil)' registered 00:09:08.807 00:09:08.807 00:09:08.807 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.807 http://cunit.sourceforge.net/ 00:09:08.807 00:09:08.807 00:09:08.807 Suite: components_suite 00:09:08.807 Test: vtophys_malloc_test ...passed 00:09:08.807 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:08.807 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:08.807 EAL: Restoring previous memory policy: 4 00:09:08.807 EAL: Calling mem event callback 'spdk:(nil)' 00:09:08.807 EAL: request: mp_malloc_sync 00:09:08.807 EAL: No shared files mode enabled, IPC is disabled 00:09:08.807 EAL: Heap on socket 0 was expanded by 4MB 00:09:08.807 EAL: Calling mem event callback 'spdk:(nil)' 00:09:08.807 EAL: request: mp_malloc_sync 00:09:08.807 EAL: No shared files mode enabled, IPC is disabled 00:09:08.807 EAL: Heap on socket 0 was shrunk by 4MB 00:09:08.807 EAL: Trying to obtain current memory policy. 00:09:08.807 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:08.807 EAL: Restoring previous memory policy: 4 00:09:08.807 EAL: Calling mem event callback 'spdk:(nil)' 00:09:08.807 EAL: request: mp_malloc_sync 00:09:08.807 EAL: No shared files mode enabled, IPC is disabled 00:09:08.807 EAL: Heap on socket 0 was expanded by 6MB 00:09:08.807 EAL: Calling mem event callback 'spdk:(nil)' 00:09:08.807 EAL: request: mp_malloc_sync 00:09:08.807 EAL: No shared files mode enabled, IPC is disabled 00:09:08.807 EAL: Heap on socket 0 was shrunk by 6MB 00:09:08.807 EAL: Trying to obtain current memory policy. 
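Note: "Mem event callback 'spdk:(nil)' registered" and the repeated "Calling mem event callback" lines above and below are DPDK's dynamic-memory hook; SPDK registers a callback (the ":(nil)" suffix is its NULL context pointer) so it can update its vtophys map whenever the heap grows or shrinks. A minimal sketch of that registration, with an illustrative callback name and body rather than SPDK's own:

#include <rte_memory.h>
#include <stdio.h>

/* Fired by the EAL on every heap expansion/contraction; SPDK's real
 * callback (registered under the name "spdk") maps or unmaps the range. */
static void
mem_event_cb(enum rte_mem_event type, const void *addr, size_t len, void *arg)
{
    (void)arg;
    printf("mem event: %s addr=%p len=%zu\n",
           type == RTE_MEM_EVENT_ALLOC ? "alloc" : "free", addr, len);
}

/* Call once after rte_eal_init(): */
static int install_cb(void)
{
    return rte_mem_event_callback_register("example", mem_event_cb, NULL);
}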
00:09:08.807 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:08.807 EAL: Restoring previous memory policy: 4 00:09:08.807 EAL: Calling mem event callback 'spdk:(nil)' 00:09:08.807 EAL: request: mp_malloc_sync 00:09:08.807 EAL: No shared files mode enabled, IPC is disabled 00:09:08.807 EAL: Heap on socket 0 was expanded by 10MB 00:09:08.807 EAL: Calling mem event callback 'spdk:(nil)' 00:09:08.807 EAL: request: mp_malloc_sync 00:09:08.807 EAL: No shared files mode enabled, IPC is disabled 00:09:08.807 EAL: Heap on socket 0 was shrunk by 10MB 00:09:08.807 EAL: Trying to obtain current memory policy. 00:09:08.807 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:08.807 EAL: Restoring previous memory policy: 4 00:09:08.807 EAL: Calling mem event callback 'spdk:(nil)' 00:09:08.807 EAL: request: mp_malloc_sync 00:09:08.807 EAL: No shared files mode enabled, IPC is disabled 00:09:08.807 EAL: Heap on socket 0 was expanded by 18MB 00:09:08.807 EAL: Calling mem event callback 'spdk:(nil)' 00:09:08.807 EAL: request: mp_malloc_sync 00:09:08.807 EAL: No shared files mode enabled, IPC is disabled 00:09:08.807 EAL: Heap on socket 0 was shrunk by 18MB 00:09:08.807 EAL: Trying to obtain current memory policy. 00:09:08.807 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:08.807 EAL: Restoring previous memory policy: 4 00:09:08.807 EAL: Calling mem event callback 'spdk:(nil)' 00:09:08.807 EAL: request: mp_malloc_sync 00:09:08.807 EAL: No shared files mode enabled, IPC is disabled 00:09:08.807 EAL: Heap on socket 0 was expanded by 34MB 00:09:08.807 EAL: Calling mem event callback 'spdk:(nil)' 00:09:08.807 EAL: request: mp_malloc_sync 00:09:08.807 EAL: No shared files mode enabled, IPC is disabled 00:09:08.807 EAL: Heap on socket 0 was shrunk by 34MB 00:09:08.807 EAL: Trying to obtain current memory policy. 00:09:08.807 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:08.807 EAL: Restoring previous memory policy: 4 00:09:08.807 EAL: Calling mem event callback 'spdk:(nil)' 00:09:08.807 EAL: request: mp_malloc_sync 00:09:08.807 EAL: No shared files mode enabled, IPC is disabled 00:09:08.807 EAL: Heap on socket 0 was expanded by 66MB 00:09:08.807 EAL: Calling mem event callback 'spdk:(nil)' 00:09:08.807 EAL: request: mp_malloc_sync 00:09:08.807 EAL: No shared files mode enabled, IPC is disabled 00:09:08.807 EAL: Heap on socket 0 was shrunk by 66MB 00:09:08.807 EAL: Trying to obtain current memory policy. 00:09:08.807 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:08.807 EAL: Restoring previous memory policy: 4 00:09:08.807 EAL: Calling mem event callback 'spdk:(nil)' 00:09:08.807 EAL: request: mp_malloc_sync 00:09:08.807 EAL: No shared files mode enabled, IPC is disabled 00:09:08.807 EAL: Heap on socket 0 was expanded by 130MB 00:09:08.807 EAL: Calling mem event callback 'spdk:(nil)' 00:09:09.067 EAL: request: mp_malloc_sync 00:09:09.067 EAL: No shared files mode enabled, IPC is disabled 00:09:09.067 EAL: Heap on socket 0 was shrunk by 130MB 00:09:09.067 EAL: Trying to obtain current memory policy. 
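Note: the expand/shrink pairs above (4 MB, 6 MB, 10 MB, 18 MB, ...) and the larger rungs just below (258 MB, 514 MB, 1026 MB) follow a (2^n + 2) MB progression: each allocation outgrows the current heap, forcing an expansion, and the matching free lets it shrink. A sketch of the allocate/translate/free pattern the output suggests, using SPDK's public env API rather than the test's actual source:

#include "spdk/env.h"
#include <stdio.h>

/* Sketch only (the real test binary is built from SPDK's test/env tree):
 * walk the same (2^n + 2) MB ladder seen in the log, letting each
 * allocation grow the DPDK heap and each free shrink it again. */
static void exercise_heap_ladder(void)
{
    /* 4, 6, 10, 18, 34, 66, 130, 258, 514, 1026 MB */
    for (size_t mb = 4; mb <= 1026; mb = (mb - 2) * 2 + 2) {
        uint64_t size = mb * 1024 * 1024;
        void *buf = spdk_dma_malloc(size, 0x1000, NULL);
        if (buf == NULL) {
            fprintf(stderr, "alloc of %zu MB failed\n", mb);
            return;
        }
        if (spdk_vtophys(buf, &size) == SPDK_VTOPHYS_ERROR) {
            fprintf(stderr, "vtophys failed at %zu MB\n", mb);
        }
        spdk_dma_free(buf);
    }
}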
00:09:09.067 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:09.067 EAL: Restoring previous memory policy: 4 00:09:09.067 EAL: Calling mem event callback 'spdk:(nil)' 00:09:09.067 EAL: request: mp_malloc_sync 00:09:09.067 EAL: No shared files mode enabled, IPC is disabled 00:09:09.067 EAL: Heap on socket 0 was expanded by 258MB 00:09:09.067 EAL: Calling mem event callback 'spdk:(nil)' 00:09:09.067 EAL: request: mp_malloc_sync 00:09:09.067 EAL: No shared files mode enabled, IPC is disabled 00:09:09.067 EAL: Heap on socket 0 was shrunk by 258MB 00:09:09.067 EAL: Trying to obtain current memory policy. 00:09:09.067 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:09.326 EAL: Restoring previous memory policy: 4 00:09:09.326 EAL: Calling mem event callback 'spdk:(nil)' 00:09:09.326 EAL: request: mp_malloc_sync 00:09:09.326 EAL: No shared files mode enabled, IPC is disabled 00:09:09.326 EAL: Heap on socket 0 was expanded by 514MB 00:09:09.326 EAL: Calling mem event callback 'spdk:(nil)' 00:09:09.326 EAL: request: mp_malloc_sync 00:09:09.326 EAL: No shared files mode enabled, IPC is disabled 00:09:09.326 EAL: Heap on socket 0 was shrunk by 514MB 00:09:09.326 EAL: Trying to obtain current memory policy. 00:09:09.326 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:09.585 EAL: Restoring previous memory policy: 4 00:09:09.585 EAL: Calling mem event callback 'spdk:(nil)' 00:09:09.585 EAL: request: mp_malloc_sync 00:09:09.585 EAL: No shared files mode enabled, IPC is disabled 00:09:09.585 EAL: Heap on socket 0 was expanded by 1026MB 00:09:09.845 EAL: Calling mem event callback 'spdk:(nil)' 00:09:09.845 EAL: request: mp_malloc_sync 00:09:09.845 EAL: No shared files mode enabled, IPC is disabled 00:09:09.845 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:09.845 passed 00:09:09.845 00:09:09.845 Run Summary: Type Total Ran Passed Failed Inactive 00:09:09.845 suites 1 1 n/a 0 0 00:09:09.845 tests 2 2 2 0 0 00:09:09.845 asserts 6583 6583 6583 0 n/a 00:09:09.845 00:09:09.845 Elapsed time = 1.150 seconds 00:09:10.104 EAL: No shared files mode enabled, IPC is disabled 00:09:10.104 EAL: No shared files mode enabled, IPC is disabled 00:09:10.104 EAL: No shared files mode enabled, IPC is disabled 00:09:10.104 00:09:10.104 real 0m1.357s 00:09:10.104 user 0m0.741s 00:09:10.105 sys 0m0.585s 00:09:10.105 02:15:00 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:10.105 02:15:00 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:10.105 ************************************ 00:09:10.105 END TEST env_vtophys 00:09:10.105 ************************************ 00:09:10.105 02:15:00 env -- common/autotest_common.sh@1142 -- # return 0 00:09:10.105 02:15:00 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut 00:09:10.105 02:15:00 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:10.105 02:15:00 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.105 02:15:00 env -- common/autotest_common.sh@10 -- # set +x 00:09:10.105 ************************************ 00:09:10.105 START TEST env_pci 00:09:10.105 ************************************ 00:09:10.105 02:15:00 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut 00:09:10.105 00:09:10.105 00:09:10.105 CUnit - A unit testing framework for C - Version 2.1-3 00:09:10.105 http://cunit.sourceforge.net/ 00:09:10.105 00:09:10.105 00:09:10.105 Suite: pci 00:09:10.105 Test: 
pci_hook ...[2024-07-11 02:15:00.385915] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1848947 has claimed it 00:09:10.105 EAL: Cannot find device (10000:00:01.0) 00:09:10.105 EAL: Failed to attach device on primary process 00:09:10.105 passed 00:09:10.105 00:09:10.105 Run Summary: Type Total Ran Passed Failed Inactive 00:09:10.105 suites 1 1 n/a 0 0 00:09:10.105 tests 1 1 1 0 0 00:09:10.105 asserts 25 25 25 0 n/a 00:09:10.105 00:09:10.105 Elapsed time = 0.037 seconds 00:09:10.105 00:09:10.105 real 0m0.064s 00:09:10.105 user 0m0.016s 00:09:10.105 sys 0m0.048s 00:09:10.105 02:15:00 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:10.105 02:15:00 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:10.105 ************************************ 00:09:10.105 END TEST env_pci 00:09:10.105 ************************************ 00:09:10.105 02:15:00 env -- common/autotest_common.sh@1142 -- # return 0 00:09:10.105 02:15:00 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:10.105 02:15:00 env -- env/env.sh@15 -- # uname 00:09:10.105 02:15:00 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:10.105 02:15:00 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:10.105 02:15:00 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:10.105 02:15:00 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:10.105 02:15:00 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.105 02:15:00 env -- common/autotest_common.sh@10 -- # set +x 00:09:10.105 ************************************ 00:09:10.105 START TEST env_dpdk_post_init 00:09:10.105 ************************************ 00:09:10.105 02:15:00 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:10.365 EAL: Detected CPU lcores: 72 00:09:10.365 EAL: Detected NUMA nodes: 2 00:09:10.365 EAL: Detected shared linkage of DPDK 00:09:10.365 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:10.365 EAL: Selected IOVA mode 'PA' 00:09:10.365 EAL: VFIO support initialized 00:09:10.365 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:01.0_qat_sym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:01.0_qat_asym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.365 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:01.1_qat_sym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:01.1_qat_asym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.365 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:01.2_qat_sym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.2_qat_sym,socket id: 0, 
max queue pairs: 0 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:01.2_qat_asym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.365 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:01.3_qat_sym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:01.3_qat_asym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.365 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:01.4_qat_sym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:01.4_qat_asym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.365 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:01.5_qat_sym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:01.5_qat_asym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.365 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:01.6_qat_sym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:01.6_qat_asym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.365 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:01.7_qat_sym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:01.7_qat_asym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.365 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:02.0_qat_sym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:02.0_qat_asym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.365 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:02.1_qat_sym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:02.1_qat_asym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.365 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:02.2_qat_sym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.365 CRYPTODEV: 
Creating cryptodev 0000:3d:02.2_qat_asym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.365 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:02.3_qat_sym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:02.3_qat_asym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.365 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:02.4_qat_sym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:02.4_qat_asym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.365 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:02.5_qat_sym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:02.5_qat_asym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.365 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:02.6_qat_sym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:02.6_qat_asym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.365 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:02.7_qat_sym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3d:02.7_qat_asym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.365 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3f:01.0_qat_sym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3f:01.0_qat_asym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.365 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3f:01.1_qat_sym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.365 CRYPTODEV: Creating cryptodev 0000:3f:01.1_qat_asym 00:09:10.365 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.365 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:3f:01.2_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:3f:01.2_qat_asym 
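Note: each QAT VF probed in this run registers two cryptodevs, a "<bdf>_qat_sym" symmetric-crypto device and a "<bdf>_qat_asym" asymmetric one, each logged with its socket id and initial queue-pair count. A small sketch (illustrative helper, standard rte_cryptodev calls) that walks whatever the probe created:

#include <rte_cryptodev.h>
#include <stdio.h>

/* Illustrative helper: list every cryptodev the qat PMD registered. */
static void list_cryptodevs(void)
{
    uint8_t n = rte_cryptodev_count();
    for (uint8_t id = 0; id < n; id++) {
        struct rte_cryptodev_info info;
        rte_cryptodev_info_get(id, &info);
        printf("cryptodev %u: %s (driver %s, socket %d)\n",
               id, rte_cryptodev_name_get(id),
               info.driver_name, rte_cryptodev_socket_id(id));
    }
}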
00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:3f:01.3_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:3f:01.3_qat_asym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:3f:01.4_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:3f:01.4_qat_asym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:3f:01.5_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:3f:01.5_qat_asym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:3f:01.6_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:3f:01.6_qat_asym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:3f:01.7_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:3f:01.7_qat_asym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:3f:02.0_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:3f:02.0_qat_asym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:3f:02.1_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:3f:02.1_qat_asym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:3f:02.2_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:3f:02.2_qat_asym 00:09:10.366 CRYPTODEV: Initialisation parameters 
- name: 0000:3f:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:3f:02.3_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:3f:02.3_qat_asym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:3f:02.4_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:3f:02.4_qat_asym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:3f:02.5_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:3f:02.5_qat_asym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:3f:02.6_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:3f:02.6_qat_asym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:3f:02.7_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:3f:02.7_qat_asym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.0 (socket 1) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:01.0_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:01.0_qat_sym,socket id: 1, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:01.0_qat_asym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:01.0_qat_asym,socket id: 1, max queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.1 (socket 1) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:01.1_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:01.1_qat_sym,socket id: 1, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:01.1_qat_asym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:01.1_qat_asym,socket id: 1, max queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.2 (socket 1) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:01.2_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:01.2_qat_sym,socket id: 1, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:01.2_qat_asym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:01.2_qat_asym,socket id: 1, max 
queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.3 (socket 1) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:01.3_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:01.3_qat_sym,socket id: 1, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:01.3_qat_asym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:01.3_qat_asym,socket id: 1, max queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.4 (socket 1) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:01.4_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:01.4_qat_sym,socket id: 1, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:01.4_qat_asym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:01.4_qat_asym,socket id: 1, max queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.5 (socket 1) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:01.5_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:01.5_qat_sym,socket id: 1, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:01.5_qat_asym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:01.5_qat_asym,socket id: 1, max queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.6 (socket 1) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:01.6_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:01.6_qat_sym,socket id: 1, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:01.6_qat_asym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:01.6_qat_asym,socket id: 1, max queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.7 (socket 1) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:01.7_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:01.7_qat_sym,socket id: 1, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:01.7_qat_asym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:01.7_qat_asym,socket id: 1, max queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.0 (socket 1) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:02.0_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:02.0_qat_sym,socket id: 1, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:02.0_qat_asym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:02.0_qat_asym,socket id: 1, max queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.1 (socket 1) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:02.1_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:02.1_qat_sym,socket id: 1, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:02.1_qat_asym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:02.1_qat_asym,socket id: 1, max queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.2 (socket 1) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:02.2_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:02.2_qat_sym,socket id: 1, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:02.2_qat_asym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:02.2_qat_asym,socket id: 1, max queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: 
qat (8086:37c9) device: 0000:da:02.3 (socket 1) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:02.3_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:02.3_qat_sym,socket id: 1, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:02.3_qat_asym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:02.3_qat_asym,socket id: 1, max queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.4 (socket 1) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:02.4_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:02.4_qat_sym,socket id: 1, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:02.4_qat_asym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:02.4_qat_asym,socket id: 1, max queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.5 (socket 1) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:02.5_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:02.5_qat_sym,socket id: 1, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:02.5_qat_asym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:02.5_qat_asym,socket id: 1, max queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.6 (socket 1) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:02.6_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:02.6_qat_sym,socket id: 1, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:02.6_qat_asym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:02.6_qat_asym,socket id: 1, max queue pairs: 0 00:09:10.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.7 (socket 1) 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:02.7_qat_sym 00:09:10.366 CRYPTODEV: Initialisation parameters - name: 0000:da:02.7_qat_sym,socket id: 1, max queue pairs: 0 00:09:10.366 CRYPTODEV: Creating cryptodev 0000:da:02.7_qat_asym 00:09:10.367 CRYPTODEV: Initialisation parameters - name: 0000:da:02.7_qat_asym,socket id: 1, max queue pairs: 0 00:09:10.367 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:10.367 EAL: Using IOMMU type 1 (Type 1) 00:09:10.367 EAL: Ignore mapping IO port bar(1) 00:09:10.367 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:09:10.367 EAL: Ignore mapping IO port bar(1) 00:09:10.367 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:09:10.367 EAL: Ignore mapping IO port bar(1) 00:09:10.367 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:09:10.367 EAL: Ignore mapping IO port bar(1) 00:09:10.367 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:09:10.626 EAL: Ignore mapping IO port bar(1) 00:09:10.626 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:09:10.626 EAL: Ignore mapping IO port bar(1) 00:09:10.626 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:09:10.626 EAL: Ignore mapping IO port bar(1) 00:09:10.626 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:09:10.626 EAL: Ignore mapping IO port bar(1) 00:09:10.626 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:09:11.196 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:1a:00.0 (socket 0) 00:09:11.196 EAL: Ignore mapping IO port bar(1) 00:09:11.196 EAL: Probe PCI driver: 
spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:09:11.196 EAL: Ignore mapping IO port bar(1) 00:09:11.196 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:09:11.456 EAL: Ignore mapping IO port bar(1) 00:09:11.456 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:09:11.456 EAL: Ignore mapping IO port bar(1) 00:09:11.456 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:09:11.456 EAL: Ignore mapping IO port bar(1) 00:09:11.456 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:09:11.456 EAL: Ignore mapping IO port bar(1) 00:09:11.456 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:09:11.456 EAL: Ignore mapping IO port bar(1) 00:09:11.456 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:09:11.456 EAL: Ignore mapping IO port bar(1) 00:09:11.456 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:09:16.736 EAL: Releasing PCI mapped resource for 0000:1a:00.0 00:09:16.736 EAL: Calling pci_unmap_resource for 0000:1a:00.0 at 0x202001080000 00:09:16.996 Starting DPDK initialization... 00:09:16.996 Starting SPDK post initialization... 00:09:16.996 SPDK NVMe probe 00:09:16.996 Attaching to 0000:1a:00.0 00:09:16.996 Attached to 0000:1a:00.0 00:09:16.996 Cleaning up... 00:09:16.996 00:09:16.996 real 0m6.770s 00:09:16.996 user 0m5.094s 00:09:16.996 sys 0m0.739s 00:09:16.996 02:15:07 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:16.996 02:15:07 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:16.996 ************************************ 00:09:16.996 END TEST env_dpdk_post_init 00:09:16.996 ************************************ 00:09:16.996 02:15:07 env -- common/autotest_common.sh@1142 -- # return 0 00:09:16.996 02:15:07 env -- env/env.sh@26 -- # uname 00:09:16.996 02:15:07 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:16.996 02:15:07 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:09:16.996 02:15:07 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:16.996 02:15:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:16.996 02:15:07 env -- common/autotest_common.sh@10 -- # set +x 00:09:16.996 ************************************ 00:09:16.996 START TEST env_mem_callbacks 00:09:16.996 ************************************ 00:09:16.996 02:15:07 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:09:16.996 EAL: Detected CPU lcores: 72 00:09:16.996 EAL: Detected NUMA nodes: 2 00:09:16.996 EAL: Detected shared linkage of DPDK 00:09:16.996 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:17.256 EAL: Selected IOVA mode 'PA' 00:09:17.256 EAL: VFIO support initialized 00:09:17.256 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:09:17.256 CRYPTODEV: Creating cryptodev 0000:3d:01.0_qat_sym 00:09:17.256 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.256 CRYPTODEV: Creating cryptodev 0000:3d:01.0_qat_asym 00:09:17.256 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.256 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:09:17.256 CRYPTODEV: Creating 
cryptodev 0000:3d:01.1_qat_sym 00:09:17.256 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.256 CRYPTODEV: Creating cryptodev 0000:3d:01.1_qat_asym 00:09:17.256 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3d:01.2_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3d:01.2_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3d:01.3_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3d:01.3_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3d:01.4_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3d:01.4_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3d:01.5_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3d:01.5_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3d:01.6_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3d:01.6_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3d:01.7_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3d:01.7_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3d:02.0_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3d:02.0_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3d:02.1_qat_sym 00:09:17.257 
CRYPTODEV: Initialisation parameters - name: 0000:3d:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3d:02.1_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3d:02.2_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3d:02.2_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3d:02.3_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3d:02.3_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3d:02.4_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3d:02.4_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3d:02.5_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3d:02.5_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3d:02.6_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3d:02.6_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3d:02.7_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3d:02.7_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:01.0_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:01.0_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:01.1_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 
0000:3f:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:01.1_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:01.2_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:01.2_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:01.3_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:01.3_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:01.4_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:01.4_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:01.5_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:01.5_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:01.6_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:01.6_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:01.7_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:01.7_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:02.0_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:02.0_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:02.1_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.1_qat_sym,socket id: 0, max queue 
pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:02.1_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:02.2_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:02.2_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:02.3_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:02.3_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:02.4_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:02.4_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:02.5_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:02.5_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:02.6_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:02.6_qat_asym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.257 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:09:17.257 CRYPTODEV: Creating cryptodev 0000:3f:02.7_qat_sym 00:09:17.257 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:3f:02.7_qat_asym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:09:17.258 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.0 (socket 1) 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:01.0_qat_sym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:01.0_qat_sym,socket id: 1, max queue pairs: 0 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:01.0_qat_asym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:01.0_qat_asym,socket id: 1, max queue pairs: 0 00:09:17.258 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.1 (socket 1) 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:01.1_qat_sym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:01.1_qat_sym,socket id: 1, max queue pairs: 0 00:09:17.258 CRYPTODEV: Creating 
cryptodev 0000:da:01.1_qat_asym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:01.1_qat_asym,socket id: 1, max queue pairs: 0 00:09:17.258 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.2 (socket 1) 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:01.2_qat_sym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:01.2_qat_sym,socket id: 1, max queue pairs: 0 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:01.2_qat_asym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:01.2_qat_asym,socket id: 1, max queue pairs: 0 00:09:17.258 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.3 (socket 1) 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:01.3_qat_sym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:01.3_qat_sym,socket id: 1, max queue pairs: 0 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:01.3_qat_asym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:01.3_qat_asym,socket id: 1, max queue pairs: 0 00:09:17.258 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.4 (socket 1) 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:01.4_qat_sym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:01.4_qat_sym,socket id: 1, max queue pairs: 0 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:01.4_qat_asym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:01.4_qat_asym,socket id: 1, max queue pairs: 0 00:09:17.258 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.5 (socket 1) 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:01.5_qat_sym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:01.5_qat_sym,socket id: 1, max queue pairs: 0 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:01.5_qat_asym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:01.5_qat_asym,socket id: 1, max queue pairs: 0 00:09:17.258 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.6 (socket 1) 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:01.6_qat_sym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:01.6_qat_sym,socket id: 1, max queue pairs: 0 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:01.6_qat_asym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:01.6_qat_asym,socket id: 1, max queue pairs: 0 00:09:17.258 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.7 (socket 1) 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:01.7_qat_sym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:01.7_qat_sym,socket id: 1, max queue pairs: 0 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:01.7_qat_asym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:01.7_qat_asym,socket id: 1, max queue pairs: 0 00:09:17.258 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.0 (socket 1) 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:02.0_qat_sym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:02.0_qat_sym,socket id: 1, max queue pairs: 0 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:02.0_qat_asym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:02.0_qat_asym,socket id: 1, max queue pairs: 0 00:09:17.258 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.1 (socket 1) 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:02.1_qat_sym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:02.1_qat_sym,socket id: 1, max queue pairs: 0 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:02.1_qat_asym 00:09:17.258 
CRYPTODEV: Initialisation parameters - name: 0000:da:02.1_qat_asym,socket id: 1, max queue pairs: 0 00:09:17.258 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.2 (socket 1) 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:02.2_qat_sym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:02.2_qat_sym,socket id: 1, max queue pairs: 0 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:02.2_qat_asym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:02.2_qat_asym,socket id: 1, max queue pairs: 0 00:09:17.258 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.3 (socket 1) 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:02.3_qat_sym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:02.3_qat_sym,socket id: 1, max queue pairs: 0 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:02.3_qat_asym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:02.3_qat_asym,socket id: 1, max queue pairs: 0 00:09:17.258 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.4 (socket 1) 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:02.4_qat_sym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:02.4_qat_sym,socket id: 1, max queue pairs: 0 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:02.4_qat_asym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:02.4_qat_asym,socket id: 1, max queue pairs: 0 00:09:17.258 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.5 (socket 1) 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:02.5_qat_sym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:02.5_qat_sym,socket id: 1, max queue pairs: 0 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:02.5_qat_asym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:02.5_qat_asym,socket id: 1, max queue pairs: 0 00:09:17.258 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.6 (socket 1) 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:02.6_qat_sym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:02.6_qat_sym,socket id: 1, max queue pairs: 0 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:02.6_qat_asym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:02.6_qat_asym,socket id: 1, max queue pairs: 0 00:09:17.258 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.7 (socket 1) 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:02.7_qat_sym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:02.7_qat_sym,socket id: 1, max queue pairs: 0 00:09:17.258 CRYPTODEV: Creating cryptodev 0000:da:02.7_qat_asym 00:09:17.258 CRYPTODEV: Initialisation parameters - name: 0000:da:02.7_qat_asym,socket id: 1, max queue pairs: 0 00:09:17.258 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:17.258 00:09:17.258 00:09:17.258 CUnit - A unit testing framework for C - Version 2.1-3 00:09:17.258 http://cunit.sourceforge.net/ 00:09:17.258 00:09:17.258 00:09:17.258 Suite: memory 00:09:17.258 Test: test ... 
00:09:17.258 register 0x200000200000 2097152
00:09:17.258 register 0x201000a00000 2097152
00:09:17.258 malloc 3145728
00:09:17.258 register 0x200000400000 4194304
00:09:17.258 buf 0x200000500000 len 3145728 PASSED
00:09:17.258 malloc 64
00:09:17.258 buf 0x2000004fff40 len 64 PASSED
00:09:17.258 malloc 4194304
00:09:17.258 register 0x200000800000 6291456
00:09:17.258 buf 0x200000a00000 len 4194304 PASSED
00:09:17.258 free 0x200000500000 3145728
00:09:17.258 free 0x2000004fff40 64
00:09:17.258 unregister 0x200000400000 4194304 PASSED
00:09:17.258 free 0x200000a00000 4194304
00:09:17.258 unregister 0x200000800000 6291456 PASSED
00:09:17.258 malloc 8388608
00:09:17.258 register 0x200000400000 10485760
00:09:17.258 buf 0x200000600000 len 8388608 PASSED
00:09:17.258 free 0x200000600000 8388608
00:09:17.258 unregister 0x200000400000 10485760 PASSED
00:09:17.258 passed
00:09:17.258
00:09:17.258 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:09:17.258               suites      1      1    n/a      0        0
00:09:17.258                tests      1      1      1      0        0
00:09:17.258              asserts     16     16     16      0      n/a
00:09:17.258
00:09:17.258 Elapsed time = 0.007 seconds
00:09:17.258
00:09:17.258 real 0m0.111s
00:09:17.258 user 0m0.031s
00:09:17.258 sys 0m0.079s
00:09:17.258 02:15:07 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:17.258 02:15:07 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:09:17.258 ************************************
00:09:17.258 END TEST env_mem_callbacks
00:09:17.258 ************************************
00:09:17.258 02:15:07 env -- common/autotest_common.sh@1142 -- # return 0
00:09:17.258
00:09:17.258 real 0m9.003s
00:09:17.258 user 0m6.220s
00:09:17.258 sys 0m1.857s
00:09:17.258 02:15:07 env -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:17.258 02:15:07 env -- common/autotest_common.sh@10 -- # set +x
00:09:17.258 ************************************
00:09:17.258 END TEST env
00:09:17.258 ************************************
00:09:17.258 02:15:07 -- common/autotest_common.sh@1142 -- # return 0
00:09:17.258 02:15:07 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh
00:09:17.258 02:15:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:09:17.258 02:15:07 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:17.258 02:15:07 -- common/autotest_common.sh@10 -- # set +x
00:09:17.258 ************************************
00:09:17.258 START TEST rpc
00:09:17.258 ************************************
00:09:17.258 02:15:07 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh
00:09:17.518 * Looking for test storage...
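[Annotation] The EAL probe at the top of this section created one _qat_sym and one _qat_asym cryptodev for every Intel QAT virtual function (PCI ID 8086:37c9) on sockets 0 and 1 before the CUnit env suite above ran. A quick way to cross-check that device list from the shell is sketched below; the $SPDK_DIR path and the use of scripts/setup.sh are assumptions about the local checkout, not something the test itself runs.

```bash
#!/usr/bin/env bash
# List the QAT VFs (8086:37c9) that EAL probed above and show, for each BDF,
# which kernel driver it is bound to and which NUMA socket it sits on.
set -e

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/crypto-phy-autotest/spdk}  # assumed checkout path

# -D prints full domain:bus:dev.func addresses, matching the 0000:3f:02.x /
# 0000:da:02.x names used in the cryptodev log lines.
mapfile -t qat_vfs < <(lspci -D -d 8086:37c9 | awk '{print $1}')
echo "found ${#qat_vfs[@]} QAT VFs"

for bdf in "${qat_vfs[@]}"; do
    drv=none
    if [ -e "/sys/bus/pci/devices/$bdf/driver" ]; then
        drv=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
    fi
    echo "$bdf driver=$drv socket=$(cat /sys/bus/pci/devices/$bdf/numa_node)"
done

# SPDK's own tooling gives the same view:
sudo "$SPDK_DIR/scripts/setup.sh" status | grep -i qat || true
```

Each VF appears twice in the cryptodev list because the QAT PMD registers separate symmetric and asymmetric crypto devices for the same function.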
00:09:17.518 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:09:17.518 02:15:07 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1850482 00:09:17.518 02:15:07 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:17.518 02:15:07 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1850482 00:09:17.518 02:15:07 rpc -- common/autotest_common.sh@829 -- # '[' -z 1850482 ']' 00:09:17.518 02:15:07 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.518 02:15:07 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:09:17.518 02:15:07 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:17.518 02:15:07 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.518 02:15:07 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:17.518 02:15:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.518 [2024-07-11 02:15:07.812020] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:09:17.518 [2024-07-11 02:15:07.812088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1850482 ] 00:09:17.777 [2024-07-11 02:15:07.949406] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.777 [2024-07-11 02:15:07.999923] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:17.777 [2024-07-11 02:15:07.999970] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1850482' to capture a snapshot of events at runtime. 00:09:17.777 [2024-07-11 02:15:07.999984] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.777 [2024-07-11 02:15:07.999998] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.777 [2024-07-11 02:15:08.000009] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1850482 for offline analysis/debug. 
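[Annotation] The xtrace above is rpc.sh's standard bring-up: spdk_tgt is started with the bdev tracepoint group enabled (-e bdev), the pid is recorded, a trap guarantees killprocess on any exit, and waitforlisten blocks until /var/tmp/spdk.sock answers; the app_setup_trace notices then name the /dev/shm trace file for this pid. A minimal standalone sketch of the same pattern, assuming an SPDK build under $SPDK_DIR, configured hugepages, and root privileges:

```bash
#!/usr/bin/env bash
# Start spdk_tgt the way rpc.sh does, wait for the RPC socket, then read the
# tracepoint shm file that the app_setup_trace NOTICE above points at.
set -e

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/crypto-phy-autotest/spdk}  # assumed checkout path
rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }

"$SPDK_DIR/build/bin/spdk_tgt" -e bdev &
spdk_pid=$!
trap 'kill $spdk_pid 2>/dev/null || true' EXIT SIGINT SIGTERM

# waitforlisten equivalent: the target is up once any RPC gets an answer.
for _ in $(seq 1 100); do
    rpc spdk_get_version >/dev/null 2>&1 && break
    sleep 0.1
done
rpc spdk_get_version

# Snapshot the bdev tracepoints exactly as the NOTICE above suggests.
"$SPDK_DIR/build/bin/spdk_trace" -s spdk_tgt -p "$spdk_pid" | head
```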
00:09:17.777 [2024-07-11 02:15:08.000039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.035 02:15:08 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:18.035 02:15:08 rpc -- common/autotest_common.sh@862 -- # return 0 00:09:18.035 02:15:08 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:09:18.035 02:15:08 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:09:18.035 02:15:08 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:18.035 02:15:08 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:18.035 02:15:08 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:18.035 02:15:08 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:18.035 02:15:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.035 ************************************ 00:09:18.035 START TEST rpc_integrity 00:09:18.035 ************************************ 00:09:18.035 02:15:08 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:09:18.035 02:15:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:18.035 02:15:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.035 02:15:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:18.035 02:15:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.035 02:15:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:18.035 02:15:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:18.035 02:15:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:18.035 02:15:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:18.035 02:15:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.035 02:15:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:18.035 02:15:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.035 02:15:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:18.035 02:15:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:18.035 02:15:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.035 02:15:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:18.035 02:15:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.035 02:15:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:18.035 { 00:09:18.035 "name": "Malloc0", 00:09:18.035 "aliases": [ 00:09:18.035 "9fedb4f2-acc2-4114-b7c7-238b6de2ab48" 00:09:18.035 ], 00:09:18.035 "product_name": "Malloc disk", 00:09:18.035 "block_size": 512, 00:09:18.035 "num_blocks": 16384, 00:09:18.035 "uuid": "9fedb4f2-acc2-4114-b7c7-238b6de2ab48", 00:09:18.035 "assigned_rate_limits": { 00:09:18.035 "rw_ios_per_sec": 0, 00:09:18.035 "rw_mbytes_per_sec": 0, 00:09:18.035 "r_mbytes_per_sec": 0, 00:09:18.035 "w_mbytes_per_sec": 0 00:09:18.035 }, 00:09:18.035 
"claimed": false, 00:09:18.035 "zoned": false, 00:09:18.035 "supported_io_types": { 00:09:18.035 "read": true, 00:09:18.035 "write": true, 00:09:18.035 "unmap": true, 00:09:18.035 "flush": true, 00:09:18.035 "reset": true, 00:09:18.035 "nvme_admin": false, 00:09:18.035 "nvme_io": false, 00:09:18.035 "nvme_io_md": false, 00:09:18.035 "write_zeroes": true, 00:09:18.035 "zcopy": true, 00:09:18.035 "get_zone_info": false, 00:09:18.035 "zone_management": false, 00:09:18.035 "zone_append": false, 00:09:18.035 "compare": false, 00:09:18.035 "compare_and_write": false, 00:09:18.035 "abort": true, 00:09:18.035 "seek_hole": false, 00:09:18.035 "seek_data": false, 00:09:18.035 "copy": true, 00:09:18.035 "nvme_iov_md": false 00:09:18.035 }, 00:09:18.035 "memory_domains": [ 00:09:18.035 { 00:09:18.035 "dma_device_id": "system", 00:09:18.035 "dma_device_type": 1 00:09:18.035 }, 00:09:18.035 { 00:09:18.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.035 "dma_device_type": 2 00:09:18.035 } 00:09:18.035 ], 00:09:18.035 "driver_specific": {} 00:09:18.035 } 00:09:18.035 ]' 00:09:18.035 02:15:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:18.035 02:15:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:18.035 02:15:08 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:18.035 02:15:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.035 02:15:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:18.035 [2024-07-11 02:15:08.434220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:18.035 [2024-07-11 02:15:08.434261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.035 [2024-07-11 02:15:08.434281] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1047570 00:09:18.035 [2024-07-11 02:15:08.434299] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.035 [2024-07-11 02:15:08.435820] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.035 [2024-07-11 02:15:08.435849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:18.035 Passthru0 00:09:18.035 02:15:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.035 02:15:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:18.035 02:15:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.035 02:15:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:18.295 02:15:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.295 02:15:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:18.295 { 00:09:18.295 "name": "Malloc0", 00:09:18.295 "aliases": [ 00:09:18.295 "9fedb4f2-acc2-4114-b7c7-238b6de2ab48" 00:09:18.295 ], 00:09:18.295 "product_name": "Malloc disk", 00:09:18.295 "block_size": 512, 00:09:18.295 "num_blocks": 16384, 00:09:18.295 "uuid": "9fedb4f2-acc2-4114-b7c7-238b6de2ab48", 00:09:18.295 "assigned_rate_limits": { 00:09:18.295 "rw_ios_per_sec": 0, 00:09:18.295 "rw_mbytes_per_sec": 0, 00:09:18.295 "r_mbytes_per_sec": 0, 00:09:18.295 "w_mbytes_per_sec": 0 00:09:18.295 }, 00:09:18.295 "claimed": true, 00:09:18.295 "claim_type": "exclusive_write", 00:09:18.295 "zoned": false, 00:09:18.295 "supported_io_types": { 00:09:18.295 "read": true, 00:09:18.295 "write": true, 00:09:18.295 "unmap": true, 00:09:18.295 "flush": true, 
00:09:18.295 "reset": true, 00:09:18.295 "nvme_admin": false, 00:09:18.295 "nvme_io": false, 00:09:18.295 "nvme_io_md": false, 00:09:18.295 "write_zeroes": true, 00:09:18.295 "zcopy": true, 00:09:18.295 "get_zone_info": false, 00:09:18.295 "zone_management": false, 00:09:18.295 "zone_append": false, 00:09:18.295 "compare": false, 00:09:18.295 "compare_and_write": false, 00:09:18.295 "abort": true, 00:09:18.295 "seek_hole": false, 00:09:18.295 "seek_data": false, 00:09:18.295 "copy": true, 00:09:18.295 "nvme_iov_md": false 00:09:18.295 }, 00:09:18.295 "memory_domains": [ 00:09:18.295 { 00:09:18.295 "dma_device_id": "system", 00:09:18.295 "dma_device_type": 1 00:09:18.295 }, 00:09:18.295 { 00:09:18.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.295 "dma_device_type": 2 00:09:18.295 } 00:09:18.295 ], 00:09:18.295 "driver_specific": {} 00:09:18.295 }, 00:09:18.295 { 00:09:18.295 "name": "Passthru0", 00:09:18.295 "aliases": [ 00:09:18.295 "fc8a6882-765a-5437-b02a-c23ad3fb9af0" 00:09:18.295 ], 00:09:18.295 "product_name": "passthru", 00:09:18.295 "block_size": 512, 00:09:18.295 "num_blocks": 16384, 00:09:18.295 "uuid": "fc8a6882-765a-5437-b02a-c23ad3fb9af0", 00:09:18.295 "assigned_rate_limits": { 00:09:18.295 "rw_ios_per_sec": 0, 00:09:18.295 "rw_mbytes_per_sec": 0, 00:09:18.295 "r_mbytes_per_sec": 0, 00:09:18.295 "w_mbytes_per_sec": 0 00:09:18.295 }, 00:09:18.295 "claimed": false, 00:09:18.295 "zoned": false, 00:09:18.295 "supported_io_types": { 00:09:18.295 "read": true, 00:09:18.295 "write": true, 00:09:18.295 "unmap": true, 00:09:18.295 "flush": true, 00:09:18.295 "reset": true, 00:09:18.295 "nvme_admin": false, 00:09:18.295 "nvme_io": false, 00:09:18.295 "nvme_io_md": false, 00:09:18.295 "write_zeroes": true, 00:09:18.295 "zcopy": true, 00:09:18.295 "get_zone_info": false, 00:09:18.295 "zone_management": false, 00:09:18.295 "zone_append": false, 00:09:18.295 "compare": false, 00:09:18.295 "compare_and_write": false, 00:09:18.295 "abort": true, 00:09:18.295 "seek_hole": false, 00:09:18.295 "seek_data": false, 00:09:18.295 "copy": true, 00:09:18.295 "nvme_iov_md": false 00:09:18.295 }, 00:09:18.295 "memory_domains": [ 00:09:18.295 { 00:09:18.295 "dma_device_id": "system", 00:09:18.295 "dma_device_type": 1 00:09:18.295 }, 00:09:18.295 { 00:09:18.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.295 "dma_device_type": 2 00:09:18.295 } 00:09:18.295 ], 00:09:18.295 "driver_specific": { 00:09:18.295 "passthru": { 00:09:18.295 "name": "Passthru0", 00:09:18.295 "base_bdev_name": "Malloc0" 00:09:18.295 } 00:09:18.295 } 00:09:18.295 } 00:09:18.295 ]' 00:09:18.295 02:15:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:18.295 02:15:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:18.295 02:15:08 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:18.295 02:15:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.295 02:15:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:18.295 02:15:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.295 02:15:08 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:18.295 02:15:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.295 02:15:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:18.295 02:15:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.295 02:15:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd 
bdev_get_bdevs 00:09:18.295 02:15:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.295 02:15:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:18.295 02:15:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.295 02:15:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:18.295 02:15:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:18.295 02:15:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:18.295 00:09:18.295 real 0m0.308s 00:09:18.295 user 0m0.200s 00:09:18.295 sys 0m0.046s 00:09:18.295 02:15:08 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:18.295 02:15:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:18.295 ************************************ 00:09:18.295 END TEST rpc_integrity 00:09:18.295 ************************************ 00:09:18.295 02:15:08 rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:18.295 02:15:08 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:18.295 02:15:08 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:18.295 02:15:08 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:18.295 02:15:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.295 ************************************ 00:09:18.295 START TEST rpc_plugins 00:09:18.295 ************************************ 00:09:18.295 02:15:08 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:09:18.295 02:15:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:18.295 02:15:08 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.295 02:15:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:18.295 02:15:08 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.295 02:15:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:18.295 02:15:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:18.295 02:15:08 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.295 02:15:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:18.555 02:15:08 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.555 02:15:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:18.555 { 00:09:18.555 "name": "Malloc1", 00:09:18.555 "aliases": [ 00:09:18.555 "155d301f-ae0b-4399-b82d-caaa98e49b5e" 00:09:18.555 ], 00:09:18.555 "product_name": "Malloc disk", 00:09:18.555 "block_size": 4096, 00:09:18.555 "num_blocks": 256, 00:09:18.555 "uuid": "155d301f-ae0b-4399-b82d-caaa98e49b5e", 00:09:18.555 "assigned_rate_limits": { 00:09:18.555 "rw_ios_per_sec": 0, 00:09:18.555 "rw_mbytes_per_sec": 0, 00:09:18.555 "r_mbytes_per_sec": 0, 00:09:18.555 "w_mbytes_per_sec": 0 00:09:18.555 }, 00:09:18.555 "claimed": false, 00:09:18.555 "zoned": false, 00:09:18.555 "supported_io_types": { 00:09:18.555 "read": true, 00:09:18.555 "write": true, 00:09:18.555 "unmap": true, 00:09:18.555 "flush": true, 00:09:18.555 "reset": true, 00:09:18.555 "nvme_admin": false, 00:09:18.555 "nvme_io": false, 00:09:18.555 "nvme_io_md": false, 00:09:18.555 "write_zeroes": true, 00:09:18.555 "zcopy": true, 00:09:18.555 "get_zone_info": false, 00:09:18.555 "zone_management": false, 00:09:18.555 "zone_append": false, 00:09:18.555 "compare": false, 00:09:18.555 "compare_and_write": false, 00:09:18.555 "abort": true, 00:09:18.555 "seek_hole": false, 00:09:18.555 "seek_data": 
false, 00:09:18.555 "copy": true, 00:09:18.555 "nvme_iov_md": false 00:09:18.555 }, 00:09:18.556 "memory_domains": [ 00:09:18.556 { 00:09:18.556 "dma_device_id": "system", 00:09:18.556 "dma_device_type": 1 00:09:18.556 }, 00:09:18.556 { 00:09:18.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.556 "dma_device_type": 2 00:09:18.556 } 00:09:18.556 ], 00:09:18.556 "driver_specific": {} 00:09:18.556 } 00:09:18.556 ]' 00:09:18.556 02:15:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:18.556 02:15:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:18.556 02:15:08 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:18.556 02:15:08 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.556 02:15:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:18.556 02:15:08 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.556 02:15:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:18.556 02:15:08 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.556 02:15:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:18.556 02:15:08 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.556 02:15:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:18.556 02:15:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:18.556 02:15:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:18.556 00:09:18.556 real 0m0.162s 00:09:18.556 user 0m0.100s 00:09:18.556 sys 0m0.027s 00:09:18.556 02:15:08 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:18.556 02:15:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:18.556 ************************************ 00:09:18.556 END TEST rpc_plugins 00:09:18.556 ************************************ 00:09:18.556 02:15:08 rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:18.556 02:15:08 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:18.556 02:15:08 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:18.556 02:15:08 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:18.556 02:15:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.556 ************************************ 00:09:18.556 START TEST rpc_trace_cmd_test 00:09:18.556 ************************************ 00:09:18.556 02:15:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:09:18.556 02:15:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:18.556 02:15:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:18.556 02:15:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.556 02:15:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.556 02:15:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.556 02:15:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:18.556 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1850482", 00:09:18.556 "tpoint_group_mask": "0x8", 00:09:18.556 "iscsi_conn": { 00:09:18.556 "mask": "0x2", 00:09:18.556 "tpoint_mask": "0x0" 00:09:18.556 }, 00:09:18.556 "scsi": { 00:09:18.556 "mask": "0x4", 00:09:18.556 "tpoint_mask": "0x0" 00:09:18.556 }, 00:09:18.556 "bdev": { 00:09:18.556 "mask": "0x8", 00:09:18.556 "tpoint_mask": "0xffffffffffffffff" 00:09:18.556 }, 00:09:18.556 "nvmf_rdma": { 00:09:18.556 
"mask": "0x10", 00:09:18.556 "tpoint_mask": "0x0" 00:09:18.556 }, 00:09:18.556 "nvmf_tcp": { 00:09:18.556 "mask": "0x20", 00:09:18.556 "tpoint_mask": "0x0" 00:09:18.556 }, 00:09:18.556 "ftl": { 00:09:18.556 "mask": "0x40", 00:09:18.556 "tpoint_mask": "0x0" 00:09:18.556 }, 00:09:18.556 "blobfs": { 00:09:18.556 "mask": "0x80", 00:09:18.556 "tpoint_mask": "0x0" 00:09:18.556 }, 00:09:18.556 "dsa": { 00:09:18.556 "mask": "0x200", 00:09:18.556 "tpoint_mask": "0x0" 00:09:18.556 }, 00:09:18.556 "thread": { 00:09:18.556 "mask": "0x400", 00:09:18.556 "tpoint_mask": "0x0" 00:09:18.556 }, 00:09:18.556 "nvme_pcie": { 00:09:18.556 "mask": "0x800", 00:09:18.556 "tpoint_mask": "0x0" 00:09:18.556 }, 00:09:18.556 "iaa": { 00:09:18.556 "mask": "0x1000", 00:09:18.556 "tpoint_mask": "0x0" 00:09:18.556 }, 00:09:18.556 "nvme_tcp": { 00:09:18.556 "mask": "0x2000", 00:09:18.556 "tpoint_mask": "0x0" 00:09:18.556 }, 00:09:18.556 "bdev_nvme": { 00:09:18.556 "mask": "0x4000", 00:09:18.556 "tpoint_mask": "0x0" 00:09:18.556 }, 00:09:18.556 "sock": { 00:09:18.556 "mask": "0x8000", 00:09:18.556 "tpoint_mask": "0x0" 00:09:18.556 } 00:09:18.556 }' 00:09:18.556 02:15:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:18.816 02:15:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:09:18.816 02:15:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:18.816 02:15:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:18.816 02:15:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:18.816 02:15:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:18.816 02:15:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:18.816 02:15:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:18.816 02:15:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:18.816 02:15:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:18.816 00:09:18.816 real 0m0.253s 00:09:18.816 user 0m0.201s 00:09:18.816 sys 0m0.042s 00:09:18.816 02:15:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:18.816 02:15:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.816 ************************************ 00:09:18.816 END TEST rpc_trace_cmd_test 00:09:18.816 ************************************ 00:09:18.816 02:15:09 rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:18.816 02:15:09 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:18.816 02:15:09 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:18.816 02:15:09 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:18.816 02:15:09 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:18.816 02:15:09 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:18.816 02:15:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.076 ************************************ 00:09:19.076 START TEST rpc_daemon_integrity 00:09:19.076 ************************************ 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:19.076 { 00:09:19.076 "name": "Malloc2", 00:09:19.076 "aliases": [ 00:09:19.076 "6ffdfa59-ab17-4e8d-8f76-69adf693fc02" 00:09:19.076 ], 00:09:19.076 "product_name": "Malloc disk", 00:09:19.076 "block_size": 512, 00:09:19.076 "num_blocks": 16384, 00:09:19.076 "uuid": "6ffdfa59-ab17-4e8d-8f76-69adf693fc02", 00:09:19.076 "assigned_rate_limits": { 00:09:19.076 "rw_ios_per_sec": 0, 00:09:19.076 "rw_mbytes_per_sec": 0, 00:09:19.076 "r_mbytes_per_sec": 0, 00:09:19.076 "w_mbytes_per_sec": 0 00:09:19.076 }, 00:09:19.076 "claimed": false, 00:09:19.076 "zoned": false, 00:09:19.076 "supported_io_types": { 00:09:19.076 "read": true, 00:09:19.076 "write": true, 00:09:19.076 "unmap": true, 00:09:19.076 "flush": true, 00:09:19.076 "reset": true, 00:09:19.076 "nvme_admin": false, 00:09:19.076 "nvme_io": false, 00:09:19.076 "nvme_io_md": false, 00:09:19.076 "write_zeroes": true, 00:09:19.076 "zcopy": true, 00:09:19.076 "get_zone_info": false, 00:09:19.076 "zone_management": false, 00:09:19.076 "zone_append": false, 00:09:19.076 "compare": false, 00:09:19.076 "compare_and_write": false, 00:09:19.076 "abort": true, 00:09:19.076 "seek_hole": false, 00:09:19.076 "seek_data": false, 00:09:19.076 "copy": true, 00:09:19.076 "nvme_iov_md": false 00:09:19.076 }, 00:09:19.076 "memory_domains": [ 00:09:19.076 { 00:09:19.076 "dma_device_id": "system", 00:09:19.076 "dma_device_type": 1 00:09:19.076 }, 00:09:19.076 { 00:09:19.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.076 "dma_device_type": 2 00:09:19.076 } 00:09:19.076 ], 00:09:19.076 "driver_specific": {} 00:09:19.076 } 00:09:19.076 ]' 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:19.076 [2024-07-11 02:15:09.417016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:19.076 [2024-07-11 02:15:09.417052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.076 
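[Annotation] The vbdev_passthru notices here, like the matching ones in rpc_integrity earlier, record the claim handshake: the passthru vbdev matches its base bdev, opens it, creates an io_device, and claims the base exclusively (the notices continuing below), which is why the base bdev's JSON reports "claimed": true with "claim_type": "exclusive_write". Driven by hand against an already-running target, the same round trip looks like the sketch below; rpc.py is assumed to be reachable and talking to the default /var/tmp/spdk.sock:

```bash
# Create an 8 MB malloc bdev with 512-byte blocks, stack a passthru vbdev on
# top of it, verify the claim the way the integrity tests do, then unwind.
rpc.py bdev_malloc_create 8 512                        # prints: Malloc0
rpc.py bdev_passthru_create -b Malloc0 -p Passthru0    # prints: Passthru0

rpc.py bdev_get_bdevs | jq length                      # 2: base + passthru
rpc.py bdev_get_bdevs -b Malloc0 | jq '.[0].claimed'   # true

rpc.py bdev_passthru_delete Passthru0                  # releases the claim
rpc.py bdev_malloc_delete Malloc0
rpc.py bdev_get_bdevs | jq length                      # back to 0
```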
[2024-07-11 02:15:09.417070] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1049090 00:09:19.076 [2024-07-11 02:15:09.417082] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.076 [2024-07-11 02:15:09.418419] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.076 [2024-07-11 02:15:09.418445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:19.076 Passthru0 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:19.076 { 00:09:19.076 "name": "Malloc2", 00:09:19.076 "aliases": [ 00:09:19.076 "6ffdfa59-ab17-4e8d-8f76-69adf693fc02" 00:09:19.076 ], 00:09:19.076 "product_name": "Malloc disk", 00:09:19.076 "block_size": 512, 00:09:19.076 "num_blocks": 16384, 00:09:19.076 "uuid": "6ffdfa59-ab17-4e8d-8f76-69adf693fc02", 00:09:19.076 "assigned_rate_limits": { 00:09:19.076 "rw_ios_per_sec": 0, 00:09:19.076 "rw_mbytes_per_sec": 0, 00:09:19.076 "r_mbytes_per_sec": 0, 00:09:19.076 "w_mbytes_per_sec": 0 00:09:19.076 }, 00:09:19.076 "claimed": true, 00:09:19.076 "claim_type": "exclusive_write", 00:09:19.076 "zoned": false, 00:09:19.076 "supported_io_types": { 00:09:19.076 "read": true, 00:09:19.076 "write": true, 00:09:19.076 "unmap": true, 00:09:19.076 "flush": true, 00:09:19.076 "reset": true, 00:09:19.076 "nvme_admin": false, 00:09:19.076 "nvme_io": false, 00:09:19.076 "nvme_io_md": false, 00:09:19.076 "write_zeroes": true, 00:09:19.076 "zcopy": true, 00:09:19.076 "get_zone_info": false, 00:09:19.076 "zone_management": false, 00:09:19.076 "zone_append": false, 00:09:19.076 "compare": false, 00:09:19.076 "compare_and_write": false, 00:09:19.076 "abort": true, 00:09:19.076 "seek_hole": false, 00:09:19.076 "seek_data": false, 00:09:19.076 "copy": true, 00:09:19.076 "nvme_iov_md": false 00:09:19.076 }, 00:09:19.076 "memory_domains": [ 00:09:19.076 { 00:09:19.076 "dma_device_id": "system", 00:09:19.076 "dma_device_type": 1 00:09:19.076 }, 00:09:19.076 { 00:09:19.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.076 "dma_device_type": 2 00:09:19.076 } 00:09:19.076 ], 00:09:19.076 "driver_specific": {} 00:09:19.076 }, 00:09:19.076 { 00:09:19.076 "name": "Passthru0", 00:09:19.076 "aliases": [ 00:09:19.076 "9d1f77e5-5a51-5cfd-86e1-3ade9a8b1b7d" 00:09:19.076 ], 00:09:19.076 "product_name": "passthru", 00:09:19.076 "block_size": 512, 00:09:19.076 "num_blocks": 16384, 00:09:19.076 "uuid": "9d1f77e5-5a51-5cfd-86e1-3ade9a8b1b7d", 00:09:19.076 "assigned_rate_limits": { 00:09:19.076 "rw_ios_per_sec": 0, 00:09:19.076 "rw_mbytes_per_sec": 0, 00:09:19.076 "r_mbytes_per_sec": 0, 00:09:19.076 "w_mbytes_per_sec": 0 00:09:19.076 }, 00:09:19.076 "claimed": false, 00:09:19.076 "zoned": false, 00:09:19.076 "supported_io_types": { 00:09:19.076 "read": true, 00:09:19.076 "write": true, 00:09:19.076 "unmap": true, 00:09:19.076 "flush": true, 00:09:19.076 "reset": true, 00:09:19.076 "nvme_admin": false, 00:09:19.076 "nvme_io": false, 00:09:19.076 "nvme_io_md": false, 00:09:19.076 
"write_zeroes": true, 00:09:19.076 "zcopy": true, 00:09:19.076 "get_zone_info": false, 00:09:19.076 "zone_management": false, 00:09:19.076 "zone_append": false, 00:09:19.076 "compare": false, 00:09:19.076 "compare_and_write": false, 00:09:19.076 "abort": true, 00:09:19.076 "seek_hole": false, 00:09:19.076 "seek_data": false, 00:09:19.076 "copy": true, 00:09:19.076 "nvme_iov_md": false 00:09:19.076 }, 00:09:19.076 "memory_domains": [ 00:09:19.076 { 00:09:19.076 "dma_device_id": "system", 00:09:19.076 "dma_device_type": 1 00:09:19.076 }, 00:09:19.076 { 00:09:19.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.076 "dma_device_type": 2 00:09:19.076 } 00:09:19.076 ], 00:09:19.076 "driver_specific": { 00:09:19.076 "passthru": { 00:09:19.076 "name": "Passthru0", 00:09:19.076 "base_bdev_name": "Malloc2" 00:09:19.076 } 00:09:19.076 } 00:09:19.076 } 00:09:19.076 ]' 00:09:19.076 02:15:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:19.336 02:15:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:19.336 02:15:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:19.336 02:15:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.336 02:15:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:19.336 02:15:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.336 02:15:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:19.336 02:15:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.336 02:15:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:19.336 02:15:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.336 02:15:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:19.336 02:15:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.336 02:15:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:19.336 02:15:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.336 02:15:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:19.336 02:15:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:19.336 02:15:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:19.336 00:09:19.336 real 0m0.311s 00:09:19.336 user 0m0.196s 00:09:19.336 sys 0m0.053s 00:09:19.336 02:15:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:19.336 02:15:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:19.336 ************************************ 00:09:19.336 END TEST rpc_daemon_integrity 00:09:19.336 ************************************ 00:09:19.336 02:15:09 rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:19.336 02:15:09 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:19.336 02:15:09 rpc -- rpc/rpc.sh@84 -- # killprocess 1850482 00:09:19.336 02:15:09 rpc -- common/autotest_common.sh@948 -- # '[' -z 1850482 ']' 00:09:19.336 02:15:09 rpc -- common/autotest_common.sh@952 -- # kill -0 1850482 00:09:19.336 02:15:09 rpc -- common/autotest_common.sh@953 -- # uname 00:09:19.336 02:15:09 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:19.336 02:15:09 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1850482 00:09:19.336 02:15:09 rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:19.336 02:15:09 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:19.336 02:15:09 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1850482' 00:09:19.336 killing process with pid 1850482 00:09:19.336 02:15:09 rpc -- common/autotest_common.sh@967 -- # kill 1850482 00:09:19.336 02:15:09 rpc -- common/autotest_common.sh@972 -- # wait 1850482 00:09:19.905 00:09:19.905 real 0m2.398s 00:09:19.905 user 0m3.060s 00:09:19.905 sys 0m0.929s 00:09:19.905 02:15:10 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:19.905 02:15:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.905 ************************************ 00:09:19.905 END TEST rpc 00:09:19.905 ************************************ 00:09:19.905 02:15:10 -- common/autotest_common.sh@1142 -- # return 0 00:09:19.905 02:15:10 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:09:19.905 02:15:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:19.905 02:15:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:19.905 02:15:10 -- common/autotest_common.sh@10 -- # set +x 00:09:19.905 ************************************ 00:09:19.905 START TEST skip_rpc 00:09:19.905 ************************************ 00:09:19.905 02:15:10 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:09:19.905 * Looking for test storage... 00:09:19.905 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:09:19.905 02:15:10 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:09:19.905 02:15:10 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:09:19.905 02:15:10 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:19.905 02:15:10 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:19.905 02:15:10 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:19.905 02:15:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.905 ************************************ 00:09:19.905 START TEST skip_rpc 00:09:19.905 ************************************ 00:09:19.905 02:15:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:09:19.905 02:15:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1851001 00:09:19.905 02:15:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:19.905 02:15:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:19.905 02:15:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:19.905 [2024-07-11 02:15:10.324912] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
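[Annotation] skip_rpc, which starts just above, inverts the usual harness: the target is launched with --no-rpc-server (its EAL parameter dump continues below), so the test's NOT rpc_cmd wrapper must see the RPC fail cleanly rather than hang. A standalone version of that negative check, under the same $SPDK_DIR assumption as the earlier sketches:

```bash
#!/usr/bin/env bash
# Launch the target with its JSON-RPC server disabled and verify that an RPC
# call fails, mirroring skip_rpc's 'NOT rpc_cmd spdk_get_version' assertion.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/crypto-phy-autotest/spdk}  # assumed checkout path

"$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
spdk_pid=$!
sleep 5   # no socket will ever appear, so there is nothing to poll

if "$SPDK_DIR/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; then
    echo "FAIL: RPC succeeded with --no-rpc-server" >&2
    kill $spdk_pid; exit 1
fi
echo "PASS: RPC refused, as expected"
kill $spdk_pid && wait $spdk_pid 2>/dev/null || true
```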
00:09:19.905 [2024-07-11 02:15:10.324968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1851001 ] 00:09:20.164 [2024-07-11 02:15:10.442924] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.164 [2024-07-11 02:15:10.493319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1851001 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1851001 ']' 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1851001 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1851001 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1851001' 00:09:25.442 killing process with pid 1851001 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1851001 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1851001 00:09:25.442 00:09:25.442 real 0m5.431s 00:09:25.442 user 0m5.072s 00:09:25.442 sys 0m0.374s 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:25.442 02:15:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.442 ************************************ 00:09:25.442 END TEST skip_rpc 00:09:25.442 
************************************ 00:09:25.442 02:15:15 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:25.442 02:15:15 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:25.442 02:15:15 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:25.442 02:15:15 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:25.442 02:15:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.442 ************************************ 00:09:25.442 START TEST skip_rpc_with_json 00:09:25.442 ************************************ 00:09:25.442 02:15:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:09:25.442 02:15:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:25.442 02:15:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1851731 00:09:25.442 02:15:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:25.442 02:15:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:25.442 02:15:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1851731 00:09:25.442 02:15:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1851731 ']' 00:09:25.442 02:15:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.442 02:15:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:25.442 02:15:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.442 02:15:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:25.442 02:15:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:25.442 [2024-07-11 02:15:15.851648] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:09:25.442 [2024-07-11 02:15:15.851725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1851731 ] 00:09:25.702 [2024-07-11 02:15:15.990723] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.702 [2024-07-11 02:15:16.043160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.270 02:15:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:26.270 02:15:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:09:26.270 02:15:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:26.270 02:15:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.270 02:15:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:26.270 [2024-07-11 02:15:16.656547] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:26.270 request: 00:09:26.270 { 00:09:26.270 "trtype": "tcp", 00:09:26.270 "method": "nvmf_get_transports", 00:09:26.270 "req_id": 1 00:09:26.270 } 00:09:26.270 Got JSON-RPC error response 00:09:26.270 response: 00:09:26.270 { 00:09:26.270 "code": -19, 00:09:26.270 "message": "No such device" 00:09:26.270 } 00:09:26.270 02:15:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:26.270 02:15:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:26.270 02:15:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.270 02:15:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:26.270 [2024-07-11 02:15:16.668699] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:26.270 02:15:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.270 02:15:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:26.270 02:15:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.270 02:15:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:26.530 02:15:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.530 02:15:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:09:26.530 { 00:09:26.530 "subsystems": [ 00:09:26.530 { 00:09:26.530 "subsystem": "keyring", 00:09:26.530 "config": [] 00:09:26.530 }, 00:09:26.530 { 00:09:26.530 "subsystem": "iobuf", 00:09:26.530 "config": [ 00:09:26.530 { 00:09:26.530 "method": "iobuf_set_options", 00:09:26.530 "params": { 00:09:26.530 "small_pool_count": 8192, 00:09:26.530 "large_pool_count": 1024, 00:09:26.530 "small_bufsize": 8192, 00:09:26.530 "large_bufsize": 135168 00:09:26.530 } 00:09:26.530 } 00:09:26.530 ] 00:09:26.530 }, 00:09:26.530 { 00:09:26.530 "subsystem": "sock", 00:09:26.530 "config": [ 00:09:26.530 { 00:09:26.530 "method": "sock_set_default_impl", 00:09:26.530 "params": { 00:09:26.530 "impl_name": "posix" 00:09:26.530 } 00:09:26.530 }, 00:09:26.530 { 00:09:26.530 "method": "sock_impl_set_options", 00:09:26.530 "params": { 00:09:26.530 "impl_name": "ssl", 00:09:26.530 "recv_buf_size": 4096, 00:09:26.530 "send_buf_size": 4096, 
00:09:26.530 "enable_recv_pipe": true, 00:09:26.530 "enable_quickack": false, 00:09:26.530 "enable_placement_id": 0, 00:09:26.530 "enable_zerocopy_send_server": true, 00:09:26.530 "enable_zerocopy_send_client": false, 00:09:26.530 "zerocopy_threshold": 0, 00:09:26.530 "tls_version": 0, 00:09:26.530 "enable_ktls": false 00:09:26.530 } 00:09:26.530 }, 00:09:26.530 { 00:09:26.530 "method": "sock_impl_set_options", 00:09:26.530 "params": { 00:09:26.530 "impl_name": "posix", 00:09:26.530 "recv_buf_size": 2097152, 00:09:26.530 "send_buf_size": 2097152, 00:09:26.530 "enable_recv_pipe": true, 00:09:26.530 "enable_quickack": false, 00:09:26.530 "enable_placement_id": 0, 00:09:26.530 "enable_zerocopy_send_server": true, 00:09:26.530 "enable_zerocopy_send_client": false, 00:09:26.530 "zerocopy_threshold": 0, 00:09:26.530 "tls_version": 0, 00:09:26.530 "enable_ktls": false 00:09:26.530 } 00:09:26.530 } 00:09:26.530 ] 00:09:26.530 }, 00:09:26.530 { 00:09:26.530 "subsystem": "vmd", 00:09:26.531 "config": [] 00:09:26.531 }, 00:09:26.531 { 00:09:26.531 "subsystem": "accel", 00:09:26.531 "config": [ 00:09:26.531 { 00:09:26.531 "method": "accel_set_options", 00:09:26.531 "params": { 00:09:26.531 "small_cache_size": 128, 00:09:26.531 "large_cache_size": 16, 00:09:26.531 "task_count": 2048, 00:09:26.531 "sequence_count": 2048, 00:09:26.531 "buf_count": 2048 00:09:26.531 } 00:09:26.531 } 00:09:26.531 ] 00:09:26.531 }, 00:09:26.531 { 00:09:26.531 "subsystem": "bdev", 00:09:26.531 "config": [ 00:09:26.531 { 00:09:26.531 "method": "bdev_set_options", 00:09:26.531 "params": { 00:09:26.531 "bdev_io_pool_size": 65535, 00:09:26.531 "bdev_io_cache_size": 256, 00:09:26.531 "bdev_auto_examine": true, 00:09:26.531 "iobuf_small_cache_size": 128, 00:09:26.531 "iobuf_large_cache_size": 16 00:09:26.531 } 00:09:26.531 }, 00:09:26.531 { 00:09:26.531 "method": "bdev_raid_set_options", 00:09:26.531 "params": { 00:09:26.531 "process_window_size_kb": 1024 00:09:26.531 } 00:09:26.531 }, 00:09:26.531 { 00:09:26.531 "method": "bdev_iscsi_set_options", 00:09:26.531 "params": { 00:09:26.531 "timeout_sec": 30 00:09:26.531 } 00:09:26.531 }, 00:09:26.531 { 00:09:26.531 "method": "bdev_nvme_set_options", 00:09:26.531 "params": { 00:09:26.531 "action_on_timeout": "none", 00:09:26.531 "timeout_us": 0, 00:09:26.531 "timeout_admin_us": 0, 00:09:26.531 "keep_alive_timeout_ms": 10000, 00:09:26.531 "arbitration_burst": 0, 00:09:26.531 "low_priority_weight": 0, 00:09:26.531 "medium_priority_weight": 0, 00:09:26.531 "high_priority_weight": 0, 00:09:26.531 "nvme_adminq_poll_period_us": 10000, 00:09:26.531 "nvme_ioq_poll_period_us": 0, 00:09:26.531 "io_queue_requests": 0, 00:09:26.531 "delay_cmd_submit": true, 00:09:26.531 "transport_retry_count": 4, 00:09:26.531 "bdev_retry_count": 3, 00:09:26.531 "transport_ack_timeout": 0, 00:09:26.531 "ctrlr_loss_timeout_sec": 0, 00:09:26.531 "reconnect_delay_sec": 0, 00:09:26.531 "fast_io_fail_timeout_sec": 0, 00:09:26.531 "disable_auto_failback": false, 00:09:26.531 "generate_uuids": false, 00:09:26.531 "transport_tos": 0, 00:09:26.531 "nvme_error_stat": false, 00:09:26.531 "rdma_srq_size": 0, 00:09:26.531 "io_path_stat": false, 00:09:26.531 "allow_accel_sequence": false, 00:09:26.531 "rdma_max_cq_size": 0, 00:09:26.531 "rdma_cm_event_timeout_ms": 0, 00:09:26.531 "dhchap_digests": [ 00:09:26.531 "sha256", 00:09:26.531 "sha384", 00:09:26.531 "sha512" 00:09:26.531 ], 00:09:26.531 "dhchap_dhgroups": [ 00:09:26.531 "null", 00:09:26.531 "ffdhe2048", 00:09:26.531 "ffdhe3072", 00:09:26.531 "ffdhe4096", 00:09:26.531 
"ffdhe6144", 00:09:26.531 "ffdhe8192" 00:09:26.531 ] 00:09:26.531 } 00:09:26.531 }, 00:09:26.531 { 00:09:26.531 "method": "bdev_nvme_set_hotplug", 00:09:26.531 "params": { 00:09:26.531 "period_us": 100000, 00:09:26.531 "enable": false 00:09:26.531 } 00:09:26.531 }, 00:09:26.531 { 00:09:26.531 "method": "bdev_wait_for_examine" 00:09:26.531 } 00:09:26.531 ] 00:09:26.531 }, 00:09:26.531 { 00:09:26.531 "subsystem": "scsi", 00:09:26.531 "config": null 00:09:26.531 }, 00:09:26.531 { 00:09:26.531 "subsystem": "scheduler", 00:09:26.531 "config": [ 00:09:26.531 { 00:09:26.531 "method": "framework_set_scheduler", 00:09:26.531 "params": { 00:09:26.531 "name": "static" 00:09:26.531 } 00:09:26.531 } 00:09:26.531 ] 00:09:26.531 }, 00:09:26.531 { 00:09:26.531 "subsystem": "vhost_scsi", 00:09:26.531 "config": [] 00:09:26.531 }, 00:09:26.531 { 00:09:26.531 "subsystem": "vhost_blk", 00:09:26.531 "config": [] 00:09:26.531 }, 00:09:26.531 { 00:09:26.531 "subsystem": "ublk", 00:09:26.531 "config": [] 00:09:26.531 }, 00:09:26.531 { 00:09:26.531 "subsystem": "nbd", 00:09:26.531 "config": [] 00:09:26.531 }, 00:09:26.531 { 00:09:26.531 "subsystem": "nvmf", 00:09:26.531 "config": [ 00:09:26.531 { 00:09:26.531 "method": "nvmf_set_config", 00:09:26.531 "params": { 00:09:26.531 "discovery_filter": "match_any", 00:09:26.531 "admin_cmd_passthru": { 00:09:26.531 "identify_ctrlr": false 00:09:26.531 } 00:09:26.531 } 00:09:26.531 }, 00:09:26.531 { 00:09:26.531 "method": "nvmf_set_max_subsystems", 00:09:26.531 "params": { 00:09:26.531 "max_subsystems": 1024 00:09:26.531 } 00:09:26.531 }, 00:09:26.531 { 00:09:26.531 "method": "nvmf_set_crdt", 00:09:26.531 "params": { 00:09:26.531 "crdt1": 0, 00:09:26.531 "crdt2": 0, 00:09:26.531 "crdt3": 0 00:09:26.531 } 00:09:26.531 }, 00:09:26.531 { 00:09:26.531 "method": "nvmf_create_transport", 00:09:26.531 "params": { 00:09:26.531 "trtype": "TCP", 00:09:26.531 "max_queue_depth": 128, 00:09:26.531 "max_io_qpairs_per_ctrlr": 127, 00:09:26.531 "in_capsule_data_size": 4096, 00:09:26.531 "max_io_size": 131072, 00:09:26.531 "io_unit_size": 131072, 00:09:26.531 "max_aq_depth": 128, 00:09:26.531 "num_shared_buffers": 511, 00:09:26.531 "buf_cache_size": 4294967295, 00:09:26.531 "dif_insert_or_strip": false, 00:09:26.531 "zcopy": false, 00:09:26.531 "c2h_success": true, 00:09:26.531 "sock_priority": 0, 00:09:26.531 "abort_timeout_sec": 1, 00:09:26.531 "ack_timeout": 0, 00:09:26.531 "data_wr_pool_size": 0 00:09:26.531 } 00:09:26.531 } 00:09:26.531 ] 00:09:26.531 }, 00:09:26.531 { 00:09:26.531 "subsystem": "iscsi", 00:09:26.531 "config": [ 00:09:26.531 { 00:09:26.531 "method": "iscsi_set_options", 00:09:26.531 "params": { 00:09:26.531 "node_base": "iqn.2016-06.io.spdk", 00:09:26.531 "max_sessions": 128, 00:09:26.531 "max_connections_per_session": 2, 00:09:26.531 "max_queue_depth": 64, 00:09:26.531 "default_time2wait": 2, 00:09:26.531 "default_time2retain": 20, 00:09:26.531 "first_burst_length": 8192, 00:09:26.531 "immediate_data": true, 00:09:26.531 "allow_duplicated_isid": false, 00:09:26.531 "error_recovery_level": 0, 00:09:26.531 "nop_timeout": 60, 00:09:26.531 "nop_in_interval": 30, 00:09:26.531 "disable_chap": false, 00:09:26.531 "require_chap": false, 00:09:26.531 "mutual_chap": false, 00:09:26.531 "chap_group": 0, 00:09:26.531 "max_large_datain_per_connection": 64, 00:09:26.531 "max_r2t_per_connection": 4, 00:09:26.531 "pdu_pool_size": 36864, 00:09:26.531 "immediate_data_pool_size": 16384, 00:09:26.531 "data_out_pool_size": 2048 00:09:26.531 } 00:09:26.531 } 00:09:26.531 ] 00:09:26.531 } 
00:09:26.531 ] 00:09:26.531 } 00:09:26.531 02:15:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:26.531 02:15:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1851731 00:09:26.531 02:15:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1851731 ']' 00:09:26.531 02:15:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1851731 00:09:26.531 02:15:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:09:26.531 02:15:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:26.531 02:15:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1851731 00:09:26.531 02:15:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:26.531 02:15:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:26.531 02:15:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1851731' 00:09:26.531 killing process with pid 1851731 00:09:26.531 02:15:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1851731 00:09:26.531 02:15:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1851731 00:09:27.101 02:15:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1851918 00:09:27.101 02:15:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:27.101 02:15:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:09:32.441 02:15:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1851918 00:09:32.441 02:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1851918 ']' 00:09:32.441 02:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1851918 00:09:32.441 02:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:09:32.441 02:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:32.441 02:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1851918 00:09:32.441 02:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:32.441 02:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:32.441 02:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1851918' 00:09:32.441 killing process with pid 1851918 00:09:32.441 02:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1851918 00:09:32.441 02:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1851918 00:09:32.441 02:15:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:09:32.441 02:15:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:09:32.441 00:09:32.441 real 0m6.907s 00:09:32.441 user 0m6.478s 00:09:32.441 sys 0m0.865s 00:09:32.441 02:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:32.441 
02:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:32.441 ************************************ 00:09:32.441 END TEST skip_rpc_with_json 00:09:32.441 ************************************ 00:09:32.441 02:15:22 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:32.441 02:15:22 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:32.441 02:15:22 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:32.441 02:15:22 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:32.441 02:15:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.441 ************************************ 00:09:32.441 START TEST skip_rpc_with_delay 00:09:32.441 ************************************ 00:09:32.441 02:15:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:09:32.441 02:15:22 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:32.441 02:15:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:09:32.441 02:15:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:32.441 02:15:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:09:32.441 02:15:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:32.441 02:15:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:09:32.441 02:15:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:32.441 02:15:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:09:32.441 02:15:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:32.441 02:15:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:09:32.441 02:15:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:09:32.441 02:15:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:32.441 [2024-07-11 02:15:22.859494] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
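The skip_rpc_with_delay run above is a negative test: common/autotest_common.sh wraps spdk_tgt in NOT and valid_exec_arg, and the app.c *ERROR* plus a non-zero exit is the passing outcome, since --wait-for-rpc makes no sense once --no-rpc-server suppresses the RPC server. A minimal bash sketch of that assertion pattern, assuming the workspace layout of this job (the real NOT helper also normalizes exit codes, omitted here):

# NOT: succeed only when the wrapped command fails (sketch of the helper's shape)
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded -> the test should fail
    fi
    return 0        # any non-zero exit is the expected outcome
}
# the exact invocation traced above; it must die with the app.c error
NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
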
00:09:32.441 [2024-07-11 02:15:22.859595] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:09:32.700 02:15:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:09:32.700 02:15:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:32.700 02:15:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:32.700 02:15:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:32.700 00:09:32.700 real 0m0.096s 00:09:32.700 user 0m0.051s 00:09:32.700 sys 0m0.044s 00:09:32.700 02:15:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:32.700 02:15:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:32.700 ************************************ 00:09:32.700 END TEST skip_rpc_with_delay 00:09:32.700 ************************************ 00:09:32.700 02:15:22 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:32.700 02:15:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:32.700 02:15:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:32.700 02:15:22 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:32.700 02:15:22 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:32.700 02:15:22 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:32.700 02:15:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.700 ************************************ 00:09:32.700 START TEST exit_on_failed_rpc_init 00:09:32.700 ************************************ 00:09:32.700 02:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:09:32.700 02:15:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1852678 00:09:32.700 02:15:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1852678 00:09:32.700 02:15:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:32.700 02:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1852678 ']' 00:09:32.700 02:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.700 02:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:32.700 02:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.700 02:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:32.700 02:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:32.700 [2024-07-11 02:15:23.091851] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
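The es bookkeeping in these teardown blocks is how the suite classifies a failed child: a shell status above 128 means the process died on a signal, so 128 is subtracted first, and known values are then folded down to es=1 before the final (( !es == 0 )) assertion. A sketch of that normalization, following the es=234 -> 106 -> 1 sequence visible in the exit_on_failed_rpc_init trace below (the case arm is assumed; the log only shows the resulting values):

es=234                    # raw status of a failed spdk_tgt child
if (( es > 128 )); then
    es=$(( es - 128 ))    # 234 -> 106: remove the signal-death offset
fi
case "$es" in
    106) es=1 ;;          # assumed arm; the trace shows es=106 then es=1
esac
(( !es == 0 )) && echo 'non-zero status: the failure the test expected'
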
00:09:32.700 [2024-07-11 02:15:23.091985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1852678 ] 00:09:32.960 [2024-07-11 02:15:23.304308] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.960 [2024-07-11 02:15:23.355685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.899 02:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:33.899 02:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:09:33.899 02:15:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:33.899 02:15:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:09:33.899 02:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:09:33.899 02:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:09:33.899 02:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:09:33.899 02:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:33.899 02:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:09:33.899 02:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:33.899 02:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:09:33.899 02:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:33.899 02:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:09:33.899 02:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:09:33.899 02:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:09:33.899 [2024-07-11 02:15:24.020886] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:09:33.899 [2024-07-11 02:15:24.020950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1852856 ] 00:09:33.899 [2024-07-11 02:15:24.163051] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.899 [2024-07-11 02:15:24.214976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.899 [2024-07-11 02:15:24.215066] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
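The failure above needs nothing more than two targets contending for the default RPC socket: the first instance binds /var/tmp/spdk.sock, so the second cannot start its RPC server and spdk_app_stop exits non-zero. A minimal sketch of the scenario, reusing the NOT helper sketched earlier and a crude sleep where the suite uses waitforlisten:

/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 &    # binds /var/tmp/spdk.sock
first_pid=$!
sleep 1                                                                        # stand-in for waitforlisten
NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2  # must exit: socket in use
kill "$first_pid"
wait "$first_pid"
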
00:09:33.899 [2024-07-11 02:15:24.215091] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:33.899 [2024-07-11 02:15:24.215106] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:33.899 02:15:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:09:33.899 02:15:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:33.899 02:15:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:09:33.899 02:15:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:09:33.899 02:15:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:09:33.899 02:15:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:33.899 02:15:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:33.899 02:15:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1852678 00:09:33.899 02:15:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1852678 ']' 00:09:33.899 02:15:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1852678 00:09:33.899 02:15:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:09:33.899 02:15:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:33.899 02:15:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1852678 00:09:34.159 02:15:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:34.159 02:15:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:34.159 02:15:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1852678' 00:09:34.159 killing process with pid 1852678 00:09:34.159 02:15:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1852678 00:09:34.159 02:15:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1852678 00:09:34.418 00:09:34.418 real 0m1.753s 00:09:34.418 user 0m1.899s 00:09:34.418 sys 0m0.704s 00:09:34.418 02:15:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:34.418 02:15:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:34.418 ************************************ 00:09:34.418 END TEST exit_on_failed_rpc_init 00:09:34.418 ************************************ 00:09:34.418 02:15:24 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:34.418 02:15:24 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:09:34.418 00:09:34.418 real 0m14.652s 00:09:34.418 user 0m13.677s 00:09:34.418 sys 0m2.310s 00:09:34.418 02:15:24 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:34.418 02:15:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.418 ************************************ 00:09:34.418 END TEST skip_rpc 00:09:34.419 ************************************ 00:09:34.419 02:15:24 -- common/autotest_common.sh@1142 -- # return 0 00:09:34.419 02:15:24 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:09:34.419 02:15:24 -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:34.419 02:15:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:34.419 02:15:24 -- common/autotest_common.sh@10 -- # set +x 00:09:34.678 ************************************ 00:09:34.678 START TEST rpc_client 00:09:34.678 ************************************ 00:09:34.678 02:15:24 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:09:34.678 * Looking for test storage... 00:09:34.678 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client 00:09:34.678 02:15:24 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:09:34.678 OK 00:09:34.678 02:15:24 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:34.678 00:09:34.678 real 0m0.144s 00:09:34.678 user 0m0.052s 00:09:34.678 sys 0m0.102s 00:09:34.678 02:15:24 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:34.678 02:15:24 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:34.678 ************************************ 00:09:34.678 END TEST rpc_client 00:09:34.678 ************************************ 00:09:34.678 02:15:25 -- common/autotest_common.sh@1142 -- # return 0 00:09:34.678 02:15:25 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:09:34.678 02:15:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:34.678 02:15:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:34.678 02:15:25 -- common/autotest_common.sh@10 -- # set +x 00:09:34.678 ************************************ 00:09:34.678 START TEST json_config 00:09:34.678 ************************************ 00:09:34.678 02:15:25 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:09:34.938 02:15:25 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:09:34.938 02:15:25 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:34.938 02:15:25 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.938 02:15:25 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.938 02:15:25 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.938 02:15:25 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.938 02:15:25 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.938 02:15:25 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.938 02:15:25 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.938 02:15:25 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.938 02:15:25 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.938 02:15:25 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.938 02:15:25 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:09:34.938 02:15:25 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:09:34.938 02:15:25 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.938 02:15:25 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.938 02:15:25 json_config -- 
nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:34.938 02:15:25 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.938 02:15:25 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:09:34.938 02:15:25 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.938 02:15:25 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.938 02:15:25 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.938 02:15:25 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.938 02:15:25 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.938 02:15:25 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.938 02:15:25 json_config -- paths/export.sh@5 -- # export PATH 00:09:34.938 02:15:25 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.938 02:15:25 json_config -- nvmf/common.sh@47 -- # : 0 00:09:34.938 02:15:25 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:34.938 02:15:25 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:34.938 02:15:25 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.938 02:15:25 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.938 02:15:25 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.938 02:15:25 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:34.938 02:15:25 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:34.938 02:15:25 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:34.938 02:15:25 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:09:34.938 02:15:25 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:34.938 02:15:25 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:34.938 
02:15:25 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:34.938 02:15:25 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:34.938 02:15:25 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:09:34.938 02:15:25 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:09:34.938 02:15:25 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:34.939 02:15:25 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:09:34.939 02:15:25 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:34.939 02:15:25 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:09:34.939 02:15:25 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json') 00:09:34.939 02:15:25 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:09:34.939 02:15:25 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:09:34.939 02:15:25 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:34.939 02:15:25 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:09:34.939 INFO: JSON configuration test init 00:09:34.939 02:15:25 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:09:34.939 02:15:25 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:09:34.939 02:15:25 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:34.939 02:15:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:34.939 02:15:25 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:09:34.939 02:15:25 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:34.939 02:15:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:34.939 02:15:25 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:09:34.939 02:15:25 json_config -- json_config/common.sh@9 -- # local app=target 00:09:34.939 02:15:25 json_config -- json_config/common.sh@10 -- # shift 00:09:34.939 02:15:25 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:34.939 02:15:25 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:34.939 02:15:25 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:34.939 02:15:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:34.939 02:15:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:34.939 02:15:25 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1853136 00:09:34.939 02:15:25 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:34.939 Waiting for target to run... 
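Every tgt_rpc call in the rest of this log is the same pattern: rpc.py pointed at the target's private socket declared in app_socket above. A minimal sketch of that wrapper plus one of the calls traced later in this run (the waitforlisten polling loop is reduced to a sleep here):

app_socket=/var/tmp/spdk_tgt.sock
tgt_rpc() {
    /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s "$app_socket" "$@"
}
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r "$app_socket" --wait-for-rpc &
sleep 1    # stand-in for waitforlisten on $app_socket
# feed a generated bdev config over the socket, as json_config.sh does below:
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems | tgt_rpc load_config
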
00:09:34.939 02:15:25 json_config -- json_config/common.sh@25 -- # waitforlisten 1853136 /var/tmp/spdk_tgt.sock 00:09:34.939 02:15:25 json_config -- common/autotest_common.sh@829 -- # '[' -z 1853136 ']' 00:09:34.939 02:15:25 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:34.939 02:15:25 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:34.939 02:15:25 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:34.939 02:15:25 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:34.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:34.939 02:15:25 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:34.939 02:15:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:34.939 [2024-07-11 02:15:25.280609] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:09:34.939 [2024-07-11 02:15:25.280684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1853136 ] 00:09:35.507 [2024-07-11 02:15:25.851697] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.507 [2024-07-11 02:15:25.887247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.075 02:15:26 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:36.075 02:15:26 json_config -- common/autotest_common.sh@862 -- # return 0 00:09:36.075 02:15:26 json_config -- json_config/common.sh@26 -- # echo '' 00:09:36.075 00:09:36.075 02:15:26 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:09:36.075 02:15:26 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:09:36.075 02:15:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:36.075 02:15:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:36.075 02:15:26 json_config -- json_config/json_config.sh@95 -- # [[ 1 -eq 1 ]] 00:09:36.075 02:15:26 json_config -- json_config/json_config.sh@96 -- # tgt_rpc dpdk_cryptodev_scan_accel_module 00:09:36.075 02:15:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock dpdk_cryptodev_scan_accel_module 00:09:36.075 02:15:26 json_config -- json_config/json_config.sh@97 -- # tgt_rpc accel_assign_opc -o encrypt -m dpdk_cryptodev 00:09:36.075 02:15:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o encrypt -m dpdk_cryptodev 00:09:36.335 [2024-07-11 02:15:26.689731] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:09:36.335 02:15:26 json_config -- json_config/json_config.sh@98 -- # tgt_rpc accel_assign_opc -o decrypt -m dpdk_cryptodev 00:09:36.335 02:15:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o decrypt -m dpdk_cryptodev 00:09:36.594 [2024-07-11 02:15:26.938373] accel_rpc.c: 
167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:09:36.594 02:15:26 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:09:36.594 02:15:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:36.594 02:15:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:36.594 02:15:27 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:36.594 02:15:27 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:09:36.594 02:15:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:36.855 [2024-07-11 02:15:27.252136] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:09:42.127 02:15:32 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:09:42.127 02:15:32 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:09:42.127 02:15:32 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:42.127 02:15:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:42.127 02:15:32 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:09:42.127 02:15:32 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:42.127 02:15:32 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:09:42.127 02:15:32 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:09:42.127 02:15:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:42.127 02:15:32 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:09:42.386 02:15:32 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:09:42.386 02:15:32 json_config -- json_config/json_config.sh@48 -- # local get_types 00:09:42.386 02:15:32 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:09:42.386 02:15:32 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:09:42.387 02:15:32 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:42.387 02:15:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:42.646 02:15:32 json_config -- json_config/json_config.sh@55 -- # return 0 00:09:42.646 02:15:32 json_config -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:09:42.646 02:15:32 json_config -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:09:42.646 02:15:32 json_config -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:09:42.646 02:15:32 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:42.646 02:15:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:42.646 02:15:32 json_config -- json_config/json_config.sh@107 -- # expected_notifications=() 00:09:42.646 02:15:32 json_config -- json_config/json_config.sh@107 -- # local expected_notifications 00:09:42.646 02:15:32 json_config -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:09:42.646 02:15:32 json_config -- json_config/json_config.sh@111 -- # 
get_notifications 00:09:42.646 02:15:32 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:09:42.646 02:15:32 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:42.646 02:15:32 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:42.646 02:15:32 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:09:42.646 02:15:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:42.646 02:15:32 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:42.905 02:15:33 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:09:42.905 02:15:33 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:42.905 02:15:33 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:42.905 02:15:33 json_config -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:09:42.905 02:15:33 json_config -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:09:42.905 02:15:33 json_config -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:09:42.905 02:15:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:09:42.905 Nvme0n1p0 Nvme0n1p1 00:09:42.905 02:15:33 json_config -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:09:42.905 02:15:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:09:43.164 [2024-07-11 02:15:33.541081] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:43.164 [2024-07-11 02:15:33.541134] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:43.164 00:09:43.164 02:15:33 json_config -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:09:43.164 02:15:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:09:43.423 Malloc3 00:09:43.423 02:15:33 json_config -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:43.423 02:15:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:43.683 [2024-07-11 02:15:34.030473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:43.683 [2024-07-11 02:15:34.030525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.683 [2024-07-11 02:15:34.030546] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x195c830 00:09:43.683 [2024-07-11 02:15:34.030559] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.683 [2024-07-11 02:15:34.032140] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.683 [2024-07-11 02:15:34.032172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:09:43.683 PTBdevFromMalloc3 00:09:43.683 02:15:34 json_config 
-- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:09:43.683 02:15:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:09:44.251 Null0 00:09:44.251 02:15:34 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:09:44.251 02:15:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:09:44.510 Malloc0 00:09:44.510 02:15:34 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:09:44.510 02:15:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:09:45.078 Malloc1 00:09:45.078 02:15:35 json_config -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:09:45.078 02:15:35 json_config -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:09:45.338 102400+0 records in 00:09:45.338 102400+0 records out 00:09:45.338 104857600 bytes (105 MB, 100 MiB) copied, 0.305872 s, 343 MB/s 00:09:45.338 02:15:35 json_config -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:09:45.338 02:15:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:09:45.907 aio_disk 00:09:45.907 02:15:36 json_config -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:09:45.907 02:15:36 json_config -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:09:45.907 02:15:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:09:54.033 cae05605-739a-4060-877b-6b8ec5509828 00:09:54.033 02:15:43 json_config -- json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:09:54.033 02:15:43 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:09:54.033 02:15:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:09:54.033 02:15:43 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:09:54.033 02:15:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:09:54.033 02:15:44 json_config -- json_config/json_config.sh@154 -- # tgt_rpc 
bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:09:54.033 02:15:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:09:54.033 02:15:44 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:09:54.033 02:15:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:09:54.292 02:15:44 json_config -- json_config/json_config.sh@157 -- # [[ 1 -eq 1 ]] 00:09:54.292 02:15:44 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:09:54.292 02:15:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:09:54.861 MallocForCryptoBdev 00:09:54.861 02:15:45 json_config -- json_config/json_config.sh@159 -- # lspci -d:37c8 00:09:54.861 02:15:45 json_config -- json_config/json_config.sh@159 -- # wc -l 00:09:54.861 02:15:45 json_config -- json_config/json_config.sh@159 -- # [[ 3 -eq 0 ]] 00:09:54.861 02:15:45 json_config -- json_config/json_config.sh@162 -- # local crypto_driver=crypto_qat 00:09:54.861 02:15:45 json_config -- json_config/json_config.sh@165 -- # tgt_rpc bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:09:54.861 02:15:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:09:55.430 [2024-07-11 02:15:45.703173] vbdev_crypto_rpc.c: 136:rpc_bdev_crypto_create: *WARNING*: "crypto_pmd" parameters is obsolete and ignored 00:09:55.430 CryptoMallocBdev 00:09:55.430 02:15:45 json_config -- json_config/json_config.sh@169 -- # expected_notifications+=(bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev) 00:09:55.430 02:15:45 json_config -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:09:55.430 02:15:45 json_config -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:04da0b5f-cf33-4f8d-aa34-638a450975fe bdev_register:354272b4-37a6-478c-aeab-5df00cd11699 bdev_register:6a52c3aa-e400-493d-b890-a8ea51c040ff bdev_register:1aa0dd8f-3db4-49ad-9bad-c91fdd077d6f bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:09:55.430 02:15:45 json_config -- json_config/json_config.sh@67 -- # local events_to_check 00:09:55.430 02:15:45 json_config -- json_config/json_config.sh@68 -- # local recorded_events 00:09:55.430 02:15:45 json_config -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:09:55.430 02:15:45 json_config -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 
bdev_register:Malloc1 bdev_register:aio_disk bdev_register:04da0b5f-cf33-4f8d-aa34-638a450975fe bdev_register:354272b4-37a6-478c-aeab-5df00cd11699 bdev_register:6a52c3aa-e400-493d-b890-a8ea51c040ff bdev_register:1aa0dd8f-3db4-49ad-9bad-c91fdd077d6f bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:09:55.430 02:15:45 json_config -- json_config/json_config.sh@71 -- # sort 00:09:55.430 02:15:45 json_config -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:09:55.430 02:15:45 json_config -- json_config/json_config.sh@72 -- # get_notifications 00:09:55.430 02:15:45 json_config -- json_config/json_config.sh@72 -- # sort 00:09:55.431 02:15:45 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:09:55.431 02:15:45 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:55.431 02:15:45 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:55.431 02:15:45 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:55.431 02:15:45 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:09:55.431 02:15:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc3 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:55.691 02:15:45 
json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:354272b4-37a6-478c-aeab-5df00cd11699 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:04da0b5f-cf33-4f8d-aa34-638a450975fe 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:6a52c3aa-e400-493d-b890-a8ea51c040ff 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:1aa0dd8f-3db4-49ad-9bad-c91fdd077d6f 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:MallocForCryptoBdev 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:CryptoMallocBdev 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@74 -- # [[ bdev_register:04da0b5f-cf33-4f8d-aa34-638a450975fe bdev_register:1aa0dd8f-3db4-49ad-9bad-c91fdd077d6f bdev_register:354272b4-37a6-478c-aeab-5df00cd11699 bdev_register:6a52c3aa-e400-493d-b890-a8ea51c040ff bdev_register:aio_disk bdev_register:CryptoMallocBdev bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev 
bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\0\4\d\a\0\b\5\f\-\c\f\3\3\-\4\f\8\d\-\a\a\3\4\-\6\3\8\a\4\5\0\9\7\5\f\e\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\1\a\a\0\d\d\8\f\-\3\d\b\4\-\4\9\a\d\-\9\b\a\d\-\c\9\1\f\d\d\0\7\7\d\6\f\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\3\5\4\2\7\2\b\4\-\3\7\a\6\-\4\7\8\c\-\a\e\a\b\-\5\d\f\0\0\c\d\1\1\6\9\9\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\6\a\5\2\c\3\a\a\-\e\4\0\0\-\4\9\3\d\-\b\8\9\0\-\a\8\e\a\5\1\c\0\4\0\f\f\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\C\r\y\p\t\o\M\a\l\l\o\c\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\F\o\r\C\r\y\p\t\o\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3 ]]
00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@86 -- # cat
00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:04da0b5f-cf33-4f8d-aa34-638a450975fe bdev_register:1aa0dd8f-3db4-49ad-9bad-c91fdd077d6f bdev_register:354272b4-37a6-478c-aeab-5df00cd11699 bdev_register:6a52c3aa-e400-493d-b890-a8ea51c040ff bdev_register:aio_disk bdev_register:CryptoMallocBdev bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3
00:09:55.691 Expected events matched:
00:09:55.691 bdev_register:04da0b5f-cf33-4f8d-aa34-638a450975fe
00:09:55.691 bdev_register:1aa0dd8f-3db4-49ad-9bad-c91fdd077d6f
00:09:55.691 bdev_register:354272b4-37a6-478c-aeab-5df00cd11699
00:09:55.691 bdev_register:6a52c3aa-e400-493d-b890-a8ea51c040ff
00:09:55.691 bdev_register:aio_disk
00:09:55.691 bdev_register:CryptoMallocBdev
00:09:55.691 bdev_register:Malloc0
00:09:55.691 bdev_register:Malloc0p0
00:09:55.691 bdev_register:Malloc0p1
00:09:55.691 bdev_register:Malloc0p2
00:09:55.691 bdev_register:Malloc1
00:09:55.691 bdev_register:Malloc3
00:09:55.691 bdev_register:MallocForCryptoBdev
00:09:55.691 bdev_register:Null0
00:09:55.691 bdev_register:Nvme0n1
00:09:55.691 bdev_register:Nvme0n1p0
00:09:55.691 bdev_register:Nvme0n1p1
00:09:55.691 bdev_register:PTBdevFromMalloc3
00:09:55.691 02:15:45 json_config -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config
00:09:55.691 02:15:45 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:09:55.691 02:15:45 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:55.691 02:15:46 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]]
00:09:55.691 02:15:46 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]]
00:09:55.691 02:15:46 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]]
00:09:55.691 02:15:46 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target
00:09:55.691 02:15:46 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:09:55.691 02:15:46 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:55.691 02:15:46 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]]
00:09:55.691 02:15:46 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:09:55.691 02:15:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:09:55.951 MallocBdevForConfigChangeCheck
00:09:55.951 02:15:46 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init
00:09:55.951 02:15:46 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:09:55.951 02:15:46 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:55.951 02:15:46 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config
00:09:55.951 02:15:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:09:56.521 02:15:46 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...'
00:09:56.521 INFO: shutting down applications...
00:09:56.521 02:15:46 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]]
00:09:56.521 02:15:46 json_config -- json_config/json_config.sh@368 -- # json_config_clear target
00:09:56.521 02:15:46 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]]
00:09:56.521 02:15:46 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:09:56.521 [2024-07-11 02:15:46.934989] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test
00:10:00.717 Calling clear_iscsi_subsystem
00:10:00.717 Calling clear_nvmf_subsystem
00:10:00.717 Calling clear_nbd_subsystem
00:10:00.717 Calling clear_ublk_subsystem
00:10:00.717 Calling clear_vhost_blk_subsystem
00:10:00.717 Calling clear_vhost_scsi_subsystem
00:10:00.717 Calling clear_bdev_subsystem
00:10:00.717 02:15:51 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py
00:10:00.717 02:15:51 json_config -- json_config/json_config.sh@343 -- # count=100
00:10:00.717 02:15:51 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']'
00:10:00.717 02:15:51 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:10:00.717 02:15:51 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:10:00.717 02:15:51 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:10:01.286 02:15:51 json_config -- json_config/json_config.sh@345 -- # break
00:10:01.286 02:15:51 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']'
00:10:01.286 02:15:51 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target
00:10:01.286 02:15:51 json_config -- json_config/common.sh@31 -- # local app=target
00:10:01.286 02:15:51 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:10:01.286 02:15:51 json_config -- json_config/common.sh@35 -- # [[ -n 1853136 ]]
00:10:01.286 02:15:51 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1853136
00:10:01.286 02:15:51 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:10:01.286 02:15:51 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:10:01.286 02:15:51 json_config -- json_config/common.sh@41 -- # kill -0 1853136
00:10:01.286 02:15:51 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:10:01.546 02:15:51 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:10:01.546 02:15:51 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:10:01.546 02:15:51 json_config -- json_config/common.sh@41 -- # kill -0 1853136
00:10:01.546 02:15:51 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:10:01.546 02:15:51 json_config -- json_config/common.sh@43 -- # break
00:10:01.546 02:15:51 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:10:01.546 02:15:51 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:10:01.546 SPDK target shutdown done
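The shutdown just traced follows a simple pattern: send SIGINT to the target, then poll with kill -0 until the process exits or a retry budget runs out. As a minimal standalone sketch of that pattern (the pid is the one from this run; the 30-iteration / 0.5 s budget mirrors json_config/common.sh):

    #!/usr/bin/env bash
    # Ask the SPDK target to shut down cleanly, then poll for its exit.
    pid=1853136
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        # kill -0 only checks whether the process still exists.
        kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; exit 0; }
        sleep 0.5
    done
    echo "target $pid did not exit in time" >&2
    exit 1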
00:10:01.546 02:15:51 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...'
00:10:01.546 INFO: relaunching applications...
00:10:01.546 02:15:51 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json
00:10:01.546 02:15:51 json_config -- json_config/common.sh@9 -- # local app=target
00:10:01.546 02:15:51 json_config -- json_config/common.sh@10 -- # shift
00:10:01.546 02:15:51 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:10:01.546 02:15:51 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:10:01.546 02:15:51 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:10:01.546 02:15:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:10:01.546 02:15:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:10:01.546 02:15:51 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1856783
00:10:01.546 02:15:51 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:10:01.546 Waiting for target to run...
00:10:01.546 02:15:51 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json
00:10:01.546 02:15:51 json_config -- json_config/common.sh@25 -- # waitforlisten 1856783 /var/tmp/spdk_tgt.sock
00:10:01.546 02:15:51 json_config -- common/autotest_common.sh@829 -- # '[' -z 1856783 ']'
00:10:01.546 02:15:51 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:10:01.546 02:15:51 json_config -- common/autotest_common.sh@834 -- # local max_retries=100
00:10:01.546 02:15:51 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:10:01.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:10:01.546 02:15:51 json_config -- common/autotest_common.sh@838 -- # xtrace_disable
00:10:01.546 02:15:51 json_config -- common/autotest_common.sh@10 -- # set +x
00:10:01.806 [2024-07-11 02:15:51.999955] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
00:10:01.806 [2024-07-11 02:15:52.000042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856783 ]
00:10:02.375 [2024-07-11 02:15:52.638713] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:02.375 [2024-07-11 02:15:52.672570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:02.375 [2024-07-11 02:15:52.726728] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:10:02.375 [2024-07-11 02:15:52.734771] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:10:02.375 [2024-07-11 02:15:52.742788] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:10:02.635 [2024-07-11 02:15:52.823974] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97
00:10:05.174 [2024-07-11 02:15:55.173802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:05.174 [2024-07-11 02:15:55.173871] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:10:05.174 [2024-07-11 02:15:55.173885] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:10:05.174 [2024-07-11 02:15:55.181819] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1
00:10:05.174 [2024-07-11 02:15:55.181853] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1
00:10:05.174 [2024-07-11 02:15:55.189826] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:10:05.174 [2024-07-11 02:15:55.189851] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:10:05.174 [2024-07-11 02:15:55.197861] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "CryptoMallocBdev_AES_CBC"
00:10:05.174 [2024-07-11 02:15:55.197886] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: MallocForCryptoBdev
00:10:05.174 [2024-07-11 02:15:55.197899] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:10:07.710 [2024-07-11 02:15:58.093984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:07.710 [2024-07-11 02:15:58.094034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:07.710 [2024-07-11 02:15:58.094051] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f22dc0
00:10:07.710 [2024-07-11 02:15:58.094063] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:07.710 [2024-07-11 02:15:58.094335] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:07.710 [2024-07-11 02:15:58.094352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3
00:10:08.276 02:15:58 json_config -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:10:08.276 02:15:58 json_config -- common/autotest_common.sh@862 -- # return 0
00:10:08.276 02:15:58 json_config -- json_config/common.sh@26 -- # echo ''
00:10:08.276
00:10:08.276 02:15:58 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]]
00:10:08.276 02:15:58 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...'
00:10:08.276 INFO: Checking if target configuration is the same...
00:10:08.276 02:15:58 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json
00:10:08.276 02:15:58 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config
00:10:08.276 02:15:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:10:08.276 + '[' 2 -ne 2 ']'
00:10:08.276 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh
00:10:08.276 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../..
00:10:08.276 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk
00:10:08.276 +++ basename /dev/fd/62
00:10:08.276 ++ mktemp /tmp/62.XXX
00:10:08.276 + tmp_file_1=/tmp/62.WFE
00:10:08.276 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json
00:10:08.276 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:10:08.276 + tmp_file_2=/tmp/spdk_tgt_config.json.sfy
00:10:08.276 + ret=0
00:10:08.276 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:10:08.595 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:10:08.854 + diff -u /tmp/62.WFE /tmp/spdk_tgt_config.json.sfy
00:10:08.854 + echo 'INFO: JSON config files are the same'
00:10:08.854 INFO: JSON config files are the same
00:10:08.854 + rm /tmp/62.WFE /tmp/spdk_tgt_config.json.sfy
00:10:08.854 + exit 0
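The comparison json_diff.sh traces above reduces to: dump the live configuration over RPC, normalize both sides with the repo's sort filter, and diff them. A condensed sketch of that flow (paths and helper names as used in this run; the exact plumbing through /dev/fd/62 is elided):

    #!/usr/bin/env bash
    # Compare the target's live JSON config against the saved reference.
    rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk
    sock=/var/tmp/spdk_tgt.sock
    live=$(mktemp /tmp/62.XXX)
    ref=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    "$rootdir/scripts/rpc.py" -s "$sock" save_config \
        | "$rootdir/test/json_config/config_filter.py" -method sort > "$live"
    "$rootdir/test/json_config/config_filter.py" -method sort \
        < "$rootdir/spdk_tgt_config.json" > "$ref"
    if diff -u "$live" "$ref"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm "$live" "$ref"

Because both dumps are sorted before the diff, ordering differences in the emitted JSON do not register as configuration changes; only real additions or deletions (such as the MallocBdevForConfigChangeCheck bdev removed below) flip the result.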
00:10:08.854 02:15:59 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]]
00:10:08.854 02:15:59 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:10:08.854 INFO: changing configuration and checking if this can be detected...
00:10:08.854 02:15:59 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:10:08.854 02:15:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:10:09.163 02:15:59 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json
00:10:09.163 02:15:59 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config
00:10:09.163 02:15:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:10:09.163 + '[' 2 -ne 2 ']'
00:10:09.163 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh
00:10:09.163 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../..
00:10:09.163 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk
00:10:09.163 +++ basename /dev/fd/62
00:10:09.163 ++ mktemp /tmp/62.XXX
00:10:09.163 + tmp_file_1=/tmp/62.CFw
00:10:09.163 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json
00:10:09.163 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:10:09.163 + tmp_file_2=/tmp/spdk_tgt_config.json.s8s
00:10:09.163 + ret=0
00:10:09.163 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:10:09.423 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:10:09.423 + diff -u /tmp/62.CFw /tmp/spdk_tgt_config.json.s8s
00:10:09.423 + ret=1
00:10:09.423 + echo '=== Start of file: /tmp/62.CFw ==='
00:10:09.423 + cat /tmp/62.CFw
00:10:09.423 + echo '=== End of file: /tmp/62.CFw ==='
00:10:09.423 + echo ''
00:10:09.423 + echo '=== Start of file: /tmp/spdk_tgt_config.json.s8s ==='
00:10:09.423 + cat /tmp/spdk_tgt_config.json.s8s
00:10:09.423 + echo '=== End of file: /tmp/spdk_tgt_config.json.s8s ==='
00:10:09.423 + echo ''
00:10:09.423 + rm /tmp/62.CFw /tmp/spdk_tgt_config.json.s8s
00:10:09.423 + exit 1
00:10:09.423 02:15:59 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.'
00:10:09.423 INFO: configuration change detected.
00:10:09.423 02:15:59 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini
00:10:09.423 02:15:59 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini
00:10:09.423 02:15:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:10:09.423 02:15:59 json_config -- common/autotest_common.sh@10 -- # set +x
00:10:09.423 02:15:59 json_config -- json_config/json_config.sh@307 -- # local ret=0
00:10:09.423 02:15:59 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]]
00:10:09.423 02:15:59 json_config -- json_config/json_config.sh@317 -- # [[ -n 1856783 ]]
00:10:09.423 02:15:59 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config
00:10:09.423 02:15:59 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config
00:10:09.423 02:15:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:10:09.423 02:15:59 json_config -- common/autotest_common.sh@10 -- # set +x
00:10:09.423 02:15:59 json_config -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]]
00:10:09.423 02:15:59 json_config -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0
00:10:09.423 02:15:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0
00:10:09.682 02:15:59 json_config -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0
00:10:09.682 02:15:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0
00:10:09.940 02:16:00 json_config -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0
00:10:09.940 02:16:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0
00:10:10.199 02:16:00 json_config -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test
00:10:10.199 02:16:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test
00:10:10.461 02:16:00 json_config -- json_config/json_config.sh@193 -- # uname -s
00:10:10.461 02:16:00 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]]
00:10:10.461 02:16:00 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio
00:10:10.461 02:16:00 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]]
00:10:10.461 02:16:00 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config
00:10:10.461 02:16:00 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:10:10.461 02:16:00 json_config -- common/autotest_common.sh@10 -- # set +x
00:10:10.461 02:16:00 json_config -- json_config/json_config.sh@323 -- # killprocess 1856783
00:10:10.461 02:16:00 json_config -- common/autotest_common.sh@948 -- # '[' -z 1856783 ']'
00:10:10.461 02:16:00 json_config -- common/autotest_common.sh@952 -- # kill -0 1856783
00:10:10.462 02:16:00 json_config -- common/autotest_common.sh@953 -- # uname
00:10:10.462 02:16:00 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:10:10.462 02:16:00 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1856783
00:10:10.462 02:16:00 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:10:10.462 02:16:00 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:10:10.462 02:16:00 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1856783'
00:10:10.462 killing process with pid 1856783
00:10:10.462 02:16:00 json_config -- common/autotest_common.sh@967 -- # kill 1856783
00:10:10.462 02:16:00 json_config -- common/autotest_common.sh@972 -- # wait 1856783
00:10:14.651 02:16:04 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json
00:10:14.651 02:16:04 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini
00:10:14.651 02:16:04 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:10:14.651 02:16:04 json_config -- common/autotest_common.sh@10 -- # set +x
00:10:14.651 02:16:04 json_config -- json_config/json_config.sh@328 -- # return 0
00:10:14.651 02:16:04 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success'
00:10:14.651 INFO: Success
00:10:14.651
00:10:14.651 real 0m39.792s
00:10:14.651 user 0m45.312s
00:10:14.651 sys 0m4.949s
00:10:14.651 02:16:04 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:14.651 02:16:04 json_config -- common/autotest_common.sh@10 -- # set +x
00:10:14.651 ************************************
00:10:14.651 END TEST json_config
00:10:14.651 ************************************
00:10:14.651 02:16:04 -- common/autotest_common.sh@1142 -- # return 0
00:10:14.651 02:16:04 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:10:14.651 02:16:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:10:14.651 02:16:04 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:14.651 02:16:04 -- common/autotest_common.sh@10 -- # set +x
00:10:14.651 ************************************
00:10:14.651 START TEST json_config_extra_key
00:10:14.651 ************************************
00:10:14.651 02:16:04 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:10:14.651 02:16:05 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh
00:10:14.651 02:16:05 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:10:14.651 02:16:05 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:14.651 02:16:05 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:14.651 02:16:05 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:14.651 02:16:05 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:14.651 02:16:05 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:14.651 02:16:05 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:14.651 02:16:05 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:14.651 02:16:05 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:14.651 02:16:05 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:14.651 02:16:05 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:14.651 02:16:05 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562
00:10:14.651 02:16:05 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562
00:10:14.651 02:16:05 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:14.651 02:16:05 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:14.651 02:16:05 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:10:14.651 02:16:05 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:14.651 02:16:05 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh
00:10:14.651 02:16:05 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:14.651 02:16:05 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:14.651 02:16:05 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:14.651 02:16:05 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:14.651 02:16:05 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:14.651 02:16:05 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:14.651 02:16:05 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:10:14.651 02:16:05 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:14.651 02:16:05 json_config_extra_key -- nvmf/common.sh@47 -- # : 0
00:10:14.651 02:16:05 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:10:14.651 02:16:05 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:10:14.651 02:16:05 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:14.651 02:16:05 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:14.651 02:16:05 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:14.651 02:16:05 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:10:14.651 02:16:05 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:10:14.651 02:16:05 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0
00:10:14.651 02:16:05 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh
00:10:14.651 02:16:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:10:14.651 02:16:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:10:14.651 02:16:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:10:14.651 02:16:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:10:14.651 02:16:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:10:14.651 02:16:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:10:14.651 02:16:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json')
00:10:14.937 02:16:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:10:14.937 02:16:05 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:10:14.937 02:16:05 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:10:14.937 INFO: launching applications...
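The launch traced next starts spdk_tgt with the extra_key.json config and waits for its RPC socket. Reduced to its essentials (binary, flags, and socket path as in this run; the poll loop is an illustration of the wait, not the exact waitforlisten implementation):

    #!/usr/bin/env bash
    # Launch spdk_tgt with a JSON config and wait until its RPC socket answers.
    rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk
    sock=/var/tmp/spdk_tgt.sock
    "$rootdir/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$sock" \
        --json "$rootdir/test/json_config/extra_key.json" &
    pid=$!
    # Poll the socket; rpc_get_methods succeeds once the app is listening.
    for ((i = 0; i < 100; i++)); do
        "$rootdir/scripts/rpc.py" -t 1 -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.5
    done
    echo "target running as pid $pid"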
00:10:14.937 02:16:05 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json
00:10:14.937 02:16:05 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:10:14.937 02:16:05 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:10:14.937 02:16:05 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:10:14.937 02:16:05 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:10:14.937 02:16:05 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:10:14.937 02:16:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:10:14.937 02:16:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:10:14.937 02:16:05 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1858571
00:10:14.937 02:16:05 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:10:14.937 Waiting for target to run...
00:10:14.937 02:16:05 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1858571 /var/tmp/spdk_tgt.sock
00:10:14.937 02:16:05 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1858571 ']'
00:10:14.937 02:16:05 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:10:14.937 02:16:05 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json
00:10:14.937 02:16:05 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100
00:10:14.937 02:16:05 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:10:14.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:10:14.937 02:16:05 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable
00:10:14.937 02:16:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:10:15.197 [2024-07-11 02:16:05.148012] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
00:10:15.197 [2024-07-11 02:16:05.148082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1858571 ]
00:10:15.197 [2024-07-11 02:16:05.509818] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:15.197 [2024-07-11 02:16:05.539722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:15.766 02:16:06 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:10:15.766 02:16:06 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0
00:10:15.766 02:16:06 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:10:15.766
00:10:15.766 02:16:06 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:10:15.766 INFO: shutting down applications...
00:10:15.766 02:16:06 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:10:15.766 02:16:06 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:10:15.766 02:16:06 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:10:15.766 02:16:06 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1858571 ]]
00:10:15.766 02:16:06 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1858571
00:10:15.766 02:16:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:10:15.766 02:16:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:10:15.766 02:16:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1858571
00:10:15.766 02:16:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:10:16.335 02:16:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:10:16.335 02:16:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:10:16.335 02:16:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1858571
00:10:16.335 02:16:06 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:10:16.335 02:16:06 json_config_extra_key -- json_config/common.sh@43 -- # break
00:10:16.335 02:16:06 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:10:16.335 02:16:06 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:10:16.335 SPDK target shutdown done
00:10:16.335 02:16:06 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:10:16.335 Success
00:10:16.335
00:10:16.335 real 0m1.617s
00:10:16.335 user 0m1.273s
00:10:16.335 sys 0m0.516s
00:10:16.335 02:16:06 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:16.335 02:16:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:10:16.335 ************************************
00:10:16.335 END TEST json_config_extra_key
00:10:16.335 ************************************
00:10:16.335 02:16:06 -- common/autotest_common.sh@1142 -- # return 0
00:10:16.335 02:16:06 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:10:16.335 02:16:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:10:16.335 02:16:06 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:16.335 02:16:06 -- common/autotest_common.sh@10 -- # set +x
00:10:16.335 ************************************
00:10:16.335 START TEST alias_rpc
00:10:16.335 ************************************
00:10:16.335 02:16:06 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:10:16.594 * Looking for test storage...
00:10:16.594 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc
00:10:16.594 02:16:06 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:10:16.594 02:16:06 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1858874
00:10:16.594 02:16:06 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1858874
00:10:16.594 02:16:06 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
00:10:16.594 02:16:06 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1858874 ']'
00:10:16.594 02:16:06 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:16.594 02:16:06 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:10:16.594 02:16:06 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:16.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:16.594 02:16:06 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:10:16.594 02:16:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:16.594 [2024-07-11 02:16:06.840459] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
00:10:16.594 [2024-07-11 02:16:06.840530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1858874 ]
00:10:16.594 [2024-07-11 02:16:06.972147] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:16.854 [2024-07-11 02:16:07.019660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:17.422 02:16:07 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:10:17.422 02:16:07 alias_rpc -- common/autotest_common.sh@862 -- # return 0
00:10:17.422 02:16:07 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_config -i
00:10:17.681 02:16:08 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1858874
00:10:17.681 02:16:08 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1858874 ']'
00:10:17.681 02:16:08 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1858874
00:10:17.681 02:16:08 alias_rpc -- common/autotest_common.sh@953 -- # uname
00:10:17.681 02:16:08 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:10:17.681 02:16:08 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1858874
00:10:17.681 02:16:08 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:10:17.681 02:16:08 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:10:17.681 02:16:08 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1858874'
00:10:17.681 killing process with pid 1858874
00:10:17.681 02:16:08 alias_rpc -- common/autotest_common.sh@967 -- # kill 1858874
00:10:17.681 02:16:08 alias_rpc -- common/autotest_common.sh@972 -- # wait 1858874
00:10:18.248
00:10:18.248 real 0m1.791s
00:10:18.248 user 0m1.969s
00:10:18.248 sys 0m0.576s
00:10:18.248 02:16:08 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:18.248 02:16:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:18.248 ************************************
00:10:18.248 END TEST alias_rpc
00:10:18.248 ************************************
00:10:18.248 02:16:08 -- common/autotest_common.sh@1142 -- # return 0
00:10:18.248 02:16:08 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]]
00:10:18.248 02:16:08 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh
00:10:18.249 02:16:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:10:18.249 02:16:08 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:18.249 02:16:08 -- common/autotest_common.sh@10 -- # set +x
00:10:18.249 ************************************
00:10:18.249 START TEST spdkcli_tcp
00:10:18.249 ************************************
00:10:18.249 02:16:08 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh
00:10:18.249 * Looking for test storage...
00:10:18.249 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli
00:10:18.249 02:16:08 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/common.sh
00:10:18.249 02:16:08 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:10:18.249 02:16:08 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py
00:10:18.249 02:16:08 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:10:18.249 02:16:08 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:10:18.249 02:16:08 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:10:18.249 02:16:08 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:10:18.249 02:16:08 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:10:18.249 02:16:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:10:18.249 02:16:08 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1859109
00:10:18.249 02:16:08 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1859109
00:10:18.249 02:16:08 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:10:18.249 02:16:08 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1859109 ']'
00:10:18.249 02:16:08 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:18.249 02:16:08 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100
00:10:18.249 02:16:08 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:18.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:18.249 02:16:08 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable
00:10:18.249 02:16:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:10:18.507 [2024-07-11 02:16:08.731944] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
00:10:18.507 [2024-07-11 02:16:08.732011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1859109 ]
00:10:18.507 [2024-07-11 02:16:08.867964] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:18.507 [2024-07-11 02:16:08.918901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:10:18.507 [2024-07-11 02:16:08.918907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:19.445 02:16:09 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:10:19.445 02:16:09 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0
00:10:19.445 02:16:09 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1859285
00:10:19.445 02:16:09 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:10:19.445 02:16:09 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:10:19.445 [
00:10:19.445 "bdev_malloc_delete",
00:10:19.445 "bdev_malloc_create",
00:10:19.445 "bdev_null_resize",
00:10:19.445 "bdev_null_delete",
00:10:19.445 "bdev_null_create",
00:10:19.445 "bdev_nvme_cuse_unregister",
00:10:19.445 "bdev_nvme_cuse_register",
00:10:19.445 "bdev_opal_new_user",
00:10:19.445 "bdev_opal_set_lock_state",
00:10:19.445 "bdev_opal_delete",
00:10:19.445 "bdev_opal_get_info",
00:10:19.445 "bdev_opal_create",
00:10:19.445 "bdev_nvme_opal_revert",
00:10:19.445 "bdev_nvme_opal_init",
00:10:19.445 "bdev_nvme_send_cmd",
00:10:19.445 "bdev_nvme_get_path_iostat",
00:10:19.445 "bdev_nvme_get_mdns_discovery_info",
00:10:19.445 "bdev_nvme_stop_mdns_discovery",
00:10:19.445 "bdev_nvme_start_mdns_discovery",
00:10:19.445 "bdev_nvme_set_multipath_policy",
00:10:19.445 "bdev_nvme_set_preferred_path",
00:10:19.446 "bdev_nvme_get_io_paths",
00:10:19.446 "bdev_nvme_remove_error_injection",
00:10:19.446 "bdev_nvme_add_error_injection",
00:10:19.446 "bdev_nvme_get_discovery_info",
00:10:19.446 "bdev_nvme_stop_discovery",
00:10:19.446 "bdev_nvme_start_discovery",
00:10:19.446 "bdev_nvme_get_controller_health_info",
00:10:19.446 "bdev_nvme_disable_controller",
00:10:19.446 "bdev_nvme_enable_controller",
00:10:19.446 "bdev_nvme_reset_controller",
00:10:19.446 "bdev_nvme_get_transport_statistics",
00:10:19.446 "bdev_nvme_apply_firmware",
00:10:19.446 "bdev_nvme_detach_controller",
00:10:19.446 "bdev_nvme_get_controllers",
00:10:19.446 "bdev_nvme_attach_controller",
00:10:19.446 "bdev_nvme_set_hotplug",
00:10:19.446 "bdev_nvme_set_options",
00:10:19.446 "bdev_passthru_delete",
00:10:19.446 "bdev_passthru_create",
00:10:19.446 "bdev_lvol_set_parent_bdev",
00:10:19.446 "bdev_lvol_set_parent",
00:10:19.446 "bdev_lvol_check_shallow_copy",
00:10:19.446 "bdev_lvol_start_shallow_copy",
00:10:19.446 "bdev_lvol_grow_lvstore",
00:10:19.446 "bdev_lvol_get_lvols",
00:10:19.446 "bdev_lvol_get_lvstores",
00:10:19.446 "bdev_lvol_delete",
00:10:19.446 "bdev_lvol_set_read_only",
00:10:19.446 "bdev_lvol_resize",
00:10:19.446 "bdev_lvol_decouple_parent",
00:10:19.446 "bdev_lvol_inflate",
00:10:19.446 "bdev_lvol_rename",
00:10:19.446 "bdev_lvol_clone_bdev",
00:10:19.446 "bdev_lvol_clone",
00:10:19.446 "bdev_lvol_snapshot",
00:10:19.446 "bdev_lvol_create",
00:10:19.446 "bdev_lvol_delete_lvstore",
00:10:19.446 "bdev_lvol_rename_lvstore",
00:10:19.446 "bdev_lvol_create_lvstore",
00:10:19.446 "bdev_raid_set_options", 00:10:19.446 "bdev_raid_remove_base_bdev", 00:10:19.446 "bdev_raid_add_base_bdev", 00:10:19.446 "bdev_raid_delete", 00:10:19.446 "bdev_raid_create", 00:10:19.446 "bdev_raid_get_bdevs", 00:10:19.446 "bdev_error_inject_error", 00:10:19.446 "bdev_error_delete", 00:10:19.446 "bdev_error_create", 00:10:19.446 "bdev_split_delete", 00:10:19.446 "bdev_split_create", 00:10:19.446 "bdev_delay_delete", 00:10:19.446 "bdev_delay_create", 00:10:19.446 "bdev_delay_update_latency", 00:10:19.446 "bdev_zone_block_delete", 00:10:19.446 "bdev_zone_block_create", 00:10:19.446 "blobfs_create", 00:10:19.446 "blobfs_detect", 00:10:19.446 "blobfs_set_cache_size", 00:10:19.446 "bdev_crypto_delete", 00:10:19.446 "bdev_crypto_create", 00:10:19.446 "bdev_compress_delete", 00:10:19.446 "bdev_compress_create", 00:10:19.446 "bdev_compress_get_orphans", 00:10:19.446 "bdev_aio_delete", 00:10:19.446 "bdev_aio_rescan", 00:10:19.446 "bdev_aio_create", 00:10:19.446 "bdev_ftl_set_property", 00:10:19.446 "bdev_ftl_get_properties", 00:10:19.446 "bdev_ftl_get_stats", 00:10:19.446 "bdev_ftl_unmap", 00:10:19.446 "bdev_ftl_unload", 00:10:19.446 "bdev_ftl_delete", 00:10:19.446 "bdev_ftl_load", 00:10:19.446 "bdev_ftl_create", 00:10:19.446 "bdev_virtio_attach_controller", 00:10:19.446 "bdev_virtio_scsi_get_devices", 00:10:19.446 "bdev_virtio_detach_controller", 00:10:19.446 "bdev_virtio_blk_set_hotplug", 00:10:19.446 "bdev_iscsi_delete", 00:10:19.446 "bdev_iscsi_create", 00:10:19.446 "bdev_iscsi_set_options", 00:10:19.446 "accel_error_inject_error", 00:10:19.446 "ioat_scan_accel_module", 00:10:19.446 "dsa_scan_accel_module", 00:10:19.446 "iaa_scan_accel_module", 00:10:19.446 "dpdk_cryptodev_get_driver", 00:10:19.446 "dpdk_cryptodev_set_driver", 00:10:19.446 "dpdk_cryptodev_scan_accel_module", 00:10:19.446 "compressdev_scan_accel_module", 00:10:19.446 "keyring_file_remove_key", 00:10:19.446 "keyring_file_add_key", 00:10:19.446 "keyring_linux_set_options", 00:10:19.446 "iscsi_get_histogram", 00:10:19.446 "iscsi_enable_histogram", 00:10:19.446 "iscsi_set_options", 00:10:19.446 "iscsi_get_auth_groups", 00:10:19.446 "iscsi_auth_group_remove_secret", 00:10:19.446 "iscsi_auth_group_add_secret", 00:10:19.446 "iscsi_delete_auth_group", 00:10:19.446 "iscsi_create_auth_group", 00:10:19.446 "iscsi_set_discovery_auth", 00:10:19.446 "iscsi_get_options", 00:10:19.446 "iscsi_target_node_request_logout", 00:10:19.446 "iscsi_target_node_set_redirect", 00:10:19.446 "iscsi_target_node_set_auth", 00:10:19.446 "iscsi_target_node_add_lun", 00:10:19.446 "iscsi_get_stats", 00:10:19.446 "iscsi_get_connections", 00:10:19.446 "iscsi_portal_group_set_auth", 00:10:19.446 "iscsi_start_portal_group", 00:10:19.446 "iscsi_delete_portal_group", 00:10:19.446 "iscsi_create_portal_group", 00:10:19.446 "iscsi_get_portal_groups", 00:10:19.446 "iscsi_delete_target_node", 00:10:19.446 "iscsi_target_node_remove_pg_ig_maps", 00:10:19.446 "iscsi_target_node_add_pg_ig_maps", 00:10:19.446 "iscsi_create_target_node", 00:10:19.446 "iscsi_get_target_nodes", 00:10:19.446 "iscsi_delete_initiator_group", 00:10:19.446 "iscsi_initiator_group_remove_initiators", 00:10:19.446 "iscsi_initiator_group_add_initiators", 00:10:19.446 "iscsi_create_initiator_group", 00:10:19.446 "iscsi_get_initiator_groups", 00:10:19.446 "nvmf_set_crdt", 00:10:19.446 "nvmf_set_config", 00:10:19.446 "nvmf_set_max_subsystems", 00:10:19.446 "nvmf_stop_mdns_prr", 00:10:19.446 "nvmf_publish_mdns_prr", 00:10:19.446 "nvmf_subsystem_get_listeners", 00:10:19.446 
"nvmf_subsystem_get_qpairs", 00:10:19.446 "nvmf_subsystem_get_controllers", 00:10:19.446 "nvmf_get_stats", 00:10:19.446 "nvmf_get_transports", 00:10:19.446 "nvmf_create_transport", 00:10:19.446 "nvmf_get_targets", 00:10:19.446 "nvmf_delete_target", 00:10:19.446 "nvmf_create_target", 00:10:19.446 "nvmf_subsystem_allow_any_host", 00:10:19.446 "nvmf_subsystem_remove_host", 00:10:19.446 "nvmf_subsystem_add_host", 00:10:19.446 "nvmf_ns_remove_host", 00:10:19.446 "nvmf_ns_add_host", 00:10:19.446 "nvmf_subsystem_remove_ns", 00:10:19.446 "nvmf_subsystem_add_ns", 00:10:19.446 "nvmf_subsystem_listener_set_ana_state", 00:10:19.446 "nvmf_discovery_get_referrals", 00:10:19.446 "nvmf_discovery_remove_referral", 00:10:19.446 "nvmf_discovery_add_referral", 00:10:19.446 "nvmf_subsystem_remove_listener", 00:10:19.446 "nvmf_subsystem_add_listener", 00:10:19.446 "nvmf_delete_subsystem", 00:10:19.446 "nvmf_create_subsystem", 00:10:19.446 "nvmf_get_subsystems", 00:10:19.446 "env_dpdk_get_mem_stats", 00:10:19.447 "nbd_get_disks", 00:10:19.447 "nbd_stop_disk", 00:10:19.447 "nbd_start_disk", 00:10:19.447 "ublk_recover_disk", 00:10:19.447 "ublk_get_disks", 00:10:19.447 "ublk_stop_disk", 00:10:19.447 "ublk_start_disk", 00:10:19.447 "ublk_destroy_target", 00:10:19.447 "ublk_create_target", 00:10:19.447 "virtio_blk_create_transport", 00:10:19.447 "virtio_blk_get_transports", 00:10:19.447 "vhost_controller_set_coalescing", 00:10:19.447 "vhost_get_controllers", 00:10:19.447 "vhost_delete_controller", 00:10:19.447 "vhost_create_blk_controller", 00:10:19.447 "vhost_scsi_controller_remove_target", 00:10:19.447 "vhost_scsi_controller_add_target", 00:10:19.447 "vhost_start_scsi_controller", 00:10:19.447 "vhost_create_scsi_controller", 00:10:19.447 "thread_set_cpumask", 00:10:19.447 "framework_get_governor", 00:10:19.447 "framework_get_scheduler", 00:10:19.447 "framework_set_scheduler", 00:10:19.447 "framework_get_reactors", 00:10:19.447 "thread_get_io_channels", 00:10:19.447 "thread_get_pollers", 00:10:19.447 "thread_get_stats", 00:10:19.447 "framework_monitor_context_switch", 00:10:19.447 "spdk_kill_instance", 00:10:19.447 "log_enable_timestamps", 00:10:19.447 "log_get_flags", 00:10:19.447 "log_clear_flag", 00:10:19.447 "log_set_flag", 00:10:19.447 "log_get_level", 00:10:19.447 "log_set_level", 00:10:19.447 "log_get_print_level", 00:10:19.447 "log_set_print_level", 00:10:19.447 "framework_enable_cpumask_locks", 00:10:19.447 "framework_disable_cpumask_locks", 00:10:19.447 "framework_wait_init", 00:10:19.447 "framework_start_init", 00:10:19.447 "scsi_get_devices", 00:10:19.447 "bdev_get_histogram", 00:10:19.447 "bdev_enable_histogram", 00:10:19.447 "bdev_set_qos_limit", 00:10:19.447 "bdev_set_qd_sampling_period", 00:10:19.447 "bdev_get_bdevs", 00:10:19.447 "bdev_reset_iostat", 00:10:19.447 "bdev_get_iostat", 00:10:19.447 "bdev_examine", 00:10:19.447 "bdev_wait_for_examine", 00:10:19.447 "bdev_set_options", 00:10:19.447 "notify_get_notifications", 00:10:19.447 "notify_get_types", 00:10:19.447 "accel_get_stats", 00:10:19.447 "accel_set_options", 00:10:19.447 "accel_set_driver", 00:10:19.447 "accel_crypto_key_destroy", 00:10:19.447 "accel_crypto_keys_get", 00:10:19.447 "accel_crypto_key_create", 00:10:19.447 "accel_assign_opc", 00:10:19.447 "accel_get_module_info", 00:10:19.447 "accel_get_opc_assignments", 00:10:19.447 "vmd_rescan", 00:10:19.447 "vmd_remove_device", 00:10:19.447 "vmd_enable", 00:10:19.447 "sock_get_default_impl", 00:10:19.447 "sock_set_default_impl", 00:10:19.447 "sock_impl_set_options", 00:10:19.447 
"sock_impl_get_options", 00:10:19.447 "iobuf_get_stats", 00:10:19.447 "iobuf_set_options", 00:10:19.447 "framework_get_pci_devices", 00:10:19.447 "framework_get_config", 00:10:19.447 "framework_get_subsystems", 00:10:19.447 "trace_get_info", 00:10:19.447 "trace_get_tpoint_group_mask", 00:10:19.447 "trace_disable_tpoint_group", 00:10:19.447 "trace_enable_tpoint_group", 00:10:19.447 "trace_clear_tpoint_mask", 00:10:19.447 "trace_set_tpoint_mask", 00:10:19.447 "keyring_get_keys", 00:10:19.447 "spdk_get_version", 00:10:19.447 "rpc_get_methods" 00:10:19.447 ] 00:10:19.447 02:16:09 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:19.447 02:16:09 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:19.447 02:16:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:19.707 02:16:09 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:19.707 02:16:09 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1859109 00:10:19.707 02:16:09 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1859109 ']' 00:10:19.707 02:16:09 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1859109 00:10:19.707 02:16:09 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:10:19.707 02:16:09 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:19.707 02:16:09 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1859109 00:10:19.707 02:16:09 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:19.707 02:16:09 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:19.707 02:16:09 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1859109' 00:10:19.707 killing process with pid 1859109 00:10:19.707 02:16:09 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1859109 00:10:19.707 02:16:09 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1859109 00:10:19.966 00:10:19.966 real 0m1.752s 00:10:19.966 user 0m3.161s 00:10:19.966 sys 0m0.618s 00:10:19.966 02:16:10 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:19.966 02:16:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:19.966 ************************************ 00:10:19.966 END TEST spdkcli_tcp 00:10:19.966 ************************************ 00:10:19.966 02:16:10 -- common/autotest_common.sh@1142 -- # return 0 00:10:19.966 02:16:10 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:19.966 02:16:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:19.966 02:16:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:19.966 02:16:10 -- common/autotest_common.sh@10 -- # set +x 00:10:19.966 ************************************ 00:10:19.966 START TEST dpdk_mem_utility 00:10:19.966 ************************************ 00:10:19.966 02:16:10 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:20.226 * Looking for test storage... 
00:10:19.966 02:16:10 -- common/autotest_common.sh@1142 -- # return 0
00:10:19.966 02:16:10 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:10:19.966 02:16:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:10:19.966 02:16:10 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:19.966 02:16:10 -- common/autotest_common.sh@10 -- # set +x
00:10:19.966 ************************************
00:10:19.966 START TEST dpdk_mem_utility
00:10:19.966 ************************************
00:10:19.966 02:16:10 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:10:20.226 * Looking for test storage...
00:10:20.226 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility
00:10:20.226 02:16:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:10:20.226 02:16:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1859460
00:10:20.226 02:16:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
00:10:20.226 02:16:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1859460
00:10:20.226 02:16:10 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1859460 ']'
00:10:20.226 02:16:10 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:20.226 02:16:10 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100
00:10:20.226 02:16:10 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:20.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:20.226 02:16:10 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable
00:10:20.226 02:16:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:10:20.486 [2024-07-11 02:16:10.566432] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
00:10:20.486 [2024-07-11 02:16:10.566499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1859460 ]
00:10:20.486 [2024-07-11 02:16:10.702699] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:20.486 [2024-07-11 02:16:10.750233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:21.055 02:16:11 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:10:21.055 02:16:11 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0
00:10:21.055 02:16:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:10:21.055 02:16:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:10:21.055 02:16:11 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:21.055 02:16:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:10:21.055 {
00:10:21.055 "filename": "/tmp/spdk_mem_dump.txt"
00:10:21.055 }
00:10:21.055 02:16:11 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:21.055 02:16:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:10:21.318 DPDK memory size 816.000000 MiB in 2 heap(s)
00:10:21.318 2 heaps totaling size 816.000000 MiB
00:10:21.318 size: 814.000000 MiB heap id: 0
00:10:21.318 size: 2.000000 MiB heap id: 1
00:10:21.318 end heaps----------
00:10:21.318 8 mempools totaling size 598.116089 MiB
00:10:21.318 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:10:21.318 size: 158.602051 MiB name: PDU_data_out_Pool
00:10:21.318 size: 84.521057 MiB name: bdev_io_1859460
00:10:21.318 size: 51.011292 MiB name: evtpool_1859460
00:10:21.318 size: 50.003479 MiB name: msgpool_1859460
00:10:21.318 size: 21.763794 MiB name: PDU_Pool
00:10:21.318 size: 19.513306 MiB name: SCSI_TASK_Pool
00:10:21.318 size: 0.026123 MiB name: Session_Pool
00:10:21.318 end mempools-------
00:10:21.318 201 memzones totaling size 4.173645 MiB
00:10:21.319 size: 1.000366 MiB name: RG_ring_0_1859460
00:10:21.319 size: 1.000366 MiB name: RG_ring_1_1859460
00:10:21.319 size: 1.000366 MiB name: RG_ring_4_1859460
00:10:21.319 size: 1.000366 MiB name: RG_ring_5_1859460
00:10:21.319 size: 0.125366 MiB name: RG_ring_2_1859460
00:10:21.319 size: 0.015991 MiB name: RG_ring_3_1859460
00:10:21.319 size: 0.001282 MiB name: QAT_SYM_CAPA_GEN_1
00:10:21.319 size: 0.000244 MiB name: 0000:3d:01.0_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3d:01.1_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3d:01.2_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3d:01.3_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3d:01.4_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3d:01.5_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3d:01.6_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3d:01.7_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3d:02.0_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3d:02.1_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3d:02.2_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3d:02.3_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3d:02.4_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3d:02.5_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3d:02.6_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3d:02.7_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3f:01.0_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3f:01.1_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3f:01.2_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3f:01.3_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3f:01.4_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3f:01.5_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3f:01.6_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3f:01.7_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3f:02.0_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3f:02.1_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3f:02.2_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3f:02.3_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3f:02.4_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3f:02.5_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3f:02.6_qat
00:10:21.319 size: 0.000244 MiB name: 0000:3f:02.7_qat
00:10:21.319 size: 0.000244 MiB name: 0000:da:01.0_qat
00:10:21.319 size: 0.000244 MiB name: 0000:da:01.1_qat
00:10:21.319 size: 0.000244 MiB name: 0000:da:01.2_qat
00:10:21.319 size: 0.000244 MiB name: 0000:da:01.3_qat
00:10:21.319 size: 0.000244 MiB name: 0000:da:01.4_qat
00:10:21.319 size: 0.000244 MiB name: 0000:da:01.5_qat
00:10:21.319 size: 0.000244 MiB name: 0000:da:01.6_qat
00:10:21.319 size: 0.000244 MiB name: 0000:da:01.7_qat
00:10:21.319 size: 0.000244 MiB name: 0000:da:02.0_qat
00:10:21.319 size: 0.000244 MiB name: 0000:da:02.1_qat
00:10:21.319 size: 0.000244 MiB name: 0000:da:02.2_qat
00:10:21.319 size: 0.000244 MiB name: 0000:da:02.3_qat
00:10:21.319 size: 0.000244 MiB name: 0000:da:02.4_qat
00:10:21.319 size: 0.000244 MiB name: 0000:da:02.5_qat
00:10:21.319 size: 0.000244 MiB name: 0000:da:02.6_qat
00:10:21.319 size: 0.000244 MiB name: 0000:da:02.7_qat
00:10:21.319 size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_0
00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_0
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_1
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_2
00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_1
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_3
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_4
00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_2
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_5
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_6
00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_3
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_7
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_8
00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_4
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_9
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_10
00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_5
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_11
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_12
00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_6
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_13
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_14
00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_7
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_15
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_16
00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_8
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_17
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_18
00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_9
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_19
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_20
00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_10
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_21
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_22
00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_11
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_23
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_24
00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_12
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_25
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_26
00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_13
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_27
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_28
00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_14
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_29
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_30
00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_15
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_31
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_32
00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_16
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_33
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_34
00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_17
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_35
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_36
00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_18
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_37
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_38
00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_19
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_39
00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_40
00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_20
00:10:21.319 size:
0.000122 MiB name: rte_cryptodev_data_41 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_42 00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_21 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_43 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_44 00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_22 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_45 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_46 00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_23 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_47 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_48 00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_24 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_49 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_50 00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_25 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_51 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_52 00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_26 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_53 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_54 00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_27 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_55 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_56 00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_28 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_57 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_58 00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_29 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_59 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_60 00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_30 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_61 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_62 00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_31 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_63 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_64 00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_32 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_65 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_66 00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_33 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_67 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_68 00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_34 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_69 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_70 00:10:21.319 size: 0.000122 MiB name: rte_compressdev_data_35 00:10:21.319 size: 0.000122 MiB name: rte_cryptodev_data_71 00:10:21.320 size: 0.000122 MiB name: rte_cryptodev_data_72 00:10:21.320 size: 0.000122 MiB name: rte_compressdev_data_36 00:10:21.320 size: 0.000122 MiB name: rte_cryptodev_data_73 00:10:21.320 size: 0.000122 MiB name: rte_cryptodev_data_74 00:10:21.320 size: 0.000122 MiB name: rte_compressdev_data_37 00:10:21.320 size: 0.000122 MiB name: rte_cryptodev_data_75 00:10:21.320 size: 0.000122 MiB name: rte_cryptodev_data_76 00:10:21.320 size: 0.000122 MiB name: rte_compressdev_data_38 00:10:21.320 size: 0.000122 MiB name: rte_cryptodev_data_77 00:10:21.320 size: 0.000122 MiB name: rte_cryptodev_data_78 00:10:21.320 size: 0.000122 MiB name: rte_compressdev_data_39 00:10:21.320 size: 0.000122 MiB name: rte_cryptodev_data_79 00:10:21.320 size: 0.000122 MiB name: 
rte_cryptodev_data_80
00:10:21.320 size: 0.000122 MiB name: rte_compressdev_data_40
00:10:21.320 size: 0.000122 MiB name: rte_cryptodev_data_81
00:10:21.320 size: 0.000122 MiB name: rte_cryptodev_data_82
00:10:21.320 size: 0.000122 MiB name: rte_compressdev_data_41
00:10:21.320 size: 0.000122 MiB name: rte_cryptodev_data_83
00:10:21.320 size: 0.000122 MiB name: rte_cryptodev_data_84
00:10:21.320 size: 0.000122 MiB name: rte_compressdev_data_42
00:10:21.320 size: 0.000122 MiB name: rte_cryptodev_data_85
00:10:21.320 size: 0.000122 MiB name: rte_cryptodev_data_86
00:10:21.320 size: 0.000122 MiB name: rte_compressdev_data_43
00:10:21.320 size: 0.000122 MiB name: rte_cryptodev_data_87
00:10:21.320 size: 0.000122 MiB name: rte_cryptodev_data_88
00:10:21.320 size: 0.000122 MiB name: rte_compressdev_data_44
00:10:21.320 size: 0.000122 MiB name: rte_cryptodev_data_89
00:10:21.320 size: 0.000122 MiB name: rte_cryptodev_data_90
00:10:21.320 size: 0.000122 MiB name: rte_compressdev_data_45
00:10:21.320 size: 0.000122 MiB name: rte_cryptodev_data_91
00:10:21.320 size: 0.000122 MiB name: rte_cryptodev_data_92
00:10:21.320 size: 0.000122 MiB name: rte_compressdev_data_46
00:10:21.320 size: 0.000122 MiB name: rte_cryptodev_data_93
00:10:21.320 size: 0.000122 MiB name: rte_cryptodev_data_94
00:10:21.320 size: 0.000122 MiB name: rte_compressdev_data_47
00:10:21.320 size: 0.000122 MiB name: rte_cryptodev_data_95
00:10:21.320 size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1
00:10:21.320 end memzones-------
00:10:21.320 02:16:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:10:21.320 heap id: 0 total size: 814.000000 MiB number of busy elements: 584 number of free elements: 14
00:10:21.320 list of free elements. size: 11.808594 MiB
00:10:21.320 element at address: 0x200000400000 with size: 1.999512 MiB
00:10:21.320 element at address: 0x200018e00000 with size: 0.999878 MiB
00:10:21.320 element at address: 0x200019000000 with size: 0.999878 MiB
00:10:21.320 element at address: 0x200003e00000 with size: 0.996460 MiB
00:10:21.320 element at address: 0x200031c00000 with size: 0.994446 MiB
00:10:21.320 element at address: 0x200013800000 with size: 0.978882 MiB
00:10:21.320 element at address: 0x200007000000 with size: 0.960022 MiB
00:10:21.320 element at address: 0x200019200000 with size: 0.937256 MiB
00:10:21.320 element at address: 0x20001aa00000 with size: 0.583252 MiB
00:10:21.320 element at address: 0x200003a00000 with size: 0.498535 MiB
00:10:21.320 element at address: 0x20000b200000 with size: 0.491272 MiB
00:10:21.320 element at address: 0x200000800000 with size: 0.486145 MiB
00:10:21.320 element at address: 0x200019400000 with size: 0.485840 MiB
00:10:21.320 element at address: 0x200027e00000 with size: 0.397217 MiB
00:10:21.320 list of standard malloc elements.
size: 199.885925 MiB 00:10:21.320 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:10:21.320 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:10:21.320 element at address: 0x200018efff80 with size: 1.000122 MiB 00:10:21.320 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:10:21.320 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:10:21.320 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:10:21.320 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:10:21.320 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:10:21.320 element at address: 0x200000332280 with size: 0.004333 MiB 00:10:21.320 element at address: 0x200000335780 with size: 0.004333 MiB 00:10:21.320 element at address: 0x200000338c80 with size: 0.004333 MiB 00:10:21.320 element at address: 0x20000033c180 with size: 0.004333 MiB 00:10:21.320 element at address: 0x20000033f680 with size: 0.004333 MiB 00:10:21.320 element at address: 0x200000342b80 with size: 0.004333 MiB 00:10:21.320 element at address: 0x200000346080 with size: 0.004333 MiB 00:10:21.320 element at address: 0x200000349580 with size: 0.004333 MiB 00:10:21.320 element at address: 0x20000034ca80 with size: 0.004333 MiB 00:10:21.320 element at address: 0x20000034ff80 with size: 0.004333 MiB 00:10:21.320 element at address: 0x200000353480 with size: 0.004333 MiB 00:10:21.320 element at address: 0x200000356980 with size: 0.004333 MiB 00:10:21.320 element at address: 0x200000359e80 with size: 0.004333 MiB 00:10:21.320 element at address: 0x20000035d380 with size: 0.004333 MiB 00:10:21.320 element at address: 0x200000360880 with size: 0.004333 MiB 00:10:21.320 element at address: 0x200000363d80 with size: 0.004333 MiB 00:10:21.320 element at address: 0x2000003677c0 with size: 0.004333 MiB 00:10:21.320 element at address: 0x20000036b200 with size: 0.004333 MiB 00:10:21.320 element at address: 0x20000036ec40 with size: 0.004333 MiB 00:10:21.320 element at address: 0x200000372680 with size: 0.004333 MiB 00:10:21.320 element at address: 0x2000003760c0 with size: 0.004333 MiB 00:10:21.320 element at address: 0x200000379b00 with size: 0.004333 MiB 00:10:21.320 element at address: 0x20000037d540 with size: 0.004333 MiB 00:10:21.320 element at address: 0x200000380f80 with size: 0.004333 MiB 00:10:21.320 element at address: 0x2000003849c0 with size: 0.004333 MiB 00:10:21.320 element at address: 0x200000388400 with size: 0.004333 MiB 00:10:21.320 element at address: 0x20000038be40 with size: 0.004333 MiB 00:10:21.320 element at address: 0x20000038f880 with size: 0.004333 MiB 00:10:21.320 element at address: 0x2000003932c0 with size: 0.004333 MiB 00:10:21.320 element at address: 0x200000396d00 with size: 0.004333 MiB 00:10:21.320 element at address: 0x20000039a740 with size: 0.004333 MiB 00:10:21.320 element at address: 0x20000039e180 with size: 0.004333 MiB 00:10:21.320 element at address: 0x2000003a1bc0 with size: 0.004333 MiB 00:10:21.320 element at address: 0x2000003a5600 with size: 0.004333 MiB 00:10:21.320 element at address: 0x2000003a9040 with size: 0.004333 MiB 00:10:21.320 element at address: 0x2000003aca80 with size: 0.004333 MiB 00:10:21.320 element at address: 0x2000003b04c0 with size: 0.004333 MiB 00:10:21.320 element at address: 0x2000003b3f00 with size: 0.004333 MiB 00:10:21.320 element at address: 0x2000003b7940 with size: 0.004333 MiB 00:10:21.320 element at address: 0x2000003bb380 with size: 0.004333 MiB 00:10:21.320 element at address: 0x2000003bedc0 with size: 0.004333 MiB 
00:10:21.320 element at address: 0x2000003c2800 with size: 0.004333 MiB 00:10:21.320 element at address: 0x2000003c6240 with size: 0.004333 MiB 00:10:21.320 element at address: 0x2000003c9c80 with size: 0.004333 MiB 00:10:21.320 element at address: 0x2000003cd6c0 with size: 0.004333 MiB 00:10:21.320 element at address: 0x2000003d1100 with size: 0.004333 MiB 00:10:21.320 element at address: 0x2000003d4b40 with size: 0.004333 MiB 00:10:21.320 element at address: 0x2000003d8d40 with size: 0.004333 MiB 00:10:21.320 element at address: 0x200000330180 with size: 0.004028 MiB 00:10:21.320 element at address: 0x200000331200 with size: 0.004028 MiB 00:10:21.320 element at address: 0x200000333680 with size: 0.004028 MiB 00:10:21.320 element at address: 0x200000334700 with size: 0.004028 MiB 00:10:21.320 element at address: 0x200000336b80 with size: 0.004028 MiB 00:10:21.320 element at address: 0x200000337c00 with size: 0.004028 MiB 00:10:21.320 element at address: 0x20000033a080 with size: 0.004028 MiB 00:10:21.320 element at address: 0x20000033b100 with size: 0.004028 MiB 00:10:21.320 element at address: 0x20000033d580 with size: 0.004028 MiB 00:10:21.320 element at address: 0x20000033e600 with size: 0.004028 MiB 00:10:21.320 element at address: 0x200000340a80 with size: 0.004028 MiB 00:10:21.320 element at address: 0x200000341b00 with size: 0.004028 MiB 00:10:21.320 element at address: 0x200000343f80 with size: 0.004028 MiB 00:10:21.320 element at address: 0x200000345000 with size: 0.004028 MiB 00:10:21.320 element at address: 0x200000347480 with size: 0.004028 MiB 00:10:21.320 element at address: 0x200000348500 with size: 0.004028 MiB 00:10:21.320 element at address: 0x20000034a980 with size: 0.004028 MiB 00:10:21.320 element at address: 0x20000034ba00 with size: 0.004028 MiB 00:10:21.320 element at address: 0x20000034de80 with size: 0.004028 MiB 00:10:21.320 element at address: 0x20000034ef00 with size: 0.004028 MiB 00:10:21.320 element at address: 0x200000351380 with size: 0.004028 MiB 00:10:21.320 element at address: 0x200000352400 with size: 0.004028 MiB 00:10:21.320 element at address: 0x200000354880 with size: 0.004028 MiB 00:10:21.320 element at address: 0x200000355900 with size: 0.004028 MiB 00:10:21.320 element at address: 0x200000357d80 with size: 0.004028 MiB 00:10:21.320 element at address: 0x200000358e00 with size: 0.004028 MiB 00:10:21.320 element at address: 0x20000035b280 with size: 0.004028 MiB 00:10:21.321 element at address: 0x20000035c300 with size: 0.004028 MiB 00:10:21.321 element at address: 0x20000035e780 with size: 0.004028 MiB 00:10:21.321 element at address: 0x20000035f800 with size: 0.004028 MiB 00:10:21.321 element at address: 0x200000361c80 with size: 0.004028 MiB 00:10:21.321 element at address: 0x200000362d00 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003656c0 with size: 0.004028 MiB 00:10:21.321 element at address: 0x200000366740 with size: 0.004028 MiB 00:10:21.321 element at address: 0x200000369100 with size: 0.004028 MiB 00:10:21.321 element at address: 0x20000036a180 with size: 0.004028 MiB 00:10:21.321 element at address: 0x20000036cb40 with size: 0.004028 MiB 00:10:21.321 element at address: 0x20000036dbc0 with size: 0.004028 MiB 00:10:21.321 element at address: 0x200000370580 with size: 0.004028 MiB 00:10:21.321 element at address: 0x200000371600 with size: 0.004028 MiB 00:10:21.321 element at address: 0x200000373fc0 with size: 0.004028 MiB 00:10:21.321 element at address: 0x200000375040 with size: 0.004028 MiB 00:10:21.321 element at 
address: 0x200000377a00 with size: 0.004028 MiB 00:10:21.321 element at address: 0x200000378a80 with size: 0.004028 MiB 00:10:21.321 element at address: 0x20000037b440 with size: 0.004028 MiB 00:10:21.321 element at address: 0x20000037c4c0 with size: 0.004028 MiB 00:10:21.321 element at address: 0x20000037ee80 with size: 0.004028 MiB 00:10:21.321 element at address: 0x20000037ff00 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003828c0 with size: 0.004028 MiB 00:10:21.321 element at address: 0x200000383940 with size: 0.004028 MiB 00:10:21.321 element at address: 0x200000386300 with size: 0.004028 MiB 00:10:21.321 element at address: 0x200000387380 with size: 0.004028 MiB 00:10:21.321 element at address: 0x200000389d40 with size: 0.004028 MiB 00:10:21.321 element at address: 0x20000038adc0 with size: 0.004028 MiB 00:10:21.321 element at address: 0x20000038d780 with size: 0.004028 MiB 00:10:21.321 element at address: 0x20000038e800 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003911c0 with size: 0.004028 MiB 00:10:21.321 element at address: 0x200000392240 with size: 0.004028 MiB 00:10:21.321 element at address: 0x200000394c00 with size: 0.004028 MiB 00:10:21.321 element at address: 0x200000395c80 with size: 0.004028 MiB 00:10:21.321 element at address: 0x200000398640 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003996c0 with size: 0.004028 MiB 00:10:21.321 element at address: 0x20000039c080 with size: 0.004028 MiB 00:10:21.321 element at address: 0x20000039d100 with size: 0.004028 MiB 00:10:21.321 element at address: 0x20000039fac0 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003a0b40 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003a3500 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003a4580 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003a6f40 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003a7fc0 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003aa980 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003aba00 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003ae3c0 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003af440 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003b1e00 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003b2e80 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003b5840 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003b68c0 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003b9280 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003ba300 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003bccc0 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003bdd40 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003c0700 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003c1780 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003c4140 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003c51c0 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003c7b80 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003c8c00 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003cb5c0 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003cc640 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003cf000 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003d0080 
with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003d2a40 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003d3ac0 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003d6c40 with size: 0.004028 MiB 00:10:21.321 element at address: 0x2000003d7cc0 with size: 0.004028 MiB 00:10:21.321 element at address: 0x200000205e00 with size: 0.000305 MiB 00:10:21.321 element at address: 0x200000200000 with size: 0.000244 MiB 00:10:21.321 element at address: 0x200000200100 with size: 0.000183 MiB 00:10:21.321 element at address: 0x2000002001c0 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000200280 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000200340 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000200400 with size: 0.000183 MiB 00:10:21.321 element at address: 0x2000002004c0 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000200580 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000200640 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000200700 with size: 0.000183 MiB 00:10:21.321 element at address: 0x2000002007c0 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000200880 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000200940 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000200a00 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000200ac0 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000200b80 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000200c40 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000200d00 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000200dc0 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000200e80 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000200f40 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000201000 with size: 0.000183 MiB 00:10:21.321 element at address: 0x2000002010c0 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000201180 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000201240 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000201300 with size: 0.000183 MiB 00:10:21.321 element at address: 0x2000002013c0 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000201480 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000201540 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000201600 with size: 0.000183 MiB 00:10:21.321 element at address: 0x2000002016c0 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000201780 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000201840 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000201900 with size: 0.000183 MiB 00:10:21.321 element at address: 0x2000002019c0 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000201a80 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000201b40 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000201c00 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000201cc0 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000201d80 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000201e40 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000201f00 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000201fc0 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000202080 with size: 0.000183 MiB 
00:10:21.321 element at address: 0x200000202140 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000202200 with size: 0.000183 MiB 00:10:21.321 element at address: 0x2000002022c0 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000202380 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000202440 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000202500 with size: 0.000183 MiB 00:10:21.321 element at address: 0x2000002025c0 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000202680 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000202740 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000202800 with size: 0.000183 MiB 00:10:21.321 element at address: 0x2000002028c0 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000202980 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000202a40 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000202b00 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000202bc0 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000202c80 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000202d40 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000202e00 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000202ec0 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000202f80 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000203040 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000203100 with size: 0.000183 MiB 00:10:21.321 element at address: 0x2000002031c0 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000203280 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000203340 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000203400 with size: 0.000183 MiB 00:10:21.321 element at address: 0x2000002034c0 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000203580 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000203640 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000203700 with size: 0.000183 MiB 00:10:21.321 element at address: 0x2000002037c0 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000203880 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000203940 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000203a00 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000203ac0 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000203b80 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000203c40 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000203d00 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000203dc0 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000203e80 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000203f40 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000204000 with size: 0.000183 MiB 00:10:21.321 element at address: 0x2000002040c0 with size: 0.000183 MiB 00:10:21.321 element at address: 0x200000204180 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000204240 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000204300 with size: 0.000183 MiB 00:10:21.322 element at address: 0x2000002043c0 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000204480 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000204540 with size: 0.000183 MiB 00:10:21.322 element at 
address: 0x200000204600 with size: 0.000183 MiB 00:10:21.322 element at address: 0x2000002046c0 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000204780 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000204840 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000204900 with size: 0.000183 MiB 00:10:21.322 element at address: 0x2000002049c0 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000204a80 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000204b40 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000204c00 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000204cc0 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000204d80 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000204e40 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000204f00 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000204fc0 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000205080 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000205140 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000205200 with size: 0.000183 MiB 00:10:21.322 element at address: 0x2000002052c0 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000205380 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000205440 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000205500 with size: 0.000183 MiB 00:10:21.322 element at address: 0x2000002055c0 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000205680 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000205740 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000205800 with size: 0.000183 MiB 00:10:21.322 element at address: 0x2000002058c0 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000205980 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000205a40 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000205b00 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000205bc0 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000205c80 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000205d40 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000205f40 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000206000 with size: 0.000183 MiB 00:10:21.322 element at address: 0x2000002060c0 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000206180 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000206240 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000206300 with size: 0.000183 MiB 00:10:21.322 element at address: 0x2000002063c0 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000206480 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000206540 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000206600 with size: 0.000183 MiB 00:10:21.322 element at address: 0x2000002066c0 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000206780 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000206840 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000206900 with size: 0.000183 MiB 00:10:21.322 element at address: 0x2000002069c0 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000206a80 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000206b40 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000206c00 
with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000206cc0 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000206d80 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000206e40 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000206f00 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000206fc0 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000207080 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000207140 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000207200 with size: 0.000183 MiB 00:10:21.322 element at address: 0x2000002072c0 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000207380 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000207440 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000207500 with size: 0.000183 MiB 00:10:21.322 element at address: 0x2000002075c0 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000207680 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000207740 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000207800 with size: 0.000183 MiB 00:10:21.322 element at address: 0x2000002078c0 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000207980 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000207b80 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000020be40 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022c100 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022c1c0 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022c280 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022c340 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022c400 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022c4c0 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022c580 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022c640 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022c700 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022c7c0 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022c880 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022c940 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022ca00 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022cac0 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022cb80 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022cc40 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022cd00 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022cdc0 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022ce80 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022d080 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022d140 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022d200 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022d2c0 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022d380 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022d440 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022d500 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022d5c0 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022d680 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022d740 with size: 0.000183 MiB 
00:10:21.322 element at address: 0x20000022d800 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022d8c0 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022d980 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022da40 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022db00 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022dbc0 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000022dc80 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000032fe80 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000032ff40 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000333440 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000336940 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000339e40 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000033d340 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000340840 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000343d40 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000347240 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000034a740 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000034dc40 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000351140 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000354640 with size: 0.000183 MiB 00:10:21.322 element at address: 0x200000357b40 with size: 0.000183 MiB 00:10:21.322 element at address: 0x20000035b040 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000035e540 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000361a40 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000364f40 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000365100 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003652c0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000365380 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000368980 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000368b40 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000368d00 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000368dc0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000036c3c0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000036c580 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000036c740 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000036c800 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000036fe00 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000036ffc0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000370180 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000370240 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000373840 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000373a00 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000373bc0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000373c80 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000377280 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000377440 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000377600 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003776c0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000037acc0 with size: 0.000183 MiB 00:10:21.323 element at 
address: 0x20000037ae80 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000037b040 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000037b100 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000037e700 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000037e8c0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000037ea80 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000037eb40 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000382140 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000382300 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003824c0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000382580 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000385b80 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000385d40 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000385f00 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000385fc0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003895c0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000389780 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000389940 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000389a00 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000038d000 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000038d1c0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000038d380 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000038d440 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000390a40 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000390c00 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000390dc0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000390e80 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000394480 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000394640 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000394800 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003948c0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000397ec0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000398080 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000398240 with size: 0.000183 MiB 00:10:21.323 element at address: 0x200000398300 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000039b900 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000039bac0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000039bc80 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000039bd40 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000039f340 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000039f500 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000039f6c0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000039f780 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003a2d80 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003a2f40 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003a3100 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003a31c0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003a67c0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003a6980 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003a6b40 
with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003a6c00 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003aa200 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003aa3c0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003aa580 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003aa640 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003adc40 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003ade00 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003adfc0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003ae080 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003b1680 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003b1840 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003b1a00 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003b1ac0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003b50c0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003b5280 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003b5440 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003b5500 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003b8b00 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003b8cc0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003b8e80 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003b8f40 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003bc540 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003bc700 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003bc8c0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003bc980 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003bff80 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003c0140 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003c0300 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003c03c0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003c39c0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003c3b80 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003c3d40 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003c3e00 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003c7400 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003c75c0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003c7780 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003c7840 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003cae40 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003cb000 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003cb1c0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003cb280 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003ce880 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003cea40 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003cec00 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003cecc0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003d22c0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003d2480 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003d2640 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003d2700 with size: 0.000183 MiB 
00:10:21.323 element at address: 0x2000003d5e40 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003d60c0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003d6840 with size: 0.000183 MiB 00:10:21.323 element at address: 0x2000003d6900 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000087c740 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000087c800 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000087c980 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:10:21.323 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:10:21.324 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:10:21.324 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:10:21.324 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e65b00 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e65bc0 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6c7c0 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:10:21.324 element at 
address: 0x200027e6e040 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:10:21.324 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:10:21.324 list of memzone associated elements. 
size: 602.305481 MiB 00:10:21.324 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:10:21.324 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:10:21.324 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:10:21.324 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:10:21.324 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:10:21.324 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1859460_0 00:10:21.324 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:10:21.324 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1859460_0 00:10:21.324 element at address: 0x200003fff380 with size: 48.003052 MiB 00:10:21.324 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1859460_0 00:10:21.324 element at address: 0x2000195be940 with size: 20.255554 MiB 00:10:21.324 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:10:21.324 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:10:21.324 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:10:21.324 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:10:21.324 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1859460 00:10:21.324 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:10:21.324 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1859460 00:10:21.324 element at address: 0x20000022dd40 with size: 1.008118 MiB 00:10:21.324 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1859460 00:10:21.324 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:10:21.324 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:21.324 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:10:21.324 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:21.324 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:10:21.325 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:21.325 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:10:21.325 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:10:21.325 element at address: 0x200003eff180 with size: 1.000488 MiB 00:10:21.325 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1859460 00:10:21.325 element at address: 0x200003affc00 with size: 1.000488 MiB 00:10:21.325 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1859460 00:10:21.325 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:10:21.325 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1859460 00:10:21.325 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:10:21.325 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1859460 00:10:21.325 element at address: 0x200003a7fa00 with size: 0.500488 MiB 00:10:21.325 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1859460 00:10:21.325 element at address: 0x20000b27dc40 with size: 0.500488 MiB 00:10:21.325 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:21.325 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:10:21.325 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:21.325 element at address: 0x20001947c600 with size: 0.250488 MiB 00:10:21.325 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:10:21.325 element at address: 0x20000020bf00 with size: 0.125488 MiB 00:10:21.325 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1859460 00:10:21.325 element at address: 0x2000070f5c40 with size: 0.031738 MiB 00:10:21.325 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:10:21.325 element at address: 0x200027e65c80 with size: 0.023743 MiB 00:10:21.325 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:10:21.325 element at address: 0x200000207c40 with size: 0.016113 MiB 00:10:21.325 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1859460 00:10:21.325 element at address: 0x200027e6bdc0 with size: 0.002441 MiB 00:10:21.325 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:10:21.325 element at address: 0x2000003d6280 with size: 0.001404 MiB 00:10:21.325 associated memzone info: size: 0.001282 MiB name: QAT_SYM_CAPA_GEN_1 00:10:21.325 element at address: 0x2000003d6ac0 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3d:01.0_qat 00:10:21.325 element at address: 0x2000003d28c0 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3d:01.1_qat 00:10:21.325 element at address: 0x2000003cee80 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3d:01.2_qat 00:10:21.325 element at address: 0x2000003cb440 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3d:01.3_qat 00:10:21.325 element at address: 0x2000003c7a00 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3d:01.4_qat 00:10:21.325 element at address: 0x2000003c3fc0 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3d:01.5_qat 00:10:21.325 element at address: 0x2000003c0580 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3d:01.6_qat 00:10:21.325 element at address: 0x2000003bcb40 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3d:01.7_qat 00:10:21.325 element at address: 0x2000003b9100 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3d:02.0_qat 00:10:21.325 element at address: 0x2000003b56c0 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3d:02.1_qat 00:10:21.325 element at address: 0x2000003b1c80 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3d:02.2_qat 00:10:21.325 element at address: 0x2000003ae240 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3d:02.3_qat 00:10:21.325 element at address: 0x2000003aa800 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3d:02.4_qat 00:10:21.325 element at address: 0x2000003a6dc0 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3d:02.5_qat 00:10:21.325 element at address: 0x2000003a3380 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3d:02.6_qat 00:10:21.325 element at address: 0x20000039f940 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3d:02.7_qat 00:10:21.325 element at address: 0x20000039bf00 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3f:01.0_qat 00:10:21.325 element at address: 0x2000003984c0 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 
0000:3f:01.1_qat 00:10:21.325 element at address: 0x200000394a80 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3f:01.2_qat 00:10:21.325 element at address: 0x200000391040 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3f:01.3_qat 00:10:21.325 element at address: 0x20000038d600 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3f:01.4_qat 00:10:21.325 element at address: 0x200000389bc0 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3f:01.5_qat 00:10:21.325 element at address: 0x200000386180 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3f:01.6_qat 00:10:21.325 element at address: 0x200000382740 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3f:01.7_qat 00:10:21.325 element at address: 0x20000037ed00 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3f:02.0_qat 00:10:21.325 element at address: 0x20000037b2c0 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3f:02.1_qat 00:10:21.325 element at address: 0x200000377880 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3f:02.2_qat 00:10:21.325 element at address: 0x200000373e40 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3f:02.3_qat 00:10:21.325 element at address: 0x200000370400 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3f:02.4_qat 00:10:21.325 element at address: 0x20000036c9c0 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3f:02.5_qat 00:10:21.325 element at address: 0x200000368f80 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3f:02.6_qat 00:10:21.325 element at address: 0x200000365540 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:3f:02.7_qat 00:10:21.325 element at address: 0x200000361b00 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:da:01.0_qat 00:10:21.325 element at address: 0x20000035e600 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:da:01.1_qat 00:10:21.325 element at address: 0x20000035b100 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:da:01.2_qat 00:10:21.325 element at address: 0x200000357c00 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:da:01.3_qat 00:10:21.325 element at address: 0x200000354700 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:da:01.4_qat 00:10:21.325 element at address: 0x200000351200 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:da:01.5_qat 00:10:21.325 element at address: 0x20000034dd00 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:da:01.6_qat 00:10:21.325 element at address: 0x20000034a800 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:da:01.7_qat 00:10:21.325 element at address: 0x200000347300 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:da:02.0_qat 00:10:21.325 element at address: 
0x200000343e00 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:da:02.1_qat 00:10:21.325 element at address: 0x200000340900 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:da:02.2_qat 00:10:21.325 element at address: 0x20000033d400 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:da:02.3_qat 00:10:21.325 element at address: 0x200000339f00 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:da:02.4_qat 00:10:21.325 element at address: 0x200000336a00 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:da:02.5_qat 00:10:21.325 element at address: 0x200000333500 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:da:02.6_qat 00:10:21.325 element at address: 0x200000330000 with size: 0.000366 MiB 00:10:21.325 associated memzone info: size: 0.000244 MiB name: 0000:da:02.7_qat 00:10:21.325 element at address: 0x2000003d5d00 with size: 0.000305 MiB 00:10:21.325 associated memzone info: size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1 00:10:21.325 element at address: 0x20000022cf40 with size: 0.000305 MiB 00:10:21.325 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1859460 00:10:21.325 element at address: 0x200000207a40 with size: 0.000305 MiB 00:10:21.325 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1859460 00:10:21.325 element at address: 0x200027e6c880 with size: 0.000305 MiB 00:10:21.325 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:10:21.325 element at address: 0x2000003d69c0 with size: 0.000244 MiB 00:10:21.325 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_0 00:10:21.325 element at address: 0x2000003d6180 with size: 0.000244 MiB 00:10:21.325 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_0 00:10:21.325 element at address: 0x2000003d5f00 with size: 0.000244 MiB 00:10:21.325 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_1 00:10:21.325 element at address: 0x2000003d27c0 with size: 0.000244 MiB 00:10:21.325 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_2 00:10:21.325 element at address: 0x2000003d2540 with size: 0.000244 MiB 00:10:21.325 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_1 00:10:21.325 element at address: 0x2000003d2380 with size: 0.000244 MiB 00:10:21.325 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_3 00:10:21.325 element at address: 0x2000003ced80 with size: 0.000244 MiB 00:10:21.325 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_4 00:10:21.325 element at address: 0x2000003ceb00 with size: 0.000244 MiB 00:10:21.325 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_2 00:10:21.325 element at address: 0x2000003ce940 with size: 0.000244 MiB 00:10:21.325 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_5 00:10:21.326 element at address: 0x2000003cb340 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_6 00:10:21.326 element at address: 0x2000003cb0c0 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_3 00:10:21.326 element at address: 0x2000003caf00 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_7 00:10:21.326 element at 
address: 0x2000003c7900 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_8 00:10:21.326 element at address: 0x2000003c7680 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_4 00:10:21.326 element at address: 0x2000003c74c0 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_9 00:10:21.326 element at address: 0x2000003c3ec0 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_10 00:10:21.326 element at address: 0x2000003c3c40 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_5 00:10:21.326 element at address: 0x2000003c3a80 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_11 00:10:21.326 element at address: 0x2000003c0480 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_12 00:10:21.326 element at address: 0x2000003c0200 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_6 00:10:21.326 element at address: 0x2000003c0040 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_13 00:10:21.326 element at address: 0x2000003bca40 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_14 00:10:21.326 element at address: 0x2000003bc7c0 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_7 00:10:21.326 element at address: 0x2000003bc600 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_15 00:10:21.326 element at address: 0x2000003b9000 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_16 00:10:21.326 element at address: 0x2000003b8d80 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_8 00:10:21.326 element at address: 0x2000003b8bc0 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_17 00:10:21.326 element at address: 0x2000003b55c0 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_18 00:10:21.326 element at address: 0x2000003b5340 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_9 00:10:21.326 element at address: 0x2000003b5180 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_19 00:10:21.326 element at address: 0x2000003b1b80 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_20 00:10:21.326 element at address: 0x2000003b1900 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_10 00:10:21.326 element at address: 0x2000003b1740 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_21 00:10:21.326 element at address: 0x2000003ae140 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_22 00:10:21.326 element at address: 0x2000003adec0 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 
0.000122 MiB name: rte_compressdev_data_11 00:10:21.326 element at address: 0x2000003add00 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_23 00:10:21.326 element at address: 0x2000003aa700 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_24 00:10:21.326 element at address: 0x2000003aa480 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_12 00:10:21.326 element at address: 0x2000003aa2c0 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_25 00:10:21.326 element at address: 0x2000003a6cc0 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_26 00:10:21.326 element at address: 0x2000003a6a40 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_13 00:10:21.326 element at address: 0x2000003a6880 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_27 00:10:21.326 element at address: 0x2000003a3280 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_28 00:10:21.326 element at address: 0x2000003a3000 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_14 00:10:21.326 element at address: 0x2000003a2e40 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_29 00:10:21.326 element at address: 0x20000039f840 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_30 00:10:21.326 element at address: 0x20000039f5c0 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_15 00:10:21.326 element at address: 0x20000039f400 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_31 00:10:21.326 element at address: 0x20000039be00 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_32 00:10:21.326 element at address: 0x20000039bb80 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_16 00:10:21.326 element at address: 0x20000039b9c0 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_33 00:10:21.326 element at address: 0x2000003983c0 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_34 00:10:21.326 element at address: 0x200000398140 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_17 00:10:21.326 element at address: 0x200000397f80 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_35 00:10:21.326 element at address: 0x200000394980 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_36 00:10:21.326 element at address: 0x200000394700 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_18 00:10:21.326 element at address: 0x200000394540 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_37 00:10:21.326 element at address: 
0x200000390f40 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_38 00:10:21.326 element at address: 0x200000390cc0 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_19 00:10:21.326 element at address: 0x200000390b00 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_39 00:10:21.326 element at address: 0x20000038d500 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_40 00:10:21.326 element at address: 0x20000038d280 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_20 00:10:21.326 element at address: 0x20000038d0c0 with size: 0.000244 MiB 00:10:21.326 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_41 00:10:21.327 element at address: 0x200000389ac0 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_42 00:10:21.327 element at address: 0x200000389840 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_21 00:10:21.327 element at address: 0x200000389680 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_43 00:10:21.327 element at address: 0x200000386080 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_44 00:10:21.327 element at address: 0x200000385e00 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_22 00:10:21.327 element at address: 0x200000385c40 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_45 00:10:21.327 element at address: 0x200000382640 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_46 00:10:21.327 element at address: 0x2000003823c0 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_23 00:10:21.327 element at address: 0x200000382200 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_47 00:10:21.327 element at address: 0x20000037ec00 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_48 00:10:21.327 element at address: 0x20000037e980 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_24 00:10:21.327 element at address: 0x20000037e7c0 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_49 00:10:21.327 element at address: 0x20000037b1c0 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_50 00:10:21.327 element at address: 0x20000037af40 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_25 00:10:21.327 element at address: 0x20000037ad80 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_51 00:10:21.327 element at address: 0x200000377780 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_52 00:10:21.327 element at address: 0x200000377500 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 
0.000122 MiB name: rte_compressdev_data_26 00:10:21.327 element at address: 0x200000377340 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_53 00:10:21.327 element at address: 0x200000373d40 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_54 00:10:21.327 element at address: 0x200000373ac0 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_27 00:10:21.327 element at address: 0x200000373900 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_55 00:10:21.327 element at address: 0x200000370300 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_56 00:10:21.327 element at address: 0x200000370080 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_28 00:10:21.327 element at address: 0x20000036fec0 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_57 00:10:21.327 element at address: 0x20000036c8c0 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_58 00:10:21.327 element at address: 0x20000036c640 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_29 00:10:21.327 element at address: 0x20000036c480 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_59 00:10:21.327 element at address: 0x200000368e80 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_60 00:10:21.327 element at address: 0x200000368c00 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_30 00:10:21.327 element at address: 0x200000368a40 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_61 00:10:21.327 element at address: 0x200000365440 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_62 00:10:21.327 element at address: 0x2000003651c0 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_31 00:10:21.327 element at address: 0x200000365000 with size: 0.000244 MiB 00:10:21.327 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_63 00:10:21.327 element at address: 0x2000003d6000 with size: 0.000183 MiB 00:10:21.327 associated memzone info: size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1 00:10:21.327 02:16:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:10:21.327 02:16:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1859460 00:10:21.327 02:16:11 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1859460 ']' 00:10:21.327 02:16:11 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1859460 00:10:21.327 02:16:11 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:10:21.327 02:16:11 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:21.327 02:16:11 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1859460 00:10:21.327 02:16:11 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:21.327 02:16:11 
dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:21.327 02:16:11 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1859460' 00:10:21.327 killing process with pid 1859460 00:10:21.327 02:16:11 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1859460 00:10:21.327 02:16:11 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1859460 00:10:21.897 00:10:21.897 real 0m1.683s 00:10:21.897 user 0m1.786s 00:10:21.897 sys 0m0.554s 00:10:21.897 02:16:12 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:21.897 02:16:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:21.897 ************************************ 00:10:21.897 END TEST dpdk_mem_utility 00:10:21.897 ************************************ 00:10:21.897 02:16:12 -- common/autotest_common.sh@1142 -- # return 0 00:10:21.897 02:16:12 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh 00:10:21.897 02:16:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:21.897 02:16:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:21.897 02:16:12 -- common/autotest_common.sh@10 -- # set +x 00:10:21.897 ************************************ 00:10:21.897 START TEST event 00:10:21.897 ************************************ 00:10:21.897 02:16:12 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh 00:10:21.897 * Looking for test storage... 00:10:21.897 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event 00:10:21.897 02:16:12 event -- event/event.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:10:21.897 02:16:12 event -- bdev/nbd_common.sh@6 -- # set -e 00:10:21.897 02:16:12 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:21.897 02:16:12 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:21.897 02:16:12 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:21.897 02:16:12 event -- common/autotest_common.sh@10 -- # set +x 00:10:21.897 ************************************ 00:10:21.897 START TEST event_perf 00:10:21.897 ************************************ 00:10:21.897 02:16:12 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:22.156 Running I/O for 1 seconds...[2024-07-11 02:16:12.335276] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
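The teardown traced above runs the harness's killprocess helper against pid 1859460. Reconstructed from the traced commands, a minimal sketch rather than the exact common/autotest_common.sh source:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # the '[' -z 1859460 ']' guard in the trace
        kill -0 "$pid" || return 0           # bail out if the process is already gone
        if [ "$(uname)" = Linux ]; then
            # never signal sudo itself; the trace compares comm= against "sudo"
            [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # reap the child to collect its exit status
    }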
00:10:22.157 [2024-07-11 02:16:12.335352] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1859758 ] 00:10:22.157 [2024-07-11 02:16:12.468656] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:22.157 [2024-07-11 02:16:12.521133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.157 [2024-07-11 02:16:12.521234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:22.157 [2024-07-11 02:16:12.521337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.157 [2024-07-11 02:16:12.521336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:23.534 Running I/O for 1 seconds... 00:10:23.534 lcore 0: 105345 00:10:23.534 lcore 1: 105348 00:10:23.534 lcore 2: 105350 00:10:23.534 lcore 3: 105349 00:10:23.534 done. 00:10:23.534 00:10:23.534 real 0m1.290s 00:10:23.534 user 0m4.133s 00:10:23.534 sys 0m0.144s 00:10:23.534 02:16:13 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:23.534 02:16:13 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:10:23.534 ************************************ 00:10:23.534 END TEST event_perf 00:10:23.534 ************************************ 00:10:23.534 02:16:13 event -- common/autotest_common.sh@1142 -- # return 0 00:10:23.534 02:16:13 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:10:23.534 02:16:13 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:23.534 02:16:13 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:23.534 02:16:13 event -- common/autotest_common.sh@10 -- # set +x 00:10:23.534 ************************************ 00:10:23.534 START TEST event_reactor 00:10:23.534 ************************************ 00:10:23.534 02:16:13 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:10:23.535 [2024-07-11 02:16:13.707447] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
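The event_perf run above spins one event loop per core in the 0xF mask for one second and prints a per-lcore count at the end. Invocation shape as traced, with the long workspace path shortened to $SPDK_TEST (an abbreviation for readability, not a variable the harness sets):

    run_test event_perf "$SPDK_TEST/event/event_perf/event_perf" -m 0xF -t 1
    # -m 0xF : reactors on cores 0-3      -t 1 : generate events for 1 second
    # the 'lcore N: <count>' lines report how many events each core processed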
00:10:23.535 [2024-07-11 02:16:13.707518] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1859956 ] 00:10:23.535 [2024-07-11 02:16:13.840896] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.535 [2024-07-11 02:16:13.891842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.915 test_start 00:10:24.915 oneshot 00:10:24.915 tick 100 00:10:24.915 tick 100 00:10:24.915 tick 250 00:10:24.915 tick 100 00:10:24.915 tick 100 00:10:24.915 tick 250 00:10:24.915 tick 100 00:10:24.915 tick 500 00:10:24.915 tick 100 00:10:24.915 tick 100 00:10:24.915 tick 250 00:10:24.915 tick 100 00:10:24.915 tick 100 00:10:24.915 test_end 00:10:24.915 00:10:24.915 real 0m1.285s 00:10:24.915 user 0m1.138s 00:10:24.915 sys 0m0.140s 00:10:24.915 02:16:14 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:24.915 02:16:14 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:10:24.915 ************************************ 00:10:24.915 END TEST event_reactor 00:10:24.915 ************************************ 00:10:24.915 02:16:15 event -- common/autotest_common.sh@1142 -- # return 0 00:10:24.915 02:16:15 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:24.915 02:16:15 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:24.915 02:16:15 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:24.915 02:16:15 event -- common/autotest_common.sh@10 -- # set +x 00:10:24.915 ************************************ 00:10:24.915 START TEST event_reactor_perf 00:10:24.915 ************************************ 00:10:24.915 02:16:15 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:24.915 [2024-07-11 02:16:15.083251] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
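The reactor test above runs on a single core (-c 0x1 in the EAL parameters) and prints one marker per callback between test_start and test_end; reading the output alone, 'oneshot' fires once while the 'tick 100', 'tick 250' and 'tick 500' lines look like repeating pollers at three different periods, though that interpretation is inferred, not confirmed by the trace:

    run_test event_reactor "$SPDK_TEST/event/reactor/reactor" -t 1   # $SPDK_TEST abbreviated as above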
00:10:24.915 [2024-07-11 02:16:15.083312] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1860154 ] 00:10:24.915 [2024-07-11 02:16:15.218630] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.915 [2024-07-11 02:16:15.269781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.295 test_start 00:10:26.295 test_end 00:10:26.295 Performance: 328453 events per second 00:10:26.295 00:10:26.295 real 0m1.290s 00:10:26.295 user 0m1.129s 00:10:26.295 sys 0m0.154s 00:10:26.295 02:16:16 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:26.295 02:16:16 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:26.295 ************************************ 00:10:26.295 END TEST event_reactor_perf 00:10:26.295 ************************************ 00:10:26.295 02:16:16 event -- common/autotest_common.sh@1142 -- # return 0 00:10:26.295 02:16:16 event -- event/event.sh@49 -- # uname -s 00:10:26.296 02:16:16 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:26.296 02:16:16 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:10:26.296 02:16:16 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:26.296 02:16:16 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:26.296 02:16:16 event -- common/autotest_common.sh@10 -- # set +x 00:10:26.296 ************************************ 00:10:26.296 START TEST event_scheduler 00:10:26.296 ************************************ 00:10:26.296 02:16:16 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:10:26.296 * Looking for test storage... 00:10:26.296 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler 00:10:26.296 02:16:16 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:26.296 02:16:16 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1860381 00:10:26.296 02:16:16 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:26.296 02:16:16 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:26.296 02:16:16 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1860381 00:10:26.296 02:16:16 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1860381 ']' 00:10:26.296 02:16:16 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.296 02:16:16 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:26.296 02:16:16 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
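Because the scheduler app is launched with --wait-for-rpc, framework initialization is held back until the harness drives it over /var/tmp/spdk.sock. The bring-up visible in the surrounding lines condenses to the following, rpc_cmd being the harness wrapper around scripts/rpc.py:

    "$SPDK_TEST/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    waitforlisten "$scheduler_pid"             # poll until /var/tmp/spdk.sock accepts connections
    rpc_cmd framework_set_scheduler dynamic    # scheduler.sh@39 in the trace
    rpc_cmd framework_start_init               # scheduler.sh@40 in the trace
    # -p 0x2 surfaces as --main-lcore=2 in the EAL parameters above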
00:10:26.296 02:16:16 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:26.296 02:16:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:26.296 [2024-07-11 02:16:16.611268] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:10:26.296 [2024-07-11 02:16:16.611340] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1860381 ] 00:10:26.555 [2024-07-11 02:16:16.820022] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:26.555 [2024-07-11 02:16:16.909607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.555 [2024-07-11 02:16:16.909715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.555 [2024-07-11 02:16:16.909823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:26.555 [2024-07-11 02:16:16.909833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.493 02:16:17 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:27.493 02:16:17 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:10:27.493 02:16:17 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:27.493 02:16:17 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.493 02:16:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:27.493 [2024-07-11 02:16:17.613534] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:10:27.493 [2024-07-11 02:16:17.613588] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:10:27.493 [2024-07-11 02:16:17.613621] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:27.493 [2024-07-11 02:16:17.613646] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:27.493 [2024-07-11 02:16:17.613670] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:27.493 02:16:17 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.493 02:16:17 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:27.493 02:16:17 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.493 02:16:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:27.493 [2024-07-11 02:16:17.735532] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
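The scheduler_create_thread subtest that follows creates pinned threads through the scheduler_plugin RPCs, one per core mask; the bare digits printed between calls (2, 3, ..., 10) appear to be the returned thread ids. Call shape from the trace, with the meaning of -a inferred as an activity percentage:

    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # -n thread name    -m cpu mask to pin to    -a activity level (100 busy, 0 idle)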
00:10:27.493 02:16:17 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.493 02:16:17 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:27.493 02:16:17 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:27.493 02:16:17 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:27.493 02:16:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:27.494 ************************************ 00:10:27.494 START TEST scheduler_create_thread 00:10:27.494 ************************************ 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:27.494 2 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:27.494 3 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:27.494 4 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:27.494 5 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:27.494 6 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:27.494 7 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:27.494 8 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:27.494 9 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:27.494 10 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.494 02:16:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:28.062 02:16:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.062 02:16:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:28.062 02:16:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:28.062 02:16:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.062 02:16:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:29.003 02:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.003 02:16:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:29.003 02:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.003 02:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:29.941 02:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.941 02:16:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:29.941 02:16:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:29.941 02:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.941 02:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:30.880 02:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.880 00:10:30.880 real 0m3.233s 00:10:30.880 user 0m0.025s 00:10:30.880 sys 0m0.007s 00:10:30.880 02:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:30.880 02:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:30.880 ************************************ 00:10:30.880 END TEST scheduler_create_thread 00:10:30.880 ************************************ 00:10:30.880 02:16:21 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:10:30.880 02:16:21 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:30.880 02:16:21 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1860381 00:10:30.880 02:16:21 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1860381 ']' 00:10:30.880 02:16:21 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1860381 00:10:30.880 02:16:21 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:10:30.880 02:16:21 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:30.880 02:16:21 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1860381 00:10:30.880 02:16:21 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:10:30.880 02:16:21 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:10:30.880 02:16:21 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1860381' 00:10:30.880 killing process with pid 1860381 00:10:30.880 02:16:21 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1860381 00:10:30.880 02:16:21 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1860381 00:10:31.140 [2024-07-11 02:16:21.394731] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
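The tail of the subtest above exercises the rest of the plugin surface before shutdown: thread 11 has its activity raised to 50, and a throwaway thread (id 12) is created only to be deleted again. As traced:

    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50           # adjust thread 11
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100   # returns thread_id=12
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12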
00:10:31.398 00:10:31.398 real 0m5.303s 00:10:31.398 user 0m10.525s 00:10:31.398 sys 0m0.659s 00:10:31.398 02:16:21 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:31.398 02:16:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:31.398 ************************************ 00:10:31.399 END TEST event_scheduler 00:10:31.399 ************************************ 00:10:31.399 02:16:21 event -- common/autotest_common.sh@1142 -- # return 0 00:10:31.399 02:16:21 event -- event/event.sh@51 -- # modprobe -n nbd 00:10:31.399 02:16:21 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:31.399 02:16:21 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:31.399 02:16:21 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.399 02:16:21 event -- common/autotest_common.sh@10 -- # set +x 00:10:31.658 ************************************ 00:10:31.658 START TEST app_repeat 00:10:31.658 ************************************ 00:10:31.658 02:16:21 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:10:31.658 02:16:21 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:31.658 02:16:21 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:31.658 02:16:21 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:10:31.658 02:16:21 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:31.658 02:16:21 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:10:31.658 02:16:21 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:10:31.658 02:16:21 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:10:31.658 02:16:21 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1861130 00:10:31.658 02:16:21 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:31.658 02:16:21 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:31.658 02:16:21 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1861130' 00:10:31.658 Process app_repeat pid: 1861130 00:10:31.658 02:16:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:31.658 02:16:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:31.658 spdk_app_start Round 0 00:10:31.658 02:16:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1861130 /var/tmp/spdk-nbd.sock 00:10:31.658 02:16:21 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1861130 ']' 00:10:31.658 02:16:21 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:31.658 02:16:21 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:31.658 02:16:21 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:31.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:31.658 02:16:21 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:31.658 02:16:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:31.658 [2024-07-11 02:16:21.878194] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
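app_repeat restarts the SPDK app several times over the /var/tmp/spdk-nbd.sock RPC socket and re-runs the Malloc-over-nbd data path each round. The driving loop, reconstructed from the traced shell with the round bodies elided:

    nbd_list=(/dev/nbd0 /dev/nbd1); bdev_list=(Malloc0 Malloc1)
    "$SPDK_TEST/event/app_repeat/app_repeat" -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
        # create Malloc0/Malloc1, attach them as /dev/nbd0 and /dev/nbd1, verify, detach
    done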
00:10:31.658 [2024-07-11 02:16:21.878258] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1861130 ] 00:10:31.658 [2024-07-11 02:16:22.012182] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:31.658 [2024-07-11 02:16:22.064010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.658 [2024-07-11 02:16:22.064016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.918 02:16:22 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:31.918 02:16:22 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:10:31.918 02:16:22 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:32.178 Malloc0 00:10:32.178 02:16:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:32.438 Malloc1 00:10:32.438 02:16:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:32.438 02:16:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:32.438 02:16:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:32.438 02:16:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:32.438 02:16:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:32.438 02:16:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:32.438 02:16:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:32.438 02:16:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:32.438 02:16:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:32.438 02:16:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:32.438 02:16:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:32.438 02:16:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:32.438 02:16:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:32.438 02:16:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:32.438 02:16:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:32.438 02:16:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:32.697 /dev/nbd0 00:10:32.697 02:16:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:32.697 02:16:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:32.697 02:16:22 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:10:32.697 02:16:22 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:32.697 02:16:22 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:32.698 02:16:22 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:32.698 02:16:22 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 
/proc/partitions 00:10:32.698 02:16:22 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:32.698 02:16:22 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:32.698 02:16:22 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:32.698 02:16:22 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:32.698 1+0 records in 00:10:32.698 1+0 records out 00:10:32.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394638 s, 10.4 MB/s 00:10:32.698 02:16:22 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:10:32.698 02:16:22 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:32.698 02:16:22 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:10:32.698 02:16:22 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:32.698 02:16:22 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:32.698 02:16:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:32.698 02:16:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:32.698 02:16:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:32.957 /dev/nbd1 00:10:32.957 02:16:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:32.957 02:16:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:32.957 02:16:23 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:10:32.957 02:16:23 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:32.957 02:16:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:32.957 02:16:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:32.957 02:16:23 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:10:32.957 02:16:23 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:32.957 02:16:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:32.957 02:16:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:32.957 02:16:23 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:32.957 1+0 records in 00:10:32.957 1+0 records out 00:10:32.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268812 s, 15.2 MB/s 00:10:32.957 02:16:23 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:10:32.957 02:16:23 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:32.957 02:16:23 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:10:32.957 02:16:23 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:32.957 02:16:23 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:32.957 02:16:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:32.957 02:16:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:32.957 02:16:23 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:32.957 02:16:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:32.957 02:16:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:33.217 02:16:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:33.217 { 00:10:33.217 "nbd_device": "/dev/nbd0", 00:10:33.217 "bdev_name": "Malloc0" 00:10:33.217 }, 00:10:33.217 { 00:10:33.217 "nbd_device": "/dev/nbd1", 00:10:33.217 "bdev_name": "Malloc1" 00:10:33.217 } 00:10:33.217 ]' 00:10:33.217 02:16:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:33.217 { 00:10:33.217 "nbd_device": "/dev/nbd0", 00:10:33.217 "bdev_name": "Malloc0" 00:10:33.217 }, 00:10:33.217 { 00:10:33.217 "nbd_device": "/dev/nbd1", 00:10:33.217 "bdev_name": "Malloc1" 00:10:33.217 } 00:10:33.217 ]' 00:10:33.217 02:16:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:33.217 02:16:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:33.217 /dev/nbd1' 00:10:33.217 02:16:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:33.217 /dev/nbd1' 00:10:33.217 02:16:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:33.217 02:16:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:33.217 02:16:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:33.217 02:16:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:33.217 02:16:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:33.217 02:16:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:33.217 02:16:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:33.217 02:16:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:33.217 02:16:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:33.217 02:16:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:10:33.217 02:16:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:33.217 02:16:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:33.217 256+0 records in 00:10:33.217 256+0 records out 00:10:33.217 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101017 s, 104 MB/s 00:10:33.217 02:16:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:33.217 02:16:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:33.217 256+0 records in 00:10:33.217 256+0 records out 00:10:33.217 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298224 s, 35.2 MB/s 00:10:33.217 02:16:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:33.217 02:16:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:33.476 256+0 records in 00:10:33.476 256+0 records out 00:10:33.476 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0312684 s, 33.5 MB/s 00:10:33.476 02:16:23 event.app_repeat -- 
bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:33.476 02:16:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:33.476 02:16:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:33.476 02:16:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:33.476 02:16:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:10:33.476 02:16:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:33.476 02:16:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:33.476 02:16:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:33.476 02:16:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:10:33.476 02:16:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:33.476 02:16:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:10:33.476 02:16:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:10:33.476 02:16:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:33.476 02:16:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:33.476 02:16:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:33.476 02:16:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:33.476 02:16:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:33.476 02:16:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:33.476 02:16:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:33.734 02:16:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:33.734 02:16:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:33.734 02:16:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:33.734 02:16:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:33.734 02:16:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:33.734 02:16:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:33.734 02:16:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:33.734 02:16:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:33.734 02:16:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:33.734 02:16:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:33.993 02:16:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:33.993 02:16:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:33.993 02:16:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:33.993 02:16:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:33.993 02:16:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:33.993 02:16:24 
event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:33.993 02:16:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:33.993 02:16:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:33.993 02:16:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:33.993 02:16:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:33.993 02:16:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:34.251 02:16:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:34.251 02:16:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:34.251 02:16:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:34.251 02:16:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:34.251 02:16:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:34.251 02:16:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:34.251 02:16:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:34.251 02:16:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:34.251 02:16:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:34.251 02:16:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:34.251 02:16:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:34.251 02:16:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:34.251 02:16:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:34.509 02:16:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:34.767 [2024-07-11 02:16:24.988344] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:34.767 [2024-07-11 02:16:25.039088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:34.767 [2024-07-11 02:16:25.039095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.767 [2024-07-11 02:16:25.091035] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:34.767 [2024-07-11 02:16:25.091081] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:37.386 02:16:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:37.386 02:16:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:37.386 spdk_app_start Round 1 00:10:37.386 02:16:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1861130 /var/tmp/spdk-nbd.sock 00:10:37.386 02:16:27 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1861130 ']' 00:10:37.386 02:16:27 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:37.386 02:16:27 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:37.386 02:16:27 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:37.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
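The nbd_get_count helper traced above is a small RPC-plus-jq pipeline: ask the nbd target which devices it exports, project out the device paths, and count them. A minimal sketch of that idiom in the same shell style, reusing the RPCs and socket path the trace itself invokes (the standalone framing is an assumption; in the suite this logic lives in bdev/nbd_common.sh):

    rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    rpc_server=/var/tmp/spdk-nbd.sock
    # nbd_get_disks returns a JSON array of {nbd_device, bdev_name} pairs.
    nbd_disks_json=$("$rpc_py" -s "$rpc_server" nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    # grep -c exits non-zero when it counts zero matches, hence the
    # trailing "true" the trace shows after counting an empty list.
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)

After teardown the same pipeline must report 0, which is exactly the '[' 0 -ne 0 ']' check visible in the trace before spdk_kill_instance SIGTERM shuts the app down for the next round.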
00:10:37.386 02:16:27 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:37.386 02:16:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:37.645 02:16:28 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:37.645 02:16:28 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:10:37.645 02:16:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:37.904 Malloc0 00:10:37.904 02:16:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:38.163 Malloc1 00:10:38.163 02:16:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:38.163 02:16:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:38.163 02:16:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:38.163 02:16:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:38.163 02:16:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:38.163 02:16:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:38.163 02:16:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:38.163 02:16:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:38.163 02:16:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:38.163 02:16:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:38.163 02:16:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:38.163 02:16:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:38.163 02:16:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:38.163 02:16:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:38.163 02:16:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:38.163 02:16:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:38.423 /dev/nbd0 00:10:38.423 02:16:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:38.423 02:16:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:38.423 02:16:28 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:10:38.423 02:16:28 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:38.423 02:16:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:38.423 02:16:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:38.423 02:16:28 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:10:38.423 02:16:28 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:38.423 02:16:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:38.423 02:16:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:38.423 02:16:28 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 
count=1 iflag=direct 00:10:38.423 1+0 records in 00:10:38.423 1+0 records out 00:10:38.423 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223395 s, 18.3 MB/s 00:10:38.423 02:16:28 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:10:38.423 02:16:28 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:38.423 02:16:28 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:10:38.423 02:16:28 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:38.423 02:16:28 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:38.423 02:16:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:38.423 02:16:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:38.423 02:16:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:38.682 /dev/nbd1 00:10:38.682 02:16:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:38.682 02:16:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:38.682 02:16:28 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:10:38.682 02:16:28 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:38.682 02:16:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:38.682 02:16:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:38.682 02:16:28 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:10:38.682 02:16:28 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:38.682 02:16:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:38.682 02:16:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:38.682 02:16:28 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:38.682 1+0 records in 00:10:38.682 1+0 records out 00:10:38.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249515 s, 16.4 MB/s 00:10:38.682 02:16:28 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:10:38.682 02:16:28 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:38.682 02:16:28 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:10:38.682 02:16:28 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:38.682 02:16:28 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:38.682 02:16:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:38.682 02:16:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:38.682 02:16:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:38.682 02:16:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:38.682 02:16:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 
00:10:38.941 { 00:10:38.941 "nbd_device": "/dev/nbd0", 00:10:38.941 "bdev_name": "Malloc0" 00:10:38.941 }, 00:10:38.941 { 00:10:38.941 "nbd_device": "/dev/nbd1", 00:10:38.941 "bdev_name": "Malloc1" 00:10:38.941 } 00:10:38.941 ]' 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:38.941 { 00:10:38.941 "nbd_device": "/dev/nbd0", 00:10:38.941 "bdev_name": "Malloc0" 00:10:38.941 }, 00:10:38.941 { 00:10:38.941 "nbd_device": "/dev/nbd1", 00:10:38.941 "bdev_name": "Malloc1" 00:10:38.941 } 00:10:38.941 ]' 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:38.941 /dev/nbd1' 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:38.941 /dev/nbd1' 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:38.941 256+0 records in 00:10:38.941 256+0 records out 00:10:38.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00508082 s, 206 MB/s 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:38.941 256+0 records in 00:10:38.941 256+0 records out 00:10:38.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029785 s, 35.2 MB/s 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:38.941 256+0 records in 00:10:38.941 256+0 records out 00:10:38.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029606 s, 35.4 MB/s 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:10:38.941 02:16:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:10:39.201 02:16:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:39.201 02:16:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:39.201 02:16:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:39.201 02:16:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:39.201 02:16:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:39.201 02:16:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:39.201 02:16:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:39.460 02:16:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:39.460 02:16:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:39.460 02:16:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:39.460 02:16:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:39.460 02:16:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:39.460 02:16:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:39.460 02:16:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:39.460 02:16:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:39.460 02:16:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:39.460 02:16:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:39.719 02:16:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:39.719 02:16:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:39.719 02:16:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:39.719 02:16:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:39.719 02:16:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:39.719 02:16:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:39.719 02:16:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:39.719 02:16:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:39.719 02:16:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:39.719 02:16:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:10:39.719 02:16:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:39.978 02:16:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:39.978 02:16:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:39.978 02:16:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:39.978 02:16:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:39.978 02:16:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:39.978 02:16:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:39.978 02:16:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:39.978 02:16:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:39.978 02:16:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:39.978 02:16:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:39.978 02:16:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:39.978 02:16:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:39.978 02:16:30 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:40.237 02:16:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:40.496 [2024-07-11 02:16:30.746849] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:40.496 [2024-07-11 02:16:30.798652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.496 [2024-07-11 02:16:30.798657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.496 [2024-07-11 02:16:30.851984] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:40.496 [2024-07-11 02:16:30.852030] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:43.782 02:16:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:43.782 02:16:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:43.782 spdk_app_start Round 2 00:10:43.782 02:16:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1861130 /var/tmp/spdk-nbd.sock 00:10:43.782 02:16:33 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1861130 ']' 00:10:43.782 02:16:33 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:43.782 02:16:33 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:43.782 02:16:33 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:43.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
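The waitfornbd sequence that follows each nbd_start_disk above is how the scripts confirm a freshly exported device is actually usable: poll /proc/partitions until the kernel lists the device, then perform one direct-I/O read against it. A condensed sketch, with the 20-try bound and the dd/stat/rm tail taken from the trace, the scratch path shortened for readability, and the per-poll delay assumed, since the xtrace output does not show it:

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # Visible in /proc/partitions means the kernel created the device.
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1  # assumed poll interval; not visible in the trace
        done
        # One O_DIRECT read proves the device serves I/O end to end.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        # A zero-byte read would mean the device is not serving I/O yet.
        [ "$size" != 0 ]
    }

The trace shows this running once per device (nbd0, then nbd1) in every round, reading 4096 bytes each time.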
00:10:43.782 02:16:33 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:43.782 02:16:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:43.782 02:16:33 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:43.782 02:16:33 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:10:43.782 02:16:33 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:43.782 Malloc0 00:10:43.782 02:16:34 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:44.042 Malloc1 00:10:44.042 02:16:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:44.042 02:16:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:44.042 02:16:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:44.042 02:16:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:44.043 02:16:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:44.043 02:16:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:44.043 02:16:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:44.043 02:16:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:44.043 02:16:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:44.043 02:16:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:44.043 02:16:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:44.043 02:16:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:44.043 02:16:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:44.043 02:16:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:44.043 02:16:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:44.043 02:16:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:44.301 /dev/nbd0 00:10:44.301 02:16:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:44.301 02:16:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:44.301 02:16:34 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:10:44.301 02:16:34 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:44.301 02:16:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:44.301 02:16:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:44.301 02:16:34 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:10:44.301 02:16:34 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:44.301 02:16:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:44.301 02:16:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:44.301 02:16:34 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 
count=1 iflag=direct 00:10:44.301 1+0 records in 00:10:44.301 1+0 records out 00:10:44.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002508 s, 16.3 MB/s 00:10:44.301 02:16:34 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:10:44.301 02:16:34 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:44.301 02:16:34 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:10:44.301 02:16:34 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:44.301 02:16:34 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:44.301 02:16:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:44.301 02:16:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:44.301 02:16:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:44.560 /dev/nbd1 00:10:44.560 02:16:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:44.560 02:16:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:44.560 02:16:34 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:10:44.560 02:16:34 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:44.560 02:16:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:44.560 02:16:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:44.560 02:16:34 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:10:44.560 02:16:34 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:44.560 02:16:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:44.560 02:16:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:44.560 02:16:34 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:44.560 1+0 records in 00:10:44.560 1+0 records out 00:10:44.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259917 s, 15.8 MB/s 00:10:44.560 02:16:34 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:10:44.560 02:16:34 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:44.560 02:16:34 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:10:44.560 02:16:34 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:44.560 02:16:34 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:44.560 02:16:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:44.560 02:16:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:44.560 02:16:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:44.560 02:16:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:44.560 02:16:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:44.819 02:16:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:44.819 
{ 00:10:44.819 "nbd_device": "/dev/nbd0", 00:10:44.819 "bdev_name": "Malloc0" 00:10:44.819 }, 00:10:44.819 { 00:10:44.819 "nbd_device": "/dev/nbd1", 00:10:44.819 "bdev_name": "Malloc1" 00:10:44.819 } 00:10:44.819 ]' 00:10:44.819 02:16:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:44.819 { 00:10:44.819 "nbd_device": "/dev/nbd0", 00:10:44.819 "bdev_name": "Malloc0" 00:10:44.819 }, 00:10:44.819 { 00:10:44.819 "nbd_device": "/dev/nbd1", 00:10:44.819 "bdev_name": "Malloc1" 00:10:44.819 } 00:10:44.819 ]' 00:10:44.819 02:16:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:44.819 02:16:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:44.819 /dev/nbd1' 00:10:44.819 02:16:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:44.819 /dev/nbd1' 00:10:44.819 02:16:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:44.819 02:16:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:44.819 02:16:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:44.819 02:16:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:44.819 02:16:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:44.819 02:16:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:44.819 02:16:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:44.819 02:16:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:44.819 02:16:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:44.819 02:16:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:10:44.819 02:16:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:44.819 02:16:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:44.819 256+0 records in 00:10:44.819 256+0 records out 00:10:44.819 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103496 s, 101 MB/s 00:10:44.819 02:16:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:44.819 02:16:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:44.819 256+0 records in 00:10:44.819 256+0 records out 00:10:44.819 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0300229 s, 34.9 MB/s 00:10:44.819 02:16:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:44.819 02:16:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:45.079 256+0 records in 00:10:45.079 256+0 records out 00:10:45.079 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.031445 s, 33.3 MB/s 00:10:45.079 02:16:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:45.079 02:16:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:45.079 02:16:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:45.079 02:16:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:45.079 02:16:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:10:45.079 02:16:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:45.079 02:16:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:45.079 02:16:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:45.079 02:16:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:10:45.079 02:16:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:45.079 02:16:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:10:45.079 02:16:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:10:45.079 02:16:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:45.079 02:16:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:45.079 02:16:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:45.079 02:16:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:45.079 02:16:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:45.079 02:16:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:45.079 02:16:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:45.338 02:16:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:45.338 02:16:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:45.338 02:16:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:45.338 02:16:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:45.338 02:16:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:45.338 02:16:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:45.338 02:16:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:45.338 02:16:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:45.338 02:16:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:45.338 02:16:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:45.596 02:16:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:45.596 02:16:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:45.596 02:16:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:45.596 02:16:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:45.596 02:16:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:45.596 02:16:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:45.596 02:16:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:45.596 02:16:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:45.596 02:16:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:45.596 02:16:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:10:45.596 02:16:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:45.854 02:16:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:45.854 02:16:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:45.854 02:16:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:45.854 02:16:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:45.854 02:16:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:45.854 02:16:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:45.854 02:16:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:45.854 02:16:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:45.854 02:16:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:45.854 02:16:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:45.854 02:16:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:45.854 02:16:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:45.854 02:16:36 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:46.112 02:16:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:46.371 [2024-07-11 02:16:36.661424] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:46.371 [2024-07-11 02:16:36.712230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.371 [2024-07-11 02:16:36.712237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.371 [2024-07-11 02:16:36.765195] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:46.371 [2024-07-11 02:16:36.765242] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:49.658 02:16:39 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1861130 /var/tmp/spdk-nbd.sock 00:10:49.658 02:16:39 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1861130 ']' 00:10:49.658 02:16:39 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:49.658 02:16:39 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:49.658 02:16:39 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:49.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
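Every round's data check above follows the same write-then-verify shape: fill a 1 MiB scratch file from /dev/urandom, copy it onto each nbd device with O_DIRECT writes, then cmp each device against the file byte for byte. A compact sketch of the cycle, with the device list and sizes copied from the trace and the scratch path shortened for readability:

    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=/tmp/nbdrandtest
    # 256 blocks of 4 KiB = 1 MiB of random reference data.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in "${nbd_list[@]}"; do
        # cmp -b reports differing bytes and exits non-zero on mismatch,
        # failing the round; -n 1M limits the compare to the written span.
        cmp -b -n 1M "$tmp_file" "$nbd"
    done
    rm "$tmp_file"

The dd throughput figures in the trace (roughly 33-35 MB/s) come from these 1 MiB direct writes onto the malloc-backed nbd devices.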
00:10:49.658 02:16:39 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:49.658 02:16:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:49.658 02:16:39 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:49.658 02:16:39 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:10:49.658 02:16:39 event.app_repeat -- event/event.sh@39 -- # killprocess 1861130 00:10:49.658 02:16:39 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1861130 ']' 00:10:49.658 02:16:39 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1861130 00:10:49.658 02:16:39 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:10:49.658 02:16:39 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:49.658 02:16:39 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1861130 00:10:49.658 02:16:39 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:49.658 02:16:39 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:49.659 02:16:39 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1861130' 00:10:49.659 killing process with pid 1861130 00:10:49.659 02:16:39 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1861130 00:10:49.659 02:16:39 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1861130 00:10:49.659 spdk_app_start is called in Round 0. 00:10:49.659 Shutdown signal received, stop current app iteration 00:10:49.659 Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 reinitialization... 00:10:49.659 spdk_app_start is called in Round 1. 00:10:49.659 Shutdown signal received, stop current app iteration 00:10:49.659 Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 reinitialization... 00:10:49.659 spdk_app_start is called in Round 2. 00:10:49.659 Shutdown signal received, stop current app iteration 00:10:49.659 Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 reinitialization... 00:10:49.659 spdk_app_start is called in Round 3. 
00:10:49.659 Shutdown signal received, stop current app iteration 00:10:49.659 02:16:39 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:49.659 02:16:39 event.app_repeat -- event/event.sh@42 -- # return 0 00:10:49.659 00:10:49.659 real 0m18.047s 00:10:49.659 user 0m39.133s 00:10:49.659 sys 0m3.912s 00:10:49.659 02:16:39 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:49.659 02:16:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:49.659 ************************************ 00:10:49.659 END TEST app_repeat 00:10:49.659 ************************************ 00:10:49.659 02:16:39 event -- common/autotest_common.sh@1142 -- # return 0 00:10:49.659 02:16:39 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:49.659 00:10:49.659 real 0m27.771s 00:10:49.659 user 0m56.241s 00:10:49.659 sys 0m5.425s 00:10:49.659 02:16:39 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:49.659 02:16:39 event -- common/autotest_common.sh@10 -- # set +x 00:10:49.659 ************************************ 00:10:49.659 END TEST event 00:10:49.659 ************************************ 00:10:49.659 02:16:39 -- common/autotest_common.sh@1142 -- # return 0 00:10:49.659 02:16:39 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:10:49.659 02:16:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:49.659 02:16:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:49.659 02:16:39 -- common/autotest_common.sh@10 -- # set +x 00:10:49.659 ************************************ 00:10:49.659 START TEST thread 00:10:49.659 ************************************ 00:10:49.659 02:16:40 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:10:49.917 * Looking for test storage... 00:10:49.917 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread 00:10:49.917 02:16:40 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:49.917 02:16:40 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:10:49.917 02:16:40 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:49.917 02:16:40 thread -- common/autotest_common.sh@10 -- # set +x 00:10:49.917 ************************************ 00:10:49.917 START TEST thread_poller_perf 00:10:49.917 ************************************ 00:10:49.917 02:16:40 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:49.917 [2024-07-11 02:16:40.190568] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:10:49.917 [2024-07-11 02:16:40.190634] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1863712 ] 00:10:49.917 [2024-07-11 02:16:40.328510] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.176 [2024-07-11 02:16:40.380835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.176 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:10:51.113 ====================================== 00:10:51.113 busy:2315214946 (cyc) 00:10:51.113 total_run_count: 267000 00:10:51.113 tsc_hz: 2300000000 (cyc) 00:10:51.113 ====================================== 00:10:51.113 poller_cost: 8671 (cyc), 3770 (nsec) 00:10:51.113 00:10:51.113 real 0m1.305s 00:10:51.113 user 0m1.149s 00:10:51.113 sys 0m0.150s 00:10:51.113 02:16:41 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:51.113 02:16:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:51.113 ************************************ 00:10:51.113 END TEST thread_poller_perf 00:10:51.113 ************************************ 00:10:51.113 02:16:41 thread -- common/autotest_common.sh@1142 -- # return 0 00:10:51.113 02:16:41 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:51.113 02:16:41 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:10:51.113 02:16:41 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:51.113 02:16:41 thread -- common/autotest_common.sh@10 -- # set +x 00:10:51.372 ************************************ 00:10:51.372 START TEST thread_poller_perf 00:10:51.372 ************************************ 00:10:51.372 02:16:41 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:51.372 [2024-07-11 02:16:41.582753] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:10:51.372 [2024-07-11 02:16:41.582824] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1863933 ] 00:10:51.372 [2024-07-11 02:16:41.717022] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.372 [2024-07-11 02:16:41.764106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.372 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:10:52.754 ====================================== 00:10:52.754 busy:2302578520 (cyc) 00:10:52.754 total_run_count: 3500000 00:10:52.754 tsc_hz: 2300000000 (cyc) 00:10:52.754 ====================================== 00:10:52.754 poller_cost: 657 (cyc), 285 (nsec) 00:10:52.754 00:10:52.754 real 0m1.285s 00:10:52.754 user 0m1.146s 00:10:52.754 sys 0m0.134s 00:10:52.754 02:16:42 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:52.754 02:16:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:52.754 ************************************ 00:10:52.754 END TEST thread_poller_perf 00:10:52.754 ************************************ 00:10:52.754 02:16:42 thread -- common/autotest_common.sh@1142 -- # return 0 00:10:52.754 02:16:42 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:52.754 00:10:52.754 real 0m2.872s 00:10:52.754 user 0m2.397s 00:10:52.754 sys 0m0.488s 00:10:52.754 02:16:42 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:52.754 02:16:42 thread -- common/autotest_common.sh@10 -- # set +x 00:10:52.754 ************************************ 00:10:52.754 END TEST thread 00:10:52.754 ************************************ 00:10:52.754 02:16:42 -- common/autotest_common.sh@1142 -- # return 0 00:10:52.754 02:16:42 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh 00:10:52.754 02:16:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:52.754 02:16:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.754 02:16:42 -- common/autotest_common.sh@10 -- # set +x 00:10:52.754 ************************************ 00:10:52.754 START TEST accel 00:10:52.754 ************************************ 00:10:52.754 02:16:42 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh 00:10:52.754 * Looking for test storage... 00:10:52.754 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:10:52.754 02:16:43 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:10:52.754 02:16:43 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:10:52.754 02:16:43 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:52.754 02:16:43 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1864246 00:10:52.754 02:16:43 accel -- accel/accel.sh@63 -- # waitforlisten 1864246 00:10:52.754 02:16:43 accel -- common/autotest_common.sh@829 -- # '[' -z 1864246 ']' 00:10:52.754 02:16:43 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.754 02:16:43 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:52.754 02:16:43 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:10:52.754 02:16:43 accel -- accel/accel.sh@61 -- # build_accel_config 00:10:52.754 02:16:43 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
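A quick cross-check of the two poller_perf result blocks above: poller_cost in cycles is busy divided by total_run_count, and the nanosecond figure divides that by tsc_hz expressed in cycles per nanosecond (2300000000 cyc/s = 2.3 cyc/ns):

    run with 1 us period: 2315214946 / 267000  = 8671 cyc ; 8671 / 2.3 = 3770 nsec
    run with 0 us period: 2302578520 / 3500000 = 657 cyc  ; 657 / 2.3  = 285 nsec

Both match the printed values once fractions are truncated, and the roughly 13x gap between the two costs is consistent with timed pollers paying extra per-invocation bookkeeping compared to pollers run back to back.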
00:10:52.754 02:16:43 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:52.754 02:16:43 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:52.754 02:16:43 accel -- common/autotest_common.sh@10 -- # set +x 00:10:52.754 02:16:43 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:52.754 02:16:43 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:52.754 02:16:43 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:52.754 02:16:43 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:52.754 02:16:43 accel -- accel/accel.sh@40 -- # local IFS=, 00:10:52.754 02:16:43 accel -- accel/accel.sh@41 -- # jq -r . 00:10:52.754 [2024-07-11 02:16:43.155443] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:10:52.754 [2024-07-11 02:16:43.155517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1864246 ] 00:10:53.014 [2024-07-11 02:16:43.293566] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.014 [2024-07-11 02:16:43.341323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.973 02:16:44 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:53.973 02:16:44 accel -- common/autotest_common.sh@862 -- # return 0 00:10:53.973 02:16:44 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:10:53.973 02:16:44 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:10:53.973 02:16:44 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:10:53.973 02:16:44 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:10:53.973 02:16:44 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:10:53.973 02:16:44 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:10:53.973 02:16:44 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:10:53.973 02:16:44 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.973 02:16:44 accel -- common/autotest_common.sh@10 -- # set +x 00:10:53.973 02:16:44 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.973 02:16:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:53.973 02:16:44 accel -- accel/accel.sh@72 -- # IFS== 00:10:53.973 02:16:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:53.973 02:16:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:53.973 02:16:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:53.973 02:16:44 accel -- accel/accel.sh@72 -- # IFS== 00:10:53.973 02:16:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:53.973 02:16:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:53.973 02:16:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:53.973 02:16:44 accel -- accel/accel.sh@72 -- # IFS== 00:10:53.973 02:16:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:53.973 02:16:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:53.973 02:16:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:53.973 02:16:44 accel -- accel/accel.sh@72 -- # IFS== 00:10:53.973 02:16:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:53.973 02:16:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:53.973 02:16:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:53.973 02:16:44 accel -- accel/accel.sh@72 -- # IFS== 00:10:53.973 02:16:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:53.973 02:16:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:53.973 02:16:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:53.973 02:16:44 accel -- accel/accel.sh@72 -- # IFS== 00:10:53.973 02:16:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:53.973 02:16:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:53.974 02:16:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:53.974 02:16:44 accel -- accel/accel.sh@72 -- # IFS== 00:10:53.974 02:16:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:53.974 02:16:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:53.974 02:16:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:53.974 02:16:44 accel -- accel/accel.sh@72 -- # IFS== 00:10:53.974 02:16:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:53.974 02:16:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:53.974 02:16:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:53.974 02:16:44 accel -- accel/accel.sh@72 -- # IFS== 00:10:53.974 02:16:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:53.974 02:16:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:53.974 02:16:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:53.974 02:16:44 accel -- accel/accel.sh@72 -- # IFS== 00:10:53.974 02:16:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:53.974 02:16:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:53.974 02:16:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:53.974 02:16:44 accel -- accel/accel.sh@72 -- # IFS== 
00:10:53.974 02:16:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:53.974 02:16:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:53.974 02:16:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:53.974 02:16:44 accel -- accel/accel.sh@72 -- # IFS== 00:10:53.974 02:16:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:53.974 02:16:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:53.974 02:16:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:53.974 02:16:44 accel -- accel/accel.sh@72 -- # IFS== 00:10:53.974 02:16:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:53.974 02:16:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:53.974 02:16:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:53.974 02:16:44 accel -- accel/accel.sh@72 -- # IFS== 00:10:53.974 02:16:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:53.974 02:16:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:53.974 02:16:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:53.974 02:16:44 accel -- accel/accel.sh@72 -- # IFS== 00:10:53.974 02:16:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:53.974 02:16:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:53.974 02:16:44 accel -- accel/accel.sh@75 -- # killprocess 1864246 00:10:53.974 02:16:44 accel -- common/autotest_common.sh@948 -- # '[' -z 1864246 ']' 00:10:53.974 02:16:44 accel -- common/autotest_common.sh@952 -- # kill -0 1864246 00:10:53.974 02:16:44 accel -- common/autotest_common.sh@953 -- # uname 00:10:53.974 02:16:44 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:53.974 02:16:44 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1864246 00:10:53.974 02:16:44 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:53.974 02:16:44 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:53.974 02:16:44 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1864246' 00:10:53.974 killing process with pid 1864246 00:10:53.974 02:16:44 accel -- common/autotest_common.sh@967 -- # kill 1864246 00:10:53.974 02:16:44 accel -- common/autotest_common.sh@972 -- # wait 1864246 00:10:54.233 02:16:44 accel -- accel/accel.sh@76 -- # trap - ERR 00:10:54.233 02:16:44 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:10:54.233 02:16:44 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:54.233 02:16:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:54.233 02:16:44 accel -- common/autotest_common.sh@10 -- # set +x 00:10:54.233 02:16:44 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:10:54.233 02:16:44 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:10:54.233 02:16:44 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:10:54.233 02:16:44 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:54.233 02:16:44 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:54.233 02:16:44 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:54.233 02:16:44 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:54.233 02:16:44 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:54.233 02:16:44 accel.accel_help -- accel/accel.sh@40 -- # local 
IFS=, 00:10:54.233 02:16:44 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:10:54.233 02:16:44 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:54.233 02:16:44 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:10:54.233 02:16:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:54.233 02:16:44 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:10:54.233 02:16:44 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:54.233 02:16:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:54.233 02:16:44 accel -- common/autotest_common.sh@10 -- # set +x 00:10:54.493 ************************************ 00:10:54.493 START TEST accel_missing_filename 00:10:54.493 ************************************ 00:10:54.493 02:16:44 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:10:54.493 02:16:44 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:10:54.493 02:16:44 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:10:54.493 02:16:44 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:10:54.493 02:16:44 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:54.493 02:16:44 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:10:54.493 02:16:44 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:54.493 02:16:44 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:10:54.493 02:16:44 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:10:54.493 02:16:44 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:10:54.493 02:16:44 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:54.493 02:16:44 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:54.493 02:16:44 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:54.493 02:16:44 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:54.493 02:16:44 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:54.493 02:16:44 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:10:54.493 02:16:44 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:10:54.493 [2024-07-11 02:16:44.712797] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:10:54.493 [2024-07-11 02:16:44.712859] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1864466 ] 00:10:54.493 [2024-07-11 02:16:44.847778] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.493 [2024-07-11 02:16:44.901898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.753 [2024-07-11 02:16:44.970200] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:54.753 [2024-07-11 02:16:45.043669] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:10:54.753 A filename is required. 
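The exit-status arithmetic traced next is the harness's expected-failure check: accel_perf aborted because compress needs an input file, and the NOT wrapper turns that failure into a passing test. A minimal sketch of the pattern with simplified names (assumption: the real helpers live in common/autotest_common.sh and carry extra bookkeeping not shown here):

NOT() {
    local es=0
    "$@" || es=$?                         # run the wrapped command, capture its exit status
    (( es > 128 )) && es=$(( es - 128 ))  # fold 128+N signal-style codes back down
    (( es != 0 )) && es=1                 # collapse any remaining failure to es=1
    (( !es == 0 ))                        # succeed exactly when the command failed
}
NOT accel_perf -t 1 -w compress           # passes: compress without -l <file> must fail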
00:10:54.753 02:16:45 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:10:54.753 02:16:45 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:54.753 02:16:45 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:10:54.753 02:16:45 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:10:54.753 02:16:45 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:10:54.753 02:16:45 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:54.753 00:10:54.753 real 0m0.446s 00:10:54.753 user 0m0.280s 00:10:54.753 sys 0m0.200s 00:10:54.753 02:16:45 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:54.753 02:16:45 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:10:54.753 ************************************ 00:10:54.753 END TEST accel_missing_filename 00:10:54.753 ************************************ 00:10:54.753 02:16:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:54.753 02:16:45 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:54.753 02:16:45 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:10:54.753 02:16:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:54.753 02:16:45 accel -- common/autotest_common.sh@10 -- # set +x 00:10:55.013 ************************************ 00:10:55.013 START TEST accel_compress_verify 00:10:55.013 ************************************ 00:10:55.013 02:16:45 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:55.013 02:16:45 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:10:55.013 02:16:45 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:55.013 02:16:45 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:10:55.013 02:16:45 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:55.013 02:16:45 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:10:55.013 02:16:45 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:55.013 02:16:45 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:55.013 02:16:45 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:55.013 02:16:45 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:10:55.013 02:16:45 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:55.013 02:16:45 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:55.013 02:16:45 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:55.013 02:16:45 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:55.013 02:16:45 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:55.013 02:16:45 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:10:55.013 02:16:45 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:10:55.013 [2024-07-11 02:16:45.239396] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:10:55.013 [2024-07-11 02:16:45.239463] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1864487 ] 00:10:55.013 [2024-07-11 02:16:45.376611] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.013 [2024-07-11 02:16:45.431139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.272 [2024-07-11 02:16:45.503814] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:55.272 [2024-07-11 02:16:45.577924] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:10:55.272 00:10:55.272 Compression does not support the verify option, aborting. 00:10:55.272 02:16:45 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:10:55.272 02:16:45 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:55.272 02:16:45 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:10:55.272 02:16:45 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:10:55.272 02:16:45 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:10:55.272 02:16:45 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:55.272 00:10:55.272 real 0m0.453s 00:10:55.272 user 0m0.264s 00:10:55.272 sys 0m0.213s 00:10:55.272 02:16:45 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:55.272 02:16:45 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:10:55.272 ************************************ 00:10:55.272 END TEST accel_compress_verify 00:10:55.272 ************************************ 00:10:55.533 02:16:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:55.533 02:16:45 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:10:55.533 02:16:45 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:55.533 02:16:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.533 02:16:45 accel -- common/autotest_common.sh@10 -- # set +x 00:10:55.533 ************************************ 00:10:55.533 START TEST accel_wrong_workload 00:10:55.533 ************************************ 00:10:55.533 02:16:45 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:10:55.533 02:16:45 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:10:55.533 02:16:45 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:10:55.533 02:16:45 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:10:55.533 02:16:45 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:55.533 02:16:45 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:10:55.533 02:16:45 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:10:55.533 02:16:45 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:10:55.533 02:16:45 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:10:55.533 02:16:45 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:10:55.533 02:16:45 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:55.533 02:16:45 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:55.533 02:16:45 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:55.533 02:16:45 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:55.533 02:16:45 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:55.533 02:16:45 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:10:55.533 02:16:45 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:10:55.533 Unsupported workload type: foobar 00:10:55.533 [2024-07-11 02:16:45.774473] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:10:55.533 accel_perf options: 00:10:55.533 [-h help message] 00:10:55.533 [-q queue depth per core] 00:10:55.533 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:55.533 [-T number of threads per core 00:10:55.533 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:55.533 [-t time in seconds] 00:10:55.533 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:55.533 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:10:55.533 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:55.533 [-l for compress/decompress workloads, name of uncompressed input file 00:10:55.533 [-S for crc32c workload, use this seed value (default 0) 00:10:55.533 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:55.533 [-f for fill workload, use this BYTE value (default 255) 00:10:55.533 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:55.533 [-y verify result if this switch is on] 00:10:55.533 [-a tasks to allocate per core (default: same value as -q)] 00:10:55.533 Can be used to spread operations across a wider range of memory. 
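The option list above is printed by accel_perf itself whenever argument parsing fails. For orientation, here is a sketch of representative invocations exercised elsewhere in this run (the binary path comes from this workspace; the values are taken from the tests in this log, not an exhaustive matrix):

ACCEL_PERF=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf
$ACCEL_PERF -t 1 -w crc32c -S 32 -y             # crc32c for 1 second, seed 32, verify results
$ACCEL_PERF -t 1 -w copy -y                     # software copy path, verify results
$ACCEL_PERF -t 1 -w fill -f 128 -q 64 -a 64 -y  # fill with byte 128, queue depth 64, 64 tasks per core
$ACCEL_PERF -t 1 -w xor -y -x 2                 # xor needs at least 2 source buffers, so -x -1 is rejected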
00:10:55.533 02:16:45 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:10:55.533 02:16:45 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:55.533 02:16:45 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:55.533 02:16:45 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:55.533 00:10:55.533 real 0m0.042s 00:10:55.533 user 0m0.024s 00:10:55.533 sys 0m0.018s 00:10:55.533 02:16:45 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:55.533 02:16:45 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:10:55.533 ************************************ 00:10:55.533 END TEST accel_wrong_workload 00:10:55.533 ************************************ 00:10:55.533 Error: writing output failed: Broken pipe 00:10:55.533 02:16:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:55.533 02:16:45 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:10:55.533 02:16:45 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:10:55.533 02:16:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.533 02:16:45 accel -- common/autotest_common.sh@10 -- # set +x 00:10:55.533 ************************************ 00:10:55.533 START TEST accel_negative_buffers 00:10:55.533 ************************************ 00:10:55.533 02:16:45 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:10:55.533 02:16:45 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:10:55.533 02:16:45 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:10:55.533 02:16:45 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:10:55.533 02:16:45 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:55.533 02:16:45 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:10:55.533 02:16:45 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:55.533 02:16:45 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:10:55.533 02:16:45 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:10:55.533 02:16:45 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:10:55.533 02:16:45 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:55.533 02:16:45 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:55.533 02:16:45 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:55.533 02:16:45 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:55.533 02:16:45 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:55.533 02:16:45 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:10:55.533 02:16:45 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:10:55.533 -x option must be non-negative. 
00:10:55.533 [2024-07-11 02:16:45.897360] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:10:55.533 accel_perf options: 00:10:55.533 [-h help message] 00:10:55.533 [-q queue depth per core] 00:10:55.533 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:55.533 [-T number of threads per core 00:10:55.533 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:55.533 [-t time in seconds] 00:10:55.533 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:55.533 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:10:55.533 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:55.533 [-l for compress/decompress workloads, name of uncompressed input file 00:10:55.533 [-S for crc32c workload, use this seed value (default 0) 00:10:55.533 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:55.533 [-f for fill workload, use this BYTE value (default 255) 00:10:55.533 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:55.533 [-y verify result if this switch is on] 00:10:55.533 [-a tasks to allocate per core (default: same value as -q)] 00:10:55.533 Can be used to spread operations across a wider range of memory. 00:10:55.533 02:16:45 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:10:55.533 02:16:45 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:55.533 02:16:45 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:55.533 02:16:45 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:55.533 00:10:55.533 real 0m0.044s 00:10:55.533 user 0m0.030s 00:10:55.533 sys 0m0.014s 00:10:55.533 02:16:45 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:55.533 02:16:45 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:10:55.533 ************************************ 00:10:55.533 END TEST accel_negative_buffers 00:10:55.533 ************************************ 00:10:55.533 Error: writing output failed: Broken pipe 00:10:55.533 02:16:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:55.533 02:16:45 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:10:55.533 02:16:45 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:55.533 02:16:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.533 02:16:45 accel -- common/autotest_common.sh@10 -- # set +x 00:10:55.794 ************************************ 00:10:55.794 START TEST accel_crc32c 00:10:55.794 ************************************ 00:10:55.794 02:16:45 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:10:55.794 02:16:45 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:10:55.794 02:16:45 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:10:55.794 02:16:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:55.794 02:16:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:55.794 02:16:45 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:55.794 02:16:45 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:55.794 02:16:45 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:10:55.794 02:16:45 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:55.794 02:16:45 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:55.794 02:16:45 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:55.794 02:16:45 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:55.794 02:16:45 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:55.794 02:16:45 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:10:55.794 02:16:45 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:10:55.794 [2024-07-11 02:16:46.023423] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:10:55.794 [2024-07-11 02:16:46.023485] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1864719 ] 00:10:55.794 [2024-07-11 02:16:46.157515] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.794 [2024-07-11 02:16:46.209112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:56.054 02:16:46 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:10:56.054 02:16:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:56.055 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:56.055 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:56.055 02:16:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:10:56.055 02:16:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:56.055 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:56.055 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:56.055 02:16:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:56.055 02:16:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:56.055 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:56.055 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:56.055 02:16:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:56.055 02:16:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:56.055 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:56.055 02:16:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:56.994 02:16:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:57.254 02:16:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:57.254 02:16:47 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:10:57.254 02:16:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:57.254 02:16:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:57.254 02:16:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:57.254 02:16:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:57.254 02:16:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:57.254 02:16:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:57.254 02:16:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:57.254 02:16:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:57.254 02:16:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:57.254 02:16:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:57.254 02:16:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:57.254 02:16:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:57.254 02:16:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:57.254 02:16:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:57.254 02:16:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:57.254 02:16:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:57.254 02:16:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:57.254 02:16:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:57.254 02:16:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:57.254 02:16:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:57.254 02:16:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:57.254 02:16:47 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:57.254 02:16:47 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:10:57.254 02:16:47 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:57.254 00:10:57.254 real 0m1.436s 00:10:57.254 user 0m1.243s 00:10:57.254 sys 0m0.198s 00:10:57.254 02:16:47 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:57.254 02:16:47 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:10:57.254 ************************************ 00:10:57.254 END TEST accel_crc32c 00:10:57.254 ************************************ 00:10:57.254 02:16:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:57.254 02:16:47 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:10:57.254 02:16:47 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:57.254 02:16:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:57.254 02:16:47 accel -- common/autotest_common.sh@10 -- # set +x 00:10:57.254 ************************************ 00:10:57.254 START TEST accel_crc32c_C2 00:10:57.254 ************************************ 00:10:57.254 02:16:47 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:10:57.254 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:10:57.254 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:10:57.254 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:57.254 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:57.254 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:57.254 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:57.254 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:10:57.254 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:57.254 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:57.254 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:57.254 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:57.254 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:57.254 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:10:57.254 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:10:57.254 [2024-07-11 02:16:47.538951] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:10:57.254 [2024-07-11 02:16:47.539012] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1864910 ] 00:10:57.254 [2024-07-11 02:16:47.671529] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.515 [2024-07-11 02:16:47.724012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:10:57.515 02:16:47 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:57.515 02:16:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # read -r var val 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:58.895 00:10:58.895 real 0m1.435s 00:10:58.895 user 0m1.235s 00:10:58.895 sys 0m0.199s 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:58.895 02:16:48 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:10:58.895 ************************************ 00:10:58.895 END TEST accel_crc32c_C2 00:10:58.895 ************************************ 00:10:58.895 02:16:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:58.895 02:16:48 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:10:58.895 02:16:48 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:58.895 02:16:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.895 02:16:48 accel -- common/autotest_common.sh@10 -- # set +x 00:10:58.895 ************************************ 00:10:58.895 START TEST accel_copy 00:10:58.895 ************************************ 00:10:58.895 02:16:49 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:10:58.895 02:16:49 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:10:58.895 02:16:49 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:10:58.895 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:58.895 
02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:58.895 02:16:49 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:10:58.895 02:16:49 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:58.895 02:16:49 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:10:58.895 02:16:49 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:58.895 02:16:49 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:58.895 02:16:49 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:58.895 02:16:49 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:58.895 02:16:49 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:58.895 02:16:49 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:10:58.895 02:16:49 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:10:58.895 [2024-07-11 02:16:49.070839] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:10:58.895 [2024-07-11 02:16:49.070965] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1865112 ] 00:10:58.895 [2024-07-11 02:16:49.275552] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.155 [2024-07-11 02:16:49.330654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.155 02:16:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:59.155 02:16:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:59.155 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:59.155 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:59.155 02:16:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:59.155 02:16:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:59.155 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:59.155 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:59.155 02:16:49 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:10:59.155 02:16:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:59.155 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:59.155 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:59.155 02:16:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:59.155 02:16:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:59.155 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:59.155 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:59.155 02:16:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:59.155 02:16:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:59.155 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:59.155 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:59.155 02:16:49 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:59.156 02:16:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:00.535 02:16:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:00.535 02:16:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:00.535 02:16:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:00.536 02:16:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:00.536 02:16:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:00.536 02:16:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:00.536 02:16:50 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:00.536 02:16:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:00.536 02:16:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:00.536 02:16:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:00.536 02:16:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:00.536 02:16:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:00.536 02:16:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:00.536 02:16:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:00.536 02:16:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:00.536 02:16:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:00.536 02:16:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:00.536 02:16:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:00.536 02:16:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:00.536 02:16:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:00.536 02:16:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:00.536 02:16:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:00.536 02:16:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:00.536 02:16:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:00.536 02:16:50 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:00.536 02:16:50 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:11:00.536 02:16:50 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:00.536 00:11:00.536 real 0m1.530s 00:11:00.536 user 0m1.270s 00:11:00.536 sys 0m0.256s 00:11:00.536 02:16:50 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:00.536 02:16:50 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:11:00.536 ************************************ 00:11:00.536 END TEST accel_copy 00:11:00.536 ************************************ 00:11:00.536 02:16:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:00.536 02:16:50 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:00.536 02:16:50 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:11:00.536 02:16:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:00.536 02:16:50 accel -- common/autotest_common.sh@10 -- # set +x 00:11:00.536 ************************************ 00:11:00.536 START TEST accel_fill 00:11:00.536 ************************************ 00:11:00.536 02:16:50 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:00.536 02:16:50 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:11:00.536 02:16:50 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:11:00.536 02:16:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:00.536 02:16:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:00.536 02:16:50 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:00.536 02:16:50 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:00.536 02:16:50 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:11:00.536 02:16:50 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:00.536 02:16:50 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 
00:11:00.536 02:16:50 accel.accel_fill -- accel/accel.sh@41 -- # jq -r .
00:11:00.536 [2024-07-11 02:16:50.671266] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
00:11:00.536 [2024-07-11 02:16:50.671328] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1865303 ]
00:11:00.536 [2024-07-11 02:16:50.805512] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:00.536 [2024-07-11 02:16:50.857837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:00.536 02:16:50 accel.accel_fill -- accel/accel.sh@19-23 -- # (option-parsing xtrace elided; values set: val=0x1, accel_opc=fill, val=0x80, val='4096 bytes', accel_module=software, val=64, val=64, val=1, val='1 seconds', val=Yes)
00:11:01.917 02:16:52 accel.accel_fill -- accel/accel.sh@19-21 -- # (post-run option-drain xtrace elided)
00:11:01.917 02:16:52 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:11:01.917 02:16:52 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:11:01.917 02:16:52 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:11:01.917
00:11:01.917 real 0m1.454s
00:11:01.917 user 0m1.239s
00:11:01.917 sys 0m0.203s
00:11:01.917 ************************************
00:11:01.917 END TEST accel_fill
00:11:01.917 ************************************
00:11:01.917 02:16:52 accel -- common/autotest_common.sh@1142 -- # return 0
00:11:01.917 02:16:52 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:11:01.917 02:16:52 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:11:01.917 ************************************
00:11:01.917 START TEST accel_copy_crc32c
00:11:01.917 ************************************
00:11:01.917 02:16:52 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:11:01.917 02:16:52 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:11:01.917 02:16:52 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config (empty accel_json_cfg; device/module checks elided)
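The copy_crc32c workload pairs a buffer copy with a CRC-32C (Castagnoli) checksum, the CRC variant used in iSCSI and NVMe data-integrity protection. Below is a bitwise reference implementation of that polynomial plus an illustrative model of the combined operation, assuming the zero seed the trace shows (val=0); SPDK's software path uses table-driven or SSE4.2 code, not this loop:

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    # Bitwise reference CRC-32C, reflected polynomial 0x82F63B78 (slow but exact).
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1))
    return crc ^ 0xFFFFFFFF

def copy_crc32c(src: bytes, seed: int = 0) -> tuple[bytes, int]:
    # One accel operation: duplicate the buffer and checksum it in the same pass.
    return bytes(src), crc32c(src, seed)

assert crc32c(b"123456789") == 0xE3069283  # published CRC-32C check value
```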
00:11:01.917 02:16:52 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r .
00:11:01.917 [2024-07-11 02:16:52.209124] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
00:11:01.917 [2024-07-11 02:16:52.209189] [ DPDK EAL parameters: (flags identical to the accel_fill run above) --file-prefix=spdk_pid1865509 ]
00:11:02.177 [2024-07-11 02:16:52.347872] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:02.177 [2024-07-11 02:16:52.403420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:02.177 02:16:52 accel.accel_copy_crc32c -- accel/accel.sh@19-23 -- # (option-parsing xtrace elided; values set: val=0x1, accel_opc=copy_crc32c, val=0 [seed], val='4096 bytes', val='4096 bytes', accel_module=software, val=32, val=32, val=1, val='1 seconds', val=Yes)
00:11:03.558 02:16:53 accel.accel_copy_crc32c -- accel/accel.sh@19-21 -- # (post-run option-drain xtrace elided)
00:11:03.558 02:16:53 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:11:03.558 02:16:53 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:11:03.558 02:16:53 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:11:03.558
00:11:03.558 real 0m1.460s
00:11:03.558 user 0m1.254s
00:11:03.558 sys 0m0.200s
00:11:03.558 ************************************
00:11:03.558 END TEST accel_copy_crc32c
00:11:03.558 ************************************
00:11:03.558 02:16:53 accel -- common/autotest_common.sh@1142 -- # return 0
00:11:03.558 02:16:53 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:11:03.558 02:16:53 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:11:03.558 ************************************
00:11:03.558 START TEST accel_copy_crc32c_C2
00:11:03.558 ************************************
00:11:03.558 02:16:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:11:03.558 02:16:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
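The -C 2 variant presumably presents the checksummed data as two source vectors rather than one contiguous buffer (an assumption from the option naming; the extra '8192 bytes' value in the trace below is consistent with it). What makes such a split transparent is that CRC-32C chains: feeding the previous result in as the seed of the next chunk gives the same answer as a one-shot pass. A standalone sketch:

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    # Same reference CRC-32C as the earlier sketch, repeated so this runs alone.
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1))
    return crc ^ 0xFFFFFFFF

chunks = [b"\xaa" * 4096, b"\x55" * 4096]  # two hypothetical 4 KiB vectors
crc = 0
for chunk in chunks:
    crc = crc32c(chunk, crc)               # chain the running CRC as the seed
assert crc == crc32c(b"".join(chunks))     # chunked result == one-shot result
```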
00:11:03.558 02:16:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config (empty accel_json_cfg; device/module checks elided)
00:11:03.558 02:16:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:11:03.558 [2024-07-11 02:16:53.748499] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
00:11:03.558 [2024-07-11 02:16:53.748563] [ DPDK EAL parameters: (flags identical to the accel_fill run above) --file-prefix=spdk_pid1865779 ]
00:11:03.818 [2024-07-11 02:16:53.882906] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:03.818 [2024-07-11 02:16:53.932274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:03.818 02:16:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19-23 -- # (option-parsing xtrace elided; values set: val=0x1, accel_opc=copy_crc32c, val=0 [seed], val='4096 bytes', val='8192 bytes', accel_module=software, val=32, val=32, val=1, val='1 seconds', val=Yes)
00:11:04.757 02:16:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19-21 -- # (post-run option-drain xtrace elided)
00:11:04.757 02:16:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:11:04.757 02:16:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:11:04.757 02:16:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:11:04.757
00:11:04.757 real 0m1.444s
00:11:04.757 user 0m1.250s
00:11:04.757 sys 0m0.186s
00:11:04.757 02:16:55 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:04.757 ************************************
00:11:04.757 END TEST accel_copy_crc32c_C2
00:11:04.757 ************************************
00:11:05.017 02:16:55 accel -- common/autotest_common.sh@1142 -- # return 0
00:11:05.017 02:16:55 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:11:05.017 02:16:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:11:05.017 ************************************
00:11:05.017 START TEST accel_dualcast
00:11:05.017 ************************************
00:11:05.017 02:16:55 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:11:05.017 02:16:55 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:11:05.017 02:16:55 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config (empty accel_json_cfg; device/module checks elided)
00:11:05.017 02:16:55 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r .
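Dualcast, benchmarked next, copies a single source into two destination buffers in one operation, useful when a write must land in two places at once (for example a mirror or a shadow copy). An illustrative model of the semantics, not SPDK code:

```python
# Illustrative "dualcast": one source, two destinations, one operation.
src = bytes(range(256)) * 16               # 4096-byte test pattern
dst1, dst2 = bytearray(4096), bytearray(4096)
dst1[:], dst2[:] = src, src                # software path: two memcpy()s
assert bytes(dst1) == src == bytes(dst2)   # -y style verification
```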
00:11:05.017 [2024-07-11 02:16:55.284403] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
00:11:05.017 [2024-07-11 02:16:55.284531] [ DPDK EAL parameters: (flags identical to the accel_fill run above) --file-prefix=spdk_pid1866056 ]
00:11:05.298 [2024-07-11 02:16:55.495628] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:05.298 [2024-07-11 02:16:55.547821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:05.298 02:16:55 accel.accel_dualcast -- accel/accel.sh@19-23 -- # (option-parsing xtrace elided; values set: val=0x1, accel_opc=dualcast, val='4096 bytes', accel_module=software, val=32, val=32, val=1, val='1 seconds', val=Yes)
00:11:06.679 02:16:56 accel.accel_dualcast -- accel/accel.sh@19-21 -- # (post-run option-drain xtrace elided)
00:11:06.679 02:16:56 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:11:06.679 02:16:56 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:11:06.679 02:16:56 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:11:06.679
00:11:06.679 real 0m1.518s
00:11:06.679 user 0m1.272s
00:11:06.679 sys 0m0.244s
00:11:06.679 ************************************
00:11:06.679 END TEST accel_dualcast
00:11:06.679 ************************************
00:11:06.679 02:16:56 accel -- common/autotest_common.sh@1142 -- # return 0
00:11:06.679 02:16:56 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:11:06.679 02:16:56 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:11:06.679 ************************************
00:11:06.679 START TEST accel_compare
00:11:06.679 ************************************
00:11:06.680 02:16:56 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:11:06.680 02:16:56 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:11:06.680 02:16:56 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config (empty accel_json_cfg; device/module checks elided)
00:11:06.680 02:16:56 accel.accel_compare -- accel/accel.sh@41 -- # jq -r .
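The compare workload checks two equal-sized buffers for equality, the accel-layer analogue of memcmp() in offloadable form. A sketch of the semantics, assuming the memcmp-like 0-on-match convention (illustrative, not SPDK code):

```python
a = bytes(4096)              # 4 KiB of zeroes
b = bytes(4096)
result = 0 if a == b else 1  # 0 means the buffers match
assert result == 0
```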
00:11:06.680 [2024-07-11 02:16:56.882845] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
00:11:06.680 [2024-07-11 02:16:56.882974] [ DPDK EAL parameters: (flags identical to the accel_fill run above) --file-prefix=spdk_pid1866252 ]
00:11:06.940 [2024-07-11 02:16:57.089576] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:06.940 [2024-07-11 02:16:57.144852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:06.940 02:16:57 accel.accel_compare -- accel/accel.sh@19-23 -- # (option-parsing xtrace elided; values set: val=0x1, accel_opc=compare, val='4096 bytes', accel_module=software, val=32, val=32, val=1, val='1 seconds', val=Yes)
00:11:08.321 02:16:58 accel.accel_compare -- accel/accel.sh@19-21 -- # (post-run option-drain xtrace elided)
00:11:08.321 02:16:58 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:11:08.321 02:16:58 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:11:08.321 02:16:58 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:11:08.321
00:11:08.321 real 0m1.516s
00:11:08.321 user 0m1.239s
00:11:08.321 sys 0m0.272s
00:11:08.321 ************************************
00:11:08.321 END TEST accel_compare
00:11:08.321 ************************************
00:11:08.321 02:16:58 accel -- common/autotest_common.sh@1142 -- # return 0
00:11:08.321 02:16:58 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:11:08.321 02:16:58 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:11:08.321 ************************************
00:11:08.321 START TEST accel_xor
00:11:08.321 ************************************
00:11:08.321 02:16:58 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:11:08.321 02:16:58 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:11:08.321 02:16:58 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config (empty accel_json_cfg; device/module checks elided)
00:11:08.321 02:16:58 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
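The xor workload XORs source buffers byte-wise into a destination; with no -x flag the trace shows the default of two sources (val=2). This is the primitive behind RAID-5-style parity generation. An illustrative sketch of the semantics:

```python
srcs = [b"\xaa" * 4096, b"\x55" * 4096]        # two 4 KiB sources
dst = bytearray(x ^ y for x, y in zip(*srcs))  # byte-wise XOR into dst
assert dst == b"\xff" * 4096                   # 0xAA ^ 0x55 == 0xFF everywhere
```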
00:11:08.321 [2024-07-11 02:16:58.480699] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
00:11:08.321 [2024-07-11 02:16:58.480854] [ DPDK EAL parameters: (flags identical to the accel_fill run above) --file-prefix=spdk_pid1866452 ]
00:11:08.581 [2024-07-11 02:16:58.689457] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:08.581 [2024-07-11 02:16:58.745607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:08.581 02:16:58 accel.accel_xor -- accel/accel.sh@19-23 -- # (option-parsing xtrace elided; values set: val=0x1, accel_opc=xor, val=2 [xor sources], val='4096 bytes', accel_module=software, val=32, val=32, val=1, val='1 seconds', val=Yes)
00:11:09.960 02:16:59 accel.accel_xor -- accel/accel.sh@19-21 -- # (post-run option-drain xtrace elided)
-- # case "$var" in 00:11:09.960 02:16:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:09.960 02:16:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:09.960 02:16:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:09.960 02:16:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:09.960 02:16:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:09.960 02:16:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:09.960 02:16:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:09.960 02:16:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:11:09.960 02:16:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:09.960 00:11:09.960 real 0m1.532s 00:11:09.960 user 0m1.255s 00:11:09.960 sys 0m0.272s 00:11:09.960 02:16:59 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:09.960 02:16:59 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:11:09.960 ************************************ 00:11:09.960 END TEST accel_xor 00:11:09.960 ************************************ 00:11:09.960 02:17:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:09.960 02:17:00 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:11:09.960 02:17:00 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:11:09.960 02:17:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.960 02:17:00 accel -- common/autotest_common.sh@10 -- # set +x 00:11:09.960 ************************************ 00:11:09.960 START TEST accel_xor 00:11:09.960 ************************************ 00:11:09.960 02:17:00 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:11:09.960 [2024-07-11 02:17:00.082859] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:11:09.960 [2024-07-11 02:17:00.082924] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1866652 ] 00:11:09.960 [2024-07-11 02:17:00.216410] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.960 [2024-07-11 02:17:00.267870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:11:09.960 02:17:00 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:09.960 02:17:00 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:11:09.961 02:17:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:09.961 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:09.961 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:09.961 02:17:00 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:11:09.961 02:17:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:09.961 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:09.961 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:09.961 02:17:00 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:11:09.961 02:17:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:09.961 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:09.961 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:09.961 02:17:00 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:11:09.961 02:17:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:09.961 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:09.961 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:09.961 02:17:00 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:11:09.961 02:17:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:09.961 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:09.961 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:09.961 02:17:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:09.961 02:17:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:09.961 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:09.961 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:09.961 02:17:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:09.961 02:17:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:09.961 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:09.961 02:17:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:11.340 02:17:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:11.340 02:17:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:11.340 02:17:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:11.340 02:17:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:11.340 02:17:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:11.340 02:17:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:11.340 02:17:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:11.340 02:17:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:11.340 02:17:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:11.340 02:17:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:11.340 02:17:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:11.340 02:17:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:11.340 02:17:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:11.340 02:17:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:11.340 02:17:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:11.340 02:17:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:11.340 02:17:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:11.340 02:17:01 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:11:11.340 02:17:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:11.340 02:17:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:11.340 02:17:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:11.340 02:17:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:11.340 02:17:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:11.340 02:17:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:11.340 02:17:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:11.340 02:17:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:11:11.340 02:17:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:11.340 00:11:11.340 real 0m1.446s 00:11:11.340 user 0m1.250s 00:11:11.340 sys 0m0.192s 00:11:11.340 02:17:01 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:11.340 02:17:01 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:11:11.340 ************************************ 00:11:11.340 END TEST accel_xor 00:11:11.340 ************************************ 00:11:11.340 02:17:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:11.340 02:17:01 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:11:11.340 02:17:01 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:11:11.340 02:17:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:11.340 02:17:01 accel -- common/autotest_common.sh@10 -- # set +x 00:11:11.340 ************************************ 00:11:11.340 START TEST accel_dif_verify 00:11:11.340 ************************************ 00:11:11.340 02:17:01 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:11:11.340 02:17:01 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:11:11.340 02:17:01 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:11:11.340 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:11.340 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:11.340 02:17:01 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:11:11.340 02:17:01 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:11:11.340 02:17:01 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:11:11.340 02:17:01 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:11.340 02:17:01 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:11.340 02:17:01 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:11.340 02:17:01 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:11.340 02:17:01 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:11.340 02:17:01 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:11:11.340 02:17:01 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:11:11.340 [2024-07-11 02:17:01.614668] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:11:11.340 [2024-07-11 02:17:01.614729] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1866845 ] 00:11:11.340 [2024-07-11 02:17:01.750077] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.600 [2024-07-11 02:17:01.802909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:11.600 02:17:01 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # read -r var val 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:11.600 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:11.601 02:17:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:11:11.601 02:17:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:11.601 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:11.601 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:11.601 02:17:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:11:11.601 02:17:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:11.601 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:11.601 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:11.601 02:17:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:11:11.601 02:17:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:11.601 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:11.601 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:11.601 02:17:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:11:11.601 02:17:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:11.601 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:11.601 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:11.601 02:17:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:11.601 02:17:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:11.601 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:11.601 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:11.601 02:17:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:11.601 02:17:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:11.601 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:11.601 02:17:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:12.978 02:17:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:12.978 02:17:03 accel.accel_dif_verify -- 
accel/accel.sh@21 -- # case "$var" in 00:11:12.978 02:17:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:12.978 02:17:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:12.978 02:17:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:12.978 02:17:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:12.978 02:17:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:12.978 02:17:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:12.978 02:17:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:12.978 02:17:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:12.978 02:17:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:12.978 02:17:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:12.978 02:17:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:12.978 02:17:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:12.978 02:17:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:12.978 02:17:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:12.978 02:17:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:12.978 02:17:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:12.978 02:17:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:12.978 02:17:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:12.978 02:17:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:12.978 02:17:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:12.978 02:17:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:12.978 02:17:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:12.978 02:17:03 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:12.978 02:17:03 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:11:12.978 02:17:03 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:12.978 00:11:12.978 real 0m1.447s 00:11:12.978 user 0m1.246s 00:11:12.978 sys 0m0.205s 00:11:12.978 02:17:03 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:12.978 02:17:03 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:11:12.978 ************************************ 00:11:12.978 END TEST accel_dif_verify 00:11:12.978 ************************************ 00:11:12.978 02:17:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:12.978 02:17:03 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:11:12.978 02:17:03 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:11:12.978 02:17:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:12.978 02:17:03 accel -- common/autotest_common.sh@10 -- # set +x 00:11:12.978 ************************************ 00:11:12.978 START TEST accel_dif_generate 00:11:12.978 ************************************ 00:11:12.978 02:17:03 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:11:12.978 02:17:03 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:11:12.978 02:17:03 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:11:12.978 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:12.978 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 
-- # read -r var val 00:11:12.978 02:17:03 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:11:12.978 02:17:03 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:12.978 02:17:03 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:11:12.978 02:17:03 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:12.978 02:17:03 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:12.978 02:17:03 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:12.978 02:17:03 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:12.978 02:17:03 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:12.978 02:17:03 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:11:12.978 02:17:03 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:11:12.978 [2024-07-11 02:17:03.147941] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:11:12.978 [2024-07-11 02:17:03.148004] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1867063 ] 00:11:12.978 [2024-07-11 02:17:03.282581] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.978 [2024-07-11 02:17:03.333883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@23 -- 
# accel_opc=dif_generate 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:13.237 02:17:03 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:13.237 02:17:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:14.171 02:17:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:14.171 02:17:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:14.171 02:17:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:14.171 02:17:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:14.171 02:17:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:14.171 02:17:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:14.171 02:17:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:14.171 02:17:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:14.171 02:17:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:14.171 02:17:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:14.171 02:17:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:14.171 02:17:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:14.171 02:17:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:14.171 02:17:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:14.171 02:17:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:14.171 02:17:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:14.171 02:17:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:14.171 02:17:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:14.171 02:17:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:14.171 02:17:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:14.171 02:17:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:14.171 02:17:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:14.171 02:17:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:14.171 02:17:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:14.171 02:17:04 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:14.171 02:17:04 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:11:14.171 02:17:04 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:14.171 00:11:14.171 real 0m1.448s 00:11:14.171 user 0m1.247s 00:11:14.171 sys 0m0.206s 00:11:14.171 
02:17:04 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:14.171 02:17:04 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:11:14.171 ************************************ 00:11:14.171 END TEST accel_dif_generate 00:11:14.171 ************************************ 00:11:14.430 02:17:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:14.430 02:17:04 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:11:14.430 02:17:04 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:11:14.430 02:17:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:14.430 02:17:04 accel -- common/autotest_common.sh@10 -- # set +x 00:11:14.430 ************************************ 00:11:14.430 START TEST accel_dif_generate_copy 00:11:14.430 ************************************ 00:11:14.430 02:17:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:11:14.430 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:11:14.430 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:11:14.430 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:14.430 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:14.430 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:11:14.430 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:14.430 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:11:14.430 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:14.430 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:14.430 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:14.430 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:14.430 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:14.430 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:11:14.430 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:11:14.430 [2024-07-11 02:17:04.682468] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
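Nearly all of the repeated case "$var" in / IFS=: / read -r var val entries in these sections come from one small loop in accel.sh: it reads accel_perf's banner line by line, splits each key: value pair on ':', and records the negotiated module and opcode, which the [[ -n software ]] / [[ -n dif_generate ]] assertions at the end of each test then check. A minimal sketch of that idiom; the key patterns and loop body are illustrative, not copied from accel.sh:
# Sketch: capture module and opcode from "key: value" banner lines
# (hypothetical key patterns; the real script's cases differ).
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
accel_module="" accel_opc=""
while IFS=: read -r var val; do
  case "$var" in
    *Module*)   accel_module=${val//[[:space:]]/} ;;  # e.g. software
    *workload*) accel_opc=${val//[[:space:]]/} ;;     # e.g. dif_generate_copy
  esac
done < <("$SPDK/build/examples/accel_perf" -t 1 -w dif_generate_copy)
[[ -n $accel_module && -n $accel_opc ]] || exit 1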
00:11:14.430 [2024-07-11 02:17:04.682536] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1867336 ] 00:11:14.430 [2024-07-11 02:17:04.815838] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.688 [2024-07-11 02:17:04.866961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.688 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:14.688 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:14.688 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:14.688 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:14.688 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:14.688 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:14.688 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:14.688 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:14.688 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:11:14.688 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:14.688 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:14.688 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:14.688 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:14.688 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:14.688 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:14.688 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:14.688 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val= 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:14.689 02:17:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:16.066 02:17:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:16.066 02:17:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:16.066 02:17:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:16.066 02:17:06 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:16.066 02:17:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:16.066 02:17:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:16.066 02:17:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:16.066 02:17:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:16.066 02:17:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:16.066 02:17:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:16.066 02:17:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:16.066 02:17:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:16.066 02:17:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:16.066 02:17:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:16.066 02:17:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:16.066 02:17:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:16.066 02:17:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:16.066 02:17:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:16.066 02:17:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:16.066 02:17:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:16.066 02:17:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:16.066 02:17:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:16.066 02:17:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:16.066 02:17:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:16.066 02:17:06 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:16.066 02:17:06 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:11:16.066 02:17:06 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:16.066 00:11:16.066 real 0m1.448s 00:11:16.066 user 0m1.248s 00:11:16.066 sys 0m0.203s 00:11:16.066 02:17:06 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:16.066 02:17:06 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:11:16.066 ************************************ 00:11:16.066 END TEST accel_dif_generate_copy 00:11:16.066 ************************************ 00:11:16.066 02:17:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:16.066 02:17:06 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:11:16.066 02:17:06 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:11:16.066 02:17:06 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:11:16.066 02:17:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:16.066 02:17:06 accel -- common/autotest_common.sh@10 -- # set +x 00:11:16.066 ************************************ 00:11:16.066 START TEST accel_comp 00:11:16.066 ************************************ 00:11:16.066 02:17:06 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:11:16.066 02:17:06 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:11:16.066 
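Unlike the fixed-pattern workloads above, the compress case starting here needs real input data: the logged command line passes -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib, so accel_perf reads that file as its compression corpus for the one-second run. The decompress test that follows reuses the same file and adds -y, the same verify flag the xor cases used. A sketch of both, flags verbatim from the logged command lines, empty accel config omitted as before:
# Sketch: compress/decompress runs; -l names the input corpus file,
# -y asks accel_perf to verify the result of the decompress pass.
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -t 1 -w compress   -l "$SPDK/test/accel/bib"
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y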
02:17:06 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:11:16.066 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:16.066 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:16.066 02:17:06 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:11:16.066 02:17:06 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:11:16.066 02:17:06 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:11:16.066 02:17:06 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:16.066 02:17:06 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:16.067 02:17:06 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:16.067 02:17:06 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:16.067 02:17:06 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:16.067 02:17:06 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:11:16.067 02:17:06 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:11:16.067 [2024-07-11 02:17:06.223203] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:11:16.067 [2024-07-11 02:17:06.223265] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1867595 ] 00:11:16.067 [2024-07-11 02:17:06.357771] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.067 [2024-07-11 02:17:06.409185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.067 02:17:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:16.067 02:17:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:16.067 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:16.067 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:16.067 02:17:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:16.067 02:17:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:16.067 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:16.067 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:16.067 02:17:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:16.067 02:17:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:16.067 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:16.067 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:16.067 02:17:06 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:11:16.067 02:17:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:16.067 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:16.067 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:16.067 02:17:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:16.067 02:17:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:16.067 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:16.067 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:16.067 02:17:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:16.326 
02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:16.326 02:17:06 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:16.326 02:17:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:17.264 02:17:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:17.264 02:17:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:17.264 02:17:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:17.264 02:17:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:17.264 02:17:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:17.264 02:17:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:17.264 02:17:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:17.264 02:17:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:17.264 02:17:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:17.264 02:17:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:17.264 02:17:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:17.264 02:17:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:17.264 02:17:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:17.264 02:17:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:17.264 02:17:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:17.264 02:17:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:17.265 02:17:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:17.265 02:17:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:17.265 02:17:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:17.265 02:17:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:17.265 02:17:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:17.265 02:17:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:17.265 02:17:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:17.265 02:17:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:17.265 02:17:07 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:17.265 02:17:07 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:11:17.265 02:17:07 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:17.265 00:11:17.265 real 0m1.455s 00:11:17.265 user 0m1.260s 00:11:17.265 sys 0m0.198s 00:11:17.265 02:17:07 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:17.265 02:17:07 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:11:17.265 ************************************ 00:11:17.265 END TEST accel_comp 00:11:17.265 ************************************ 00:11:17.265 02:17:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:17.265 02:17:07 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:11:17.265 02:17:07 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:11:17.265 02:17:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:17.265 02:17:07 accel -- common/autotest_common.sh@10 -- # set +x 00:11:17.525 ************************************ 00:11:17.525 START TEST accel_decomp 
00:11:17.525 ************************************ 00:11:17.525 02:17:07 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:11:17.525 02:17:07 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:11:17.525 02:17:07 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:11:17.525 02:17:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:17.525 02:17:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:17.525 02:17:07 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:11:17.525 02:17:07 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:11:17.525 02:17:07 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:11:17.525 02:17:07 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:17.525 02:17:07 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:17.525 02:17:07 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:17.525 02:17:07 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:17.525 02:17:07 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:17.525 02:17:07 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:11:17.525 02:17:07 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:11:17.525 [2024-07-11 02:17:07.759213] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:11:17.525 [2024-07-11 02:17:07.759279] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1867788 ] 00:11:17.525 [2024-07-11 02:17:07.895056] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.784 [2024-07-11 02:17:07.948986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 
00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:17.784 
02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:17.784 02:17:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:19.163 02:17:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:19.163 02:17:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:19.163 02:17:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:19.163 02:17:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:19.163 02:17:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:19.163 02:17:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:19.163 02:17:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:19.163 02:17:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:19.163 02:17:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:19.163 02:17:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:19.163 02:17:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:19.163 02:17:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:19.163 02:17:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:19.163 02:17:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:19.163 02:17:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:19.163 02:17:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:19.163 02:17:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:19.163 02:17:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:19.163 02:17:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:19.163 02:17:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:19.163 02:17:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:19.163 02:17:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:19.163 02:17:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:19.163 02:17:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:19.163 02:17:09 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:19.163 02:17:09 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:19.163 02:17:09 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:19.163 00:11:19.163 real 0m1.450s 00:11:19.163 user 0m1.254s 00:11:19.163 sys 0m0.203s 00:11:19.163 02:17:09 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:19.163 02:17:09 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:11:19.163 ************************************ 00:11:19.163 END TEST accel_decomp 00:11:19.163 ************************************ 
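Note on the harness: the START TEST / END TEST banners and the real/user/sys triplet that close each test above come from the run_test wrapper in autotest_common.sh. A minimal sketch of that wrapper, reconstructed only from what the trace shows (the banners, a time'd command, the return 0 bookkeeping), not copied from the SPDK source:

    run_test_sketch() {
        # hedged reconstruction of run_test; names and details are assumptions
        local test_name=$1; shift
        echo "************ START TEST $test_name ************"
        time "$@"                 # emits the real/user/sys lines seen in the log
        local rc=$?
        echo "************ END TEST $test_name ************"
        return $rc
    }
    run_test_sketch demo_sleep sleep 1   # standalone demo invocation

In the log the wrapped command is accel_test -t 1 -w decompress -l .../bib -y, and the xtrace_disable calls around it appear to suppress tracing of the wrapper's own bookkeeping.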
00:11:19.163 02:17:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:19.163 02:17:09 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:11:19.163 02:17:09 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:11:19.163 02:17:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:19.163 02:17:09 accel -- common/autotest_common.sh@10 -- # set +x 00:11:19.163 ************************************ 00:11:19.163 START TEST accel_decomp_full 00:11:19.163 ************************************ 00:11:19.163 02:17:09 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:11:19.163 [2024-07-11 02:17:09.294959] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:11:19.163 [2024-07-11 02:17:09.295021] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1867989 ] 00:11:19.163 [2024-07-11 02:17:09.427757] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.163 [2024-07-11 02:17:09.475109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:19.163 02:17:09 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:19.163 02:17:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:11:19.164 02:17:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:19.164 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:19.164 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:19.164 02:17:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:11:19.164 02:17:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:19.164 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:19.164 02:17:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:20.553 02:17:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:11:20.553 02:17:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:20.553 02:17:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:20.553 02:17:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 
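The _full variants of these tests differ from the base ones only by -o 0: the accel_decomp trace earlier read back val='4096 bytes', while this accel_decomp_full trace reads val='111250 bytes', so the size option evidently switches the run from fixed 4 KiB blocks to (apparently) the entire bib input. A hedged side-by-side of the two invocations, with paths as in this workspace; the -c /dev/fd/62 JSON config that the harness feeds on fd 62 is omitted here:

    PERF=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf
    BIB=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
    "$PERF" -t 1 -w decompress -l "$BIB" -y         # default size; trace shows '4096 bytes'
    "$PERF" -t 1 -w decompress -l "$BIB" -y -o 0    # full-buffer run; trace shows '111250 bytes'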
00:11:20.553 02:17:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:11:20.553 02:17:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:20.553 02:17:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:20.553 02:17:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:20.553 02:17:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:11:20.553 02:17:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:20.553 02:17:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:20.553 02:17:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:20.553 02:17:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:11:20.553 02:17:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:20.553 02:17:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:20.553 02:17:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:20.553 02:17:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:11:20.553 02:17:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:20.553 02:17:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:20.553 02:17:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:20.553 02:17:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:11:20.553 02:17:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:20.553 02:17:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:20.553 02:17:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:20.553 02:17:10 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:20.553 02:17:10 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:20.553 02:17:10 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:20.553 00:11:20.553 real 0m1.445s 00:11:20.553 user 0m1.272s 00:11:20.553 sys 0m0.179s 00:11:20.553 02:17:10 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:20.553 02:17:10 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:11:20.553 ************************************ 00:11:20.553 END TEST accel_decomp_full 00:11:20.553 ************************************ 00:11:20.553 02:17:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:20.553 02:17:10 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:11:20.553 02:17:10 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:11:20.553 02:17:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:20.553 02:17:10 accel -- common/autotest_common.sh@10 -- # set +x 00:11:20.553 ************************************ 00:11:20.553 START TEST accel_decomp_mcore 00:11:20.553 ************************************ 00:11:20.553 02:17:10 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:11:20.553 02:17:10 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:11:20.553 02:17:10 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:11:20.553 02:17:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:20.553 02:17:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
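The accel_perf command that follows adds -m 0xf, and the EAL banner below reports four available cores, with reactors then starting on cores 0 through 3. A quick self-contained check of which cores the 0xf mask selects:

    mask=0xf
    for core in 0 1 2 3 4 5; do
        if (( (mask >> core) & 1 )); then
            echo "core $core in mask"   # prints cores 0-3 only, matching the four reactor notices
        fi
    done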
00:11:20.553 02:17:10 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:11:20.553 02:17:10 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:11:20.553 02:17:10 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:11:20.553 02:17:10 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:20.553 02:17:10 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:20.553 02:17:10 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:20.553 02:17:10 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:20.553 02:17:10 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:20.553 02:17:10 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:11:20.553 02:17:10 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:11:20.553 [2024-07-11 02:17:10.819491] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:11:20.553 [2024-07-11 02:17:10.819551] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1868185 ] 00:11:20.553 [2024-07-11 02:17:10.950975] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:20.813 [2024-07-11 02:17:11.002319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.813 [2024-07-11 02:17:11.002420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.813 [2024-07-11 02:17:11.002520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:20.813 [2024-07-11 02:17:11.002523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:20.813 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:20.814 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:20.814 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:20.814 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:20.814 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:20.814 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:20.814 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:20.814 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:20.814 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:20.814 02:17:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:22.191 00:11:22.191 real 0m1.442s 00:11:22.191 user 0m4.687s 00:11:22.191 sys 0m0.181s 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:22.191 02:17:12 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:11:22.191 ************************************ 00:11:22.191 END TEST accel_decomp_mcore 00:11:22.191 ************************************ 00:11:22.191 02:17:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:22.191 02:17:12 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:22.191 02:17:12 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:11:22.191 02:17:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.191 02:17:12 accel -- common/autotest_common.sh@10 -- # set +x 00:11:22.191 ************************************ 00:11:22.191 START TEST accel_decomp_full_mcore 00:11:22.191 ************************************ 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:22.191 
02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:11:22.191 [2024-07-11 02:17:12.345499] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:11:22.191 [2024-07-11 02:17:12.345560] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1868386 ] 00:11:22.191 [2024-07-11 02:17:12.479318] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:22.191 [2024-07-11 02:17:12.531833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.191 [2024-07-11 02:17:12.532034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.191 [2024-07-11 02:17:12.531934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:22.191 [2024-07-11 02:17:12.532033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:22.191 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:11:22.192 02:17:12 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:22.192 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:22.451 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:11:22.451 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:22.451 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:22.451 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:22.451 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:11:22.451 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:11:22.451 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:22.451 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:22.451 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:22.451 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:22.451 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:22.451 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:22.451 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:22.452 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:22.452 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:22.452 02:17:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:23.388 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:23.388 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:23.388 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:23.388 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:23.388 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:23.388 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:23.388 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:23.388 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:23.388 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:23.388 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:23.388 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:23.388 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:23.389 00:11:23.389 real 0m1.466s 00:11:23.389 user 0m4.741s 00:11:23.389 sys 0m0.212s 00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:23.389 02:17:13 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:11:23.389 ************************************ 00:11:23.389 END TEST accel_decomp_full_mcore 00:11:23.389 ************************************ 00:11:23.648 02:17:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:23.648 02:17:13 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:11:23.648 02:17:13 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:11:23.648 02:17:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:23.648 02:17:13 accel -- common/autotest_common.sh@10 -- # set +x 00:11:23.648 ************************************ 00:11:23.648 START TEST accel_decomp_mthread 00:11:23.648 ************************************ 00:11:23.648 02:17:13 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:11:23.648 02:17:13 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:11:23.648 02:17:13 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:11:23.648 02:17:13 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:11:23.648 02:17:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:23.648 02:17:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:23.648 02:17:13 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:11:23.648 02:17:13 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:11:23.648 02:17:13 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:23.648 02:17:13 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:23.648 02:17:13 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:23.648 02:17:13 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:23.648 02:17:13 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:23.648 02:17:13 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:11:23.648 
02:17:13 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:11:23.648 [2024-07-11 02:17:13.885862] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:11:23.648 [2024-07-11 02:17:13.885921] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1868585 ] 00:11:23.648 [2024-07-11 02:17:14.021206] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.908 [2024-07-11 02:17:14.074585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var 
val 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:23.908 02:17:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:23.908 02:17:14 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:25.288 00:11:25.288 real 0m1.438s 00:11:25.288 user 0m1.233s 00:11:25.288 sys 0m0.205s 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:25.288 02:17:15 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:11:25.288 ************************************ 00:11:25.288 END TEST accel_decomp_mthread 00:11:25.288 ************************************ 00:11:25.288 02:17:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:25.288 02:17:15 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:11:25.288 02:17:15 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 
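The '[' 13 -le 1 ']' check just above is run_test's argument-count guard: the harness passes it the test name plus the whole accel_test command line (13 words in this case) and only bails out when handed a single argument. A minimal wrapper in the same spirit, as an illustrative sketch rather than the actual run_test from autotest_common.sh:

    run_test() {
        [ "$#" -le 1 ] && { echo "usage: run_test <name> <cmd> [args...]" >&2; return 1; }
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                # run the traced test body; bash prints real/user/sys
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

The START TEST / END TEST banners and the real/user/sys triplets scattered through this log are exactly what a wrapper of this shape emits around each test body.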
00:11:25.288 02:17:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:25.288 02:17:15 accel -- common/autotest_common.sh@10 -- # set +x 00:11:25.288 ************************************ 00:11:25.288 START TEST accel_decomp_full_mthread 00:11:25.288 ************************************ 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:11:25.288 [2024-07-11 02:17:15.419081] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
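Worth pausing on how accel_perf receives its configuration here: build_accel_config collects JSON fragments in the accel_json_cfg array (still empty for this software run, hence the [[ -n '' ]] test above), joins them with IFS=',', filters the result through jq -r, and hands it to the binary as -c /dev/fd/62, so no temporary config file is ever written. A self-contained sketch of that pattern, with illustrative names and paths, and the wrapper document shaped like SPDK's standard subsystems JSON config:

    #!/usr/bin/env bash
    accel_json_cfg=()   # the cdev tests below append a compressdev fragment here

    build_config() {
        local IFS=,     # comma-join the collected method fragments
        echo "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}" | jq -r .
    }

    # Process substitution exposes the generated JSON as a /dev/fd/NN path,
    # matching the "-c /dev/fd/62" visible in the command line above
    # (the log invokes accel_perf by its full build/examples path):
    accel_perf -c <(build_config) -t 1 -w decompress -l ./bib -y -o 0 -T 2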
00:11:25.288 [2024-07-11 02:17:15.419144] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1868792 ] 00:11:25.288 [2024-07-11 02:17:15.552803] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.288 [2024-07-11 02:17:15.604225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:25.288 
02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:25.288 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:25.289 02:17:15 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:25.289 02:17:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:26.669 00:11:26.669 real 0m1.481s 00:11:26.669 user 0m1.271s 00:11:26.669 sys 0m0.214s 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:26.669 02:17:16 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:11:26.669 ************************************ 00:11:26.669 END TEST accel_decomp_full_mthread 00:11:26.669 ************************************ 
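The two software mthread passes land close together: real 1.438s for the 4096-byte case versus 1.481s here, where the traced val='111250 bytes' shows the whole bib payload being submitted per operation instead of 4096-byte blocks. The wall of case / IFS=: / read -r var val entries that dominates each test body is xtrace output of a key:value parsing loop, one round per parsed field (opcode, buffer size, module, queue depth 32, thread count 2, run time '1 seconds', and so on). Its shape, sketched for orientation rather than quoted from accel.sh, with describe_config standing in for whatever produces the key:value stream:

    while IFS=: read -r var val; do
        case "$var" in
            opc) accel_opc=$val ;;           # e.g. decompress
            module) accel_module=$val ;;     # e.g. software or dpdk_compressdev
            *) : ;;                          # queue depth, thread count, duration, ...
        esac
    done < <(describe_config)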
00:11:26.669 02:17:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:26.669 02:17:16 accel -- accel/accel.sh@124 -- # [[ y == y ]] 00:11:26.669 02:17:16 accel -- accel/accel.sh@125 -- # COMPRESSDEV=1 00:11:26.669 02:17:16 accel -- accel/accel.sh@126 -- # get_expected_opcs 00:11:26.669 02:17:16 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:26.669 02:17:16 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1869059 00:11:26.669 02:17:16 accel -- accel/accel.sh@63 -- # waitforlisten 1869059 00:11:26.669 02:17:16 accel -- common/autotest_common.sh@829 -- # '[' -z 1869059 ']' 00:11:26.669 02:17:16 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.669 02:17:16 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:26.669 02:17:16 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:11:26.669 02:17:16 accel -- accel/accel.sh@61 -- # build_accel_config 00:11:26.669 02:17:16 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.669 02:17:16 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:26.669 02:17:16 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:26.669 02:17:16 accel -- common/autotest_common.sh@10 -- # set +x 00:11:26.669 02:17:16 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:26.669 02:17:16 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:26.669 02:17:16 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:26.669 02:17:16 accel -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:11:26.669 02:17:16 accel -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:11:26.669 02:17:16 accel -- accel/accel.sh@40 -- # local IFS=, 00:11:26.669 02:17:16 accel -- accel/accel.sh@41 -- # jq -r . 00:11:26.669 [2024-07-11 02:17:17.035305] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
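From accel.sh@124 onward the harness changes gears: COMPRESSDEV=1 is set, and get_expected_opcs boots a dedicated spdk_tgt whose generated config now carries the compressdev_scan_accel_module fragment appended above; the 'initialized QAT PMD' NOTICE a few entries later confirms the DPDK compressdev driver came up before any opcode checks run. Written out as a standalone file, the accumulated fragment corresponds to something like the following (the layout follows SPDK's standard subsystems config; the file name is illustrative):

    cat > accel_compressdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "accel",
          "config": [
            { "method": "compressdev_scan_accel_module", "params": { "pmd": 0 } }
          ]
        }
      ]
    }
    EOF
    spdk_tgt -c accel_compressdev.json   # same effect as the -c /dev/fd/63 handoff above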
00:11:26.669 [2024-07-11 02:17:17.035451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1869059 ] 00:11:26.929 [2024-07-11 02:17:17.248412] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.929 [2024-07-11 02:17:17.299549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.867 [2024-07-11 02:17:17.955285] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:11:27.867 02:17:18 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:27.868 02:17:18 accel -- common/autotest_common.sh@862 -- # return 0 00:11:27.868 02:17:18 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:11:27.868 02:17:18 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:11:27.868 02:17:18 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:11:27.868 02:17:18 accel -- accel/accel.sh@68 -- # [[ -n 1 ]] 00:11:27.868 02:17:18 accel -- accel/accel.sh@68 -- # check_save_config compressdev_scan_accel_module 00:11:27.868 02:17:18 accel -- accel/accel.sh@56 -- # rpc_cmd save_config 00:11:27.868 02:17:18 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.868 02:17:18 accel -- accel/accel.sh@56 -- # jq -r '.subsystems[] | select(.subsystem=="accel").config[]' 00:11:27.868 02:17:18 accel -- common/autotest_common.sh@10 -- # set +x 00:11:27.868 02:17:18 accel -- accel/accel.sh@56 -- # grep compressdev_scan_accel_module 00:11:27.868 02:17:18 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.868 "method": "compressdev_scan_accel_module", 00:11:27.868 02:17:18 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:11:27.868 02:17:18 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:11:27.868 02:17:18 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:11:27.868 02:17:18 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.868 02:17:18 accel -- common/autotest_common.sh@10 -- # set +x 00:11:28.128 02:17:18 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.128 02:17:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:28.128 02:17:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:28.128 02:17:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:28.128 02:17:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:28.128 02:17:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:28.128 02:17:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:28.128 02:17:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:28.128 02:17:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:28.128 02:17:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:28.128 02:17:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:28.128 02:17:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:28.128 02:17:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:28.128 02:17:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:28.128 02:17:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev 00:11:28.128 02:17:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:28.128 02:17:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev 00:11:28.128 02:17:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:28.128 02:17:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:28.128 02:17:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:28.128 02:17:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:28.128 02:17:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # read -r opc module 
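The IFS== / read -r opc module loop above walks the assignment table one opcode at a time, recording the module each opcode is expected to use. With the compressdev module loaded, compress and decompress are the two entries that come back as dpdk_compressdev while every other opcode stays on software, which is what the per-test '[[ -n dpdk_compressdev ]]' checks further down rely on. The underlying query, cleaned up from the xtrace above (rpc_cmd is the harness wrapper around SPDK's RPC client):

    # Enumerate opcode-to-module assignments the way the harness does:
    rpc_cmd accel_get_opc_assignments | jq -r 'to_entries | map("\(.key)=\(.value)") | .[]'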
00:11:28.128 02:17:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:28.128 02:17:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:28.128 02:17:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:28.128 02:17:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:28.128 02:17:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:28.128 02:17:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:28.128 02:17:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:28.128 02:17:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:28.128 02:17:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:28.129 02:17:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:28.129 02:17:18 accel -- accel/accel.sh@75 -- # killprocess 1869059 00:11:28.129 02:17:18 accel -- common/autotest_common.sh@948 -- # '[' -z 1869059 ']' 00:11:28.129 02:17:18 accel -- common/autotest_common.sh@952 -- # kill -0 1869059 00:11:28.129 02:17:18 accel -- common/autotest_common.sh@953 -- # uname 00:11:28.129 02:17:18 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:28.129 02:17:18 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1869059 00:11:28.129 02:17:18 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:28.129 02:17:18 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:28.129 02:17:18 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1869059' 00:11:28.129 killing process with pid 1869059 00:11:28.129 02:17:18 accel -- common/autotest_common.sh@967 -- # kill 1869059 00:11:28.129 02:17:18 accel -- common/autotest_common.sh@972 -- # wait 1869059 00:11:28.389 02:17:18 accel -- accel/accel.sh@76 -- # trap - ERR 00:11:28.389 02:17:18 accel -- accel/accel.sh@127 -- # run_test accel_cdev_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:11:28.389 02:17:18 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:11:28.389 02:17:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:28.389 02:17:18 accel -- common/autotest_common.sh@10 -- # set +x 00:11:28.389 ************************************ 00:11:28.389 START TEST accel_cdev_comp 00:11:28.389 ************************************ 00:11:28.389 02:17:18 accel.accel_cdev_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:11:28.389 02:17:18 accel.accel_cdev_comp -- accel/accel.sh@16 -- # local accel_opc 00:11:28.389 02:17:18 accel.accel_cdev_comp -- accel/accel.sh@17 -- # local accel_module 00:11:28.389 02:17:18 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:11:28.389 02:17:18 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:11:28.389 02:17:18 accel.accel_cdev_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:11:28.389 02:17:18 accel.accel_cdev_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:11:28.389 02:17:18 accel.accel_cdev_comp -- accel/accel.sh@12 -- # build_accel_config 00:11:28.389 02:17:18 accel.accel_cdev_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:28.389 02:17:18 accel.accel_cdev_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:28.389 02:17:18 accel.accel_cdev_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:28.389 02:17:18 accel.accel_cdev_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:28.389 02:17:18 accel.accel_cdev_comp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:11:28.389 02:17:18 accel.accel_cdev_comp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:11:28.389 02:17:18 accel.accel_cdev_comp -- accel/accel.sh@40 -- # local IFS=, 00:11:28.389 02:17:18 accel.accel_cdev_comp -- accel/accel.sh@41 -- # jq -r . 00:11:28.648 [2024-07-11 02:17:18.837034] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:11:28.648 [2024-07-11 02:17:18.837093] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1869331 ] 00:11:28.648 [2024-07-11 02:17:18.971249] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.648 [2024-07-11 02:17:19.022498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.587 [2024-07-11 02:17:19.685088] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:11:29.587 [2024-07-11 02:17:19.687707] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x233b6e0 PMD being used: compress_qat 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:11:29.587 [2024-07-11 02:17:19.691690] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x233d5c0 PMD being used: compress_qat 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=0x1 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:11:29.587 
02:17:19 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=compress 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=1 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:11:29.587 02:17:19 accel.accel_cdev_comp 
-- accel/accel.sh@21 -- # case "$var" in 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=No 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:11:29.587 02:17:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:11:30.527 02:17:20 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:11:30.527 02:17:20 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:30.527 02:17:20 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:11:30.527 02:17:20 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:11:30.527 02:17:20 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:11:30.527 02:17:20 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:30.527 02:17:20 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:11:30.527 02:17:20 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:11:30.527 02:17:20 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:11:30.527 02:17:20 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:30.527 02:17:20 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:11:30.527 02:17:20 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:11:30.527 02:17:20 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:11:30.527 02:17:20 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:30.527 02:17:20 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:11:30.527 02:17:20 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:11:30.527 02:17:20 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:11:30.527 02:17:20 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:30.527 02:17:20 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:11:30.527 02:17:20 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:11:30.527 02:17:20 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:11:30.527 02:17:20 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:30.527 02:17:20 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:11:30.527 02:17:20 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:11:30.527 02:17:20 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:11:30.527 02:17:20 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:11:30.527 02:17:20 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:11:30.527 00:11:30.527 real 0m2.035s 00:11:30.527 user 0m1.468s 00:11:30.527 sys 0m0.564s 00:11:30.527 02:17:20 accel.accel_cdev_comp 
-- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:30.527 02:17:20 accel.accel_cdev_comp -- common/autotest_common.sh@10 -- # set +x 00:11:30.527 ************************************ 00:11:30.527 END TEST accel_cdev_comp 00:11:30.527 ************************************ 00:11:30.527 02:17:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:30.527 02:17:20 accel -- accel/accel.sh@128 -- # run_test accel_cdev_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:11:30.527 02:17:20 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:11:30.527 02:17:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:30.527 02:17:20 accel -- common/autotest_common.sh@10 -- # set +x 00:11:30.527 ************************************ 00:11:30.527 START TEST accel_cdev_decomp 00:11:30.527 ************************************ 00:11:30.527 02:17:20 accel.accel_cdev_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:11:30.527 02:17:20 accel.accel_cdev_decomp -- accel/accel.sh@16 -- # local accel_opc 00:11:30.527 02:17:20 accel.accel_cdev_decomp -- accel/accel.sh@17 -- # local accel_module 00:11:30.527 02:17:20 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:30.527 02:17:20 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:30.527 02:17:20 accel.accel_cdev_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:11:30.527 02:17:20 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # build_accel_config 00:11:30.527 02:17:20 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:11:30.527 02:17:20 accel.accel_cdev_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:30.527 02:17:20 accel.accel_cdev_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:30.527 02:17:20 accel.accel_cdev_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:30.527 02:17:20 accel.accel_cdev_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:30.527 02:17:20 accel.accel_cdev_decomp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:11:30.527 02:17:20 accel.accel_cdev_decomp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:11:30.527 02:17:20 accel.accel_cdev_decomp -- accel/accel.sh@40 -- # local IFS=, 00:11:30.527 02:17:20 accel.accel_cdev_decomp -- accel/accel.sh@41 -- # jq -r . 00:11:30.787 [2024-07-11 02:17:20.957670] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
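A useful point of comparison: the QAT-backed compress test just finished in real 2.035s with sys 0.564s, against roughly 1.4 to 1.5s real and about 0.21s sys for the software mthread passes, and most of that gap is per-process bring-up rather than the one-second measurement window itself; in the accel_cdev_comp block above, the reactor-start and 'initialized QAT PMD' timestamps sit about 0.66 seconds apart (02:17:19.022 to 02:17:19.685). Each cdev run also logs two 'Channel 0x... PMD being used: compress_qat' notices, one per channel it opens. Those breadcrumbs are easy to pull back out of a saved log (file name illustrative; the patterns are verbatim from the NOTICE lines):

    grep -E 'initialized QAT PMD|PMD being used' console.log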
00:11:30.787 [2024-07-11 02:17:20.957732] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1869584 ] 00:11:30.787 [2024-07-11 02:17:21.092152] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.787 [2024-07-11 02:17:21.143535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.725 [2024-07-11 02:17:21.802595] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:11:31.725 [2024-07-11 02:17:21.805171] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x8fb6e0 PMD being used: compress_qat 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:11:31.725 [2024-07-11 02:17:21.809252] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x8fd5c0 PMD being used: compress_qat 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=0x1 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=decompress 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.725 02:17:21 
accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=1 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=Yes 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 
00:11:31.725 02:17:21 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:11:32.666 00:11:32.666 real 0m2.047s 00:11:32.666 user 0m1.473s 00:11:32.666 sys 0m0.576s 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:32.666 02:17:22 accel.accel_cdev_decomp -- common/autotest_common.sh@10 -- # set +x 00:11:32.666 ************************************ 00:11:32.666 END TEST accel_cdev_decomp 00:11:32.666 ************************************ 00:11:32.666 02:17:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:32.666 02:17:23 accel -- accel/accel.sh@129 -- # run_test accel_cdev_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:11:32.666 02:17:23 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:11:32.666 02:17:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.666 02:17:23 accel -- common/autotest_common.sh@10 -- # set +x 00:11:32.666 ************************************ 00:11:32.666 START TEST accel_cdev_decomp_full 00:11:32.666 ************************************ 00:11:32.666 02:17:23 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1123 -- # 
accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:11:32.666 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:11:32.666 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:11:32.666 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:32.666 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:32.666 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:11:32.666 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:11:32.666 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:11:32.666 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:32.666 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:32.666 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:32.666 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:32.666 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:11:32.667 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:11:32.667 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:11:32.667 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:11:32.667 [2024-07-11 02:17:23.087470] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
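In the EAL parameter blocks repeated throughout this log, only --file-prefix varies, tracking each new PID. That flag namespaces DPDK's hugepage and runtime files per process, so back-to-back accel_perf instances cannot trample each other, and --huge-unlink removes the hugepage files again once they are mapped. A quick way to observe the effect on a live system (assumes the default hugetlbfs mount and DPDK's <prefix>map_<N> file naming):

    ls /dev/hugepages/spdk_pid*map_* 2>/dev/null \
        || echo "no hugepage files visible: already removed via --huge-unlink"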
00:11:32.667 [2024-07-11 02:17:23.087532] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1869897 ] 00:11:32.958 [2024-07-11 02:17:23.220266] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.958 [2024-07-11 02:17:23.271304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.555 [2024-07-11 02:17:23.924879] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:11:33.555 [2024-07-11 02:17:23.927460] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x23f76e0 PMD being used: compress_qat 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:11:33.555 [2024-07-11 02:17:23.930524] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x23f7780 PMD being used: compress_qat 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@20 
-- # val='111250 bytes' 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=1 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 
00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.555 02:17:23 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:34.935 02:17:25 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:11:34.935 02:17:25 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:34.935 02:17:25 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:34.935 02:17:25 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:34.935 02:17:25 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:11:34.935 02:17:25 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:34.935 02:17:25 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:34.935 02:17:25 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:34.935 02:17:25 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:11:34.935 02:17:25 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:34.935 02:17:25 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:34.935 02:17:25 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:34.935 02:17:25 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:11:34.935 02:17:25 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:34.935 02:17:25 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:34.935 02:17:25 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:34.935 02:17:25 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:11:34.935 02:17:25 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:34.935 02:17:25 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:34.935 02:17:25 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:34.935 02:17:25 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:11:34.935 02:17:25 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:11:34.935 02:17:25 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:34.936 02:17:25 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:34.936 02:17:25 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:11:34.936 02:17:25 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:34.936 02:17:25 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:11:34.936 00:11:34.936 real 0m2.022s 00:11:34.936 user 0m1.459s 00:11:34.936 sys 0m0.569s 00:11:34.936 02:17:25 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:34.936 02:17:25 accel.accel_cdev_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:11:34.936 ************************************ 00:11:34.936 END TEST accel_cdev_decomp_full 00:11:34.936 ************************************ 00:11:34.936 02:17:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:34.936 02:17:25 accel -- accel/accel.sh@130 -- # run_test accel_cdev_decomp_mcore accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:11:34.936 02:17:25 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:11:34.936 02:17:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:34.936 02:17:25 accel -- common/autotest_common.sh@10 -- # set +x 00:11:34.936 ************************************ 00:11:34.936 START TEST accel_cdev_decomp_mcore 00:11:34.936 ************************************ 00:11:34.936 02:17:25 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:11:34.936 02:17:25 accel.accel_cdev_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:11:34.936 02:17:25 accel.accel_cdev_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:11:34.936 02:17:25 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:34.936 02:17:25 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:34.936 02:17:25 accel.accel_cdev_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:11:34.936 02:17:25 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:11:34.936 02:17:25 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:11:34.936 02:17:25 accel.accel_cdev_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:34.936 02:17:25 accel.accel_cdev_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:34.936 02:17:25 accel.accel_cdev_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:34.936 02:17:25 accel.accel_cdev_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:34.936 02:17:25 accel.accel_cdev_decomp_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:11:34.936 02:17:25 accel.accel_cdev_decomp_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:11:34.936 02:17:25 accel.accel_cdev_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:11:34.936 02:17:25 accel.accel_cdev_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:11:34.936 [2024-07-11 02:17:25.200922] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
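The mcore variant above is launched with -m 0xf, which SPDK forwards to DPDK EAL as -c 0xf; the four "Reactor started on core N" notices that follow are the direct result. A small illustrative helper (not part of the suite) that expands such a mask the same way:

    mask=0xf   # core mask from the -m / EAL -c arguments above
    for ((core = 0; core < 64; core++)); do
        if (( (mask >> core) & 1 )); then
            echo "reactor expected on core $core"   # prints cores 0..3 for 0xf
        fi
    done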
00:11:34.936 [2024-07-11 02:17:25.200986] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1870251 ] 00:11:34.936 [2024-07-11 02:17:25.336173] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.196 [2024-07-11 02:17:25.392025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.196 [2024-07-11 02:17:25.392124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.196 [2024-07-11 02:17:25.392224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.196 [2024-07-11 02:17:25.392225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.765 [2024-07-11 02:17:26.050657] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:11:35.765 [2024-07-11 02:17:26.053265] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x168bd30 PMD being used: compress_qat 00:11:35.765 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:35.765 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.765 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.765 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.765 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.766 [2024-07-11 02:17:26.058956] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fa30419b8b0 PMD being used: compress_qat 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:35.766 [2024-07-11 02:17:26.060356] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x168e130 PMD being used: compress_qat 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.766 02:17:26 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:35.766 [2024-07-11 02:17:26.064474] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fa2fc19b8b0 PMD being used: compress_qat 00:11:35.766 [2024-07-11 02:17:26.064692] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fa2f419b8b0 PMD being used: compress_qat 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:11:35.766 02:17:26 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.766 02:17:26 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:37.147 02:17:27 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:11:37.147 00:11:37.147 real 0m2.075s 00:11:37.147 user 0m6.758s 00:11:37.147 sys 0m0.586s 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:37.147 02:17:27 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:11:37.147 ************************************ 00:11:37.147 END TEST accel_cdev_decomp_mcore 00:11:37.147 ************************************ 00:11:37.147 02:17:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:37.147 02:17:27 accel -- accel/accel.sh@131 -- # run_test accel_cdev_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:37.147 02:17:27 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:11:37.147 02:17:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:37.147 02:17:27 accel -- common/autotest_common.sh@10 -- # set +x 00:11:37.147 ************************************ 00:11:37.147 START TEST accel_cdev_decomp_full_mcore 00:11:37.147 ************************************ 00:11:37.147 02:17:27 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:37.147 02:17:27 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:11:37.147 02:17:27 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:11:37.147 02:17:27 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:37.147 02:17:27 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:37.147 02:17:27 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:37.147 02:17:27 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:37.147 02:17:27 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # 
build_accel_config 00:11:37.147 02:17:27 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:37.147 02:17:27 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:37.147 02:17:27 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:37.147 02:17:27 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:37.147 02:17:27 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:11:37.147 02:17:27 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:11:37.147 02:17:27 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:11:37.147 02:17:27 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:11:37.147 [2024-07-11 02:17:27.362893] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:11:37.147 [2024-07-11 02:17:27.362958] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1870482 ] 00:11:37.147 [2024-07-11 02:17:27.498646] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:37.147 [2024-07-11 02:17:27.554610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.148 [2024-07-11 02:17:27.554712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.148 [2024-07-11 02:17:27.554812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.148 [2024-07-11 02:17:27.554814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.085 [2024-07-11 02:17:28.219315] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:11:38.085 [2024-07-11 02:17:28.221908] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xdaad30 PMD being used: compress_qat 00:11:38.085 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:38.085 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:38.085 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:38.085 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:38.085 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:38.085 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:38.085 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:38.085 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:38.085 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:38.086 [2024-07-11 02:17:28.226667] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f935419b8b0 PMD being used: compress_qat 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:38.086 02:17:28 
accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:38.086 [2024-07-11 02:17:28.228169] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xdaaae0 PMD being used: compress_qat 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:38.086 [2024-07-11 02:17:28.232444] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f934c19b8b0 PMD being used: compress_qat 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:38.086 [2024-07-11 02:17:28.232680] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f934419b8b0 PMD being used: compress_qat 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:38.086 02:17:28 
accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:38.086 02:17:28 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
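The "START TEST ... / real / user / sys / END TEST ..." framing around each case in this log comes from the harness's run_test wrapper in common/autotest_common.sh, which banners the test name and times the command with bash's time keyword. A simplified reconstruction of that shape, assuming only what the log shows (the real wrapper also toggles xtrace and propagates return codes, visible above as the xtrace_disable and return 0 records):

    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                     # emits the real/user/sys lines seen in this log
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test_sketch demo_case sleep 1   # hypothetical usage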
00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:11:39.025 00:11:39.025 real 0m2.066s 00:11:39.025 user 0m6.719s 00:11:39.025 sys 0m0.596s 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:39.025 02:17:29 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:11:39.025 ************************************ 00:11:39.025 END TEST accel_cdev_decomp_full_mcore 00:11:39.025 ************************************ 00:11:39.025 02:17:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:39.025 02:17:29 accel -- accel/accel.sh@132 -- # run_test accel_cdev_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:11:39.025 02:17:29 accel -- 
common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:11:39.025 02:17:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.025 02:17:29 accel -- common/autotest_common.sh@10 -- # set +x 00:11:39.285 ************************************ 00:11:39.285 START TEST accel_cdev_decomp_mthread 00:11:39.285 ************************************ 00:11:39.285 02:17:29 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:11:39.285 02:17:29 accel.accel_cdev_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:11:39.285 02:17:29 accel.accel_cdev_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:11:39.285 02:17:29 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:39.285 02:17:29 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:39.285 02:17:29 accel.accel_cdev_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:11:39.285 02:17:29 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:11:39.285 02:17:29 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:11:39.285 02:17:29 accel.accel_cdev_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:39.285 02:17:29 accel.accel_cdev_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:39.285 02:17:29 accel.accel_cdev_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:39.285 02:17:29 accel.accel_cdev_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:39.285 02:17:29 accel.accel_cdev_decomp_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:11:39.285 02:17:29 accel.accel_cdev_decomp_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:11:39.285 02:17:29 accel.accel_cdev_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:11:39.285 02:17:29 accel.accel_cdev_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:11:39.285 [2024-07-11 02:17:29.512134] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
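The mthread case adds -T 2 (accel_perf's thread-count option, the harness's multi-thread variant), which is why the trace that follows shows extra per-channel "PMD being used: compress_qat" notices from accel_dpdk_compressdev.c:_set_pmd, one per I/O channel. The invocation below is copied from the accel.sh@12 record above and can be replayed directly once a config is supplied on the -c fd, as in the first sketch:

    /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w decompress -y -T 2 \
        -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib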
00:11:39.285 [2024-07-11 02:17:29.512198] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1870848 ] 00:11:39.285 [2024-07-11 02:17:29.645988] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.285 [2024-07-11 02:17:29.697112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.225 [2024-07-11 02:17:30.361534] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:11:40.225 [2024-07-11 02:17:30.364128] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1dc06e0 PMD being used: compress_qat 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:40.225 [2024-07-11 02:17:30.368940] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1dc29c0 PMD being used: compress_qat 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.225 [2024-07-11 02:17:30.371553] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1dc4930 PMD being used: compress_qat 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:40.225 02:17:30 
accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.225 
02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.225 02:17:30 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:41.163 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:41.163 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:41.163 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:41.163 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:41.163 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:41.163 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:41.163 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:41.164 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:41.164 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:41.164 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:41.164 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:41.164 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:41.164 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:41.164 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:41.164 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:41.164 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:41.164 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:41.164 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:41.164 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:41.164 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:41.164 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:41.164 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:41.164 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:41.164 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:41.164 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:41.164 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:41.164 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:41.164 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:41.164 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:11:41.164 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 
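The long runs of IFS=: / read -r var val / case "$var" in records throughout these cases are a single property-matching loop in accel.sh unrolled by xtrace: each expected attribute of the run (opcode, module, queue depth, duration, and so on) arrives as a colon-separated var:val pair and is checked in the case statement. The same shape in isolation, with an invented spec purely for illustration:

    printf '%s\n' 'opc:decompress' 'module:dpdk_compressdev' 'qd:32' |
    while IFS=: read -r var val; do
        case "$var" in
            opc)    echo "expect opcode = $val" ;;
            module) echo "expect module = $val" ;;
            *)      echo "unchecked field: $var=$val" ;;
        esac
    done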
00:11:41.164 02:17:31 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:11:41.164 00:11:41.164 real 0m2.056s 00:11:41.164 user 0m1.483s 00:11:41.164 sys 0m0.576s 00:11:41.164 02:17:31 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:41.164 02:17:31 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:11:41.164 ************************************ 00:11:41.164 END TEST accel_cdev_decomp_mthread 00:11:41.164 ************************************ 00:11:41.164 02:17:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:41.164 02:17:31 accel -- accel/accel.sh@133 -- # run_test accel_cdev_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:11:41.164 02:17:31 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:11:41.164 02:17:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:41.164 02:17:31 accel -- common/autotest_common.sh@10 -- # set +x 00:11:41.424 ************************************ 00:11:41.424 START TEST accel_cdev_decomp_full_mthread 00:11:41.424 ************************************ 00:11:41.424 02:17:31 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:11:41.424 02:17:31 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:11:41.424 02:17:31 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:11:41.424 02:17:31 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:41.424 02:17:31 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:41.424 02:17:31 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:11:41.424 02:17:31 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:11:41.424 02:17:31 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:11:41.424 02:17:31 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:41.424 02:17:31 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:41.424 02:17:31 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:41.424 02:17:31 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:41.424 02:17:31 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:11:41.424 02:17:31 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:11:41.424 02:17:31 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:11:41.424 02:17:31 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:11:41.424 [2024-07-11 02:17:31.663225] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
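This last case, accel_cdev_decomp_full_mthread, combines the two variations seen above: -o 0, the _full cases, where (judging from the traces) the harness expects the whole decompressed payload at once, the '111250 bytes' value, instead of the default '4096 bytes' chunk, and -T 2, the _mthread cases. A quick illustrative check of the input file against that expectation (GNU stat assumed; the 111250 figure is taken from the log, not recomputed here):

    bib=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
    printf 'compressed input: %s bytes; expected decompressed output: 111250 bytes\n' \
        "$(stat -c %s "$bib")"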
00:11:41.424 [2024-07-11 02:17:31.663354] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1871051 ] 00:11:41.683 [2024-07-11 02:17:31.872877] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.683 [2024-07-11 02:17:31.928705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.249 [2024-07-11 02:17:32.584514] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:11:42.249 [2024-07-11 02:17:32.587107] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x15d06e0 PMD being used: compress_qat 00:11:42.249 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:42.249 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:42.249 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:42.249 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:42.249 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:42.249 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:42.249 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:42.249 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:42.249 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:42.249 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:42.249 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:42.249 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:42.249 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:11:42.249 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:42.249 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:42.249 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:42.249 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:42.249 [2024-07-11 02:17:32.590933] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x15d0490 PMD being used: compress_qat 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:42.250 02:17:32 
accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:11:42.250 [2024-07-11 02:17:32.593851] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x15d5500 PMD being used: compress_qat 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:42.250 02:17:32 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:11:43.626 00:11:43.626 real 0m2.123s 00:11:43.626 user 0m1.469s 00:11:43.626 sys 0m0.647s 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:43.626 02:17:33 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:11:43.626 ************************************ 00:11:43.626 END TEST accel_cdev_decomp_full_mthread 00:11:43.626 ************************************ 00:11:43.626 02:17:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:43.626 02:17:33 accel -- accel/accel.sh@134 -- # unset COMPRESSDEV 00:11:43.626 02:17:33 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:43.626 02:17:33 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:43.626 02:17:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:43.626 02:17:33 accel -- common/autotest_common.sh@10 -- # set +x 00:11:43.626 02:17:33 accel -- accel/accel.sh@137 -- # build_accel_config 00:11:43.626 02:17:33 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:43.626 02:17:33 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:43.626 02:17:33 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:43.626 02:17:33 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:43.626 02:17:33 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:43.626 02:17:33 accel -- accel/accel.sh@40 -- # local IFS=, 00:11:43.626 02:17:33 accel -- accel/accel.sh@41 -- # jq -r . 00:11:43.626 ************************************ 00:11:43.626 START TEST accel_dif_functional_tests 00:11:43.626 ************************************ 00:11:43.626 02:17:33 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:43.626 [2024-07-11 02:17:33.880769] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:11:43.626 [2024-07-11 02:17:33.880830] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1871420 ] 00:11:43.626 [2024-07-11 02:17:34.016709] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:43.885 [2024-07-11 02:17:34.070580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.885 [2024-07-11 02:17:34.070680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.885 [2024-07-11 02:17:34.070683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.885 00:11:43.885 00:11:43.885 CUnit - A unit testing framework for C - Version 2.1-3 00:11:43.885 http://cunit.sourceforge.net/ 00:11:43.885 00:11:43.885 00:11:43.885 Suite: accel_dif 00:11:43.885 Test: verify: DIF generated, GUARD check ...passed 00:11:43.885 Test: verify: DIF generated, APPTAG check ...passed 00:11:43.885 Test: verify: DIF generated, REFTAG check ...passed 00:11:43.885 Test: verify: DIF not generated, GUARD check ...[2024-07-11 02:17:34.175164] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:43.885 passed 00:11:43.885 Test: verify: DIF not generated, APPTAG check ...[2024-07-11 02:17:34.175236] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:43.885 passed 00:11:43.885 Test: verify: DIF not generated, REFTAG check ...[2024-07-11 02:17:34.175273] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:43.885 passed 00:11:43.885 Test: verify: APPTAG correct, APPTAG check ...passed 00:11:43.885 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-11 02:17:34.175348] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:11:43.885 passed 00:11:43.885 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:11:43.885 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:11:43.885 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:11:43.885 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-11 02:17:34.175506] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:11:43.885 passed 00:11:43.885 Test: verify copy: DIF generated, GUARD check ...passed 00:11:43.885 Test: verify copy: DIF generated, APPTAG check ...passed 00:11:43.885 Test: verify copy: DIF generated, REFTAG check ...passed 00:11:43.885 Test: verify copy: DIF not generated, GUARD check ...[2024-07-11 02:17:34.175681] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:43.885 passed 00:11:43.885 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-11 02:17:34.175719] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:43.885 passed 00:11:43.885 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-11 02:17:34.175755] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:43.885 passed 00:11:43.885 Test: generate copy: DIF generated, GUARD check ...passed 00:11:43.885 Test: generate copy: DIF generated, APPTAG check ...passed 00:11:43.885 Test: generate copy: DIF generated, REFTAG check ...passed 00:11:43.885 Test: generate copy: DIF generated, no GUARD check flag set ...passed 
00:11:43.885 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:11:43.885 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:11:43.885 Test: generate copy: iovecs-len validate ...[2024-07-11 02:17:34.176038] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:11:43.885 passed 00:11:43.885 Test: generate copy: buffer alignment validate ...passed 00:11:43.885 00:11:43.885 Run Summary: Type Total Ran Passed Failed Inactive 00:11:43.885 suites 1 1 n/a 0 0 00:11:43.885 tests 26 26 26 0 0 00:11:43.885 asserts 115 115 115 0 n/a 00:11:43.885 00:11:43.885 Elapsed time = 0.003 seconds 00:11:44.143 00:11:44.143 real 0m0.551s 00:11:44.143 user 0m0.766s 00:11:44.143 sys 0m0.252s 00:11:44.143 02:17:34 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:44.143 02:17:34 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:11:44.143 ************************************ 00:11:44.143 END TEST accel_dif_functional_tests 00:11:44.143 ************************************ 00:11:44.143 02:17:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:44.143 00:11:44.143 real 0m51.444s 00:11:44.143 user 0m58.469s 00:11:44.143 sys 0m12.504s 00:11:44.143 02:17:34 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:44.143 02:17:34 accel -- common/autotest_common.sh@10 -- # set +x 00:11:44.143 ************************************ 00:11:44.143 END TEST accel 00:11:44.143 ************************************ 00:11:44.143 02:17:34 -- common/autotest_common.sh@1142 -- # return 0 00:11:44.143 02:17:34 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:11:44.143 02:17:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:44.143 02:17:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:44.143 02:17:34 -- common/autotest_common.sh@10 -- # set +x 00:11:44.143 ************************************ 00:11:44.143 START TEST accel_rpc 00:11:44.143 ************************************ 00:11:44.143 02:17:34 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:11:44.401 * Looking for test storage... 00:11:44.401 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:11:44.401 02:17:34 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:44.401 02:17:34 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1871588 00:11:44.401 02:17:34 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1871588 00:11:44.401 02:17:34 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:11:44.401 02:17:34 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1871588 ']' 00:11:44.401 02:17:34 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.401 02:17:34 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:44.401 02:17:34 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:44.401 02:17:34 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:44.401 02:17:34 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.401 [2024-07-11 02:17:34.684355] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:11:44.401 [2024-07-11 02:17:34.684427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1871588 ] 00:11:44.401 [2024-07-11 02:17:34.821993] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.659 [2024-07-11 02:17:34.873863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.228 02:17:35 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:45.228 02:17:35 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:45.228 02:17:35 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:11:45.228 02:17:35 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:11:45.228 02:17:35 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:11:45.228 02:17:35 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:11:45.228 02:17:35 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:11:45.228 02:17:35 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:45.228 02:17:35 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:45.228 02:17:35 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.228 ************************************ 00:11:45.228 START TEST accel_assign_opcode 00:11:45.228 ************************************ 00:11:45.228 02:17:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:11:45.228 02:17:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:11:45.228 02:17:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.228 02:17:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:45.228 [2024-07-11 02:17:35.652270] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:11:45.488 02:17:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.488 02:17:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:11:45.488 02:17:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.488 02:17:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:45.488 [2024-07-11 02:17:35.660287] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:11:45.488 02:17:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.488 02:17:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:11:45.488 02:17:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.488 02:17:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:45.488 02:17:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.488 02:17:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd 
accel_get_opc_assignments 00:11:45.489 02:17:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.489 02:17:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:11:45.489 02:17:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:45.489 02:17:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:11:45.489 02:17:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.748 software 00:11:45.748 00:11:45.748 real 0m0.293s 00:11:45.748 user 0m0.049s 00:11:45.748 sys 0m0.014s 00:11:45.748 02:17:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:45.748 02:17:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:45.748 ************************************ 00:11:45.748 END TEST accel_assign_opcode 00:11:45.748 ************************************ 00:11:45.748 02:17:35 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:11:45.748 02:17:35 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1871588 00:11:45.748 02:17:35 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1871588 ']' 00:11:45.748 02:17:35 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1871588 00:11:45.748 02:17:35 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:11:45.748 02:17:35 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:45.748 02:17:35 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1871588 00:11:45.748 02:17:36 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:45.748 02:17:36 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:45.748 02:17:36 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1871588' 00:11:45.748 killing process with pid 1871588 00:11:45.748 02:17:36 accel_rpc -- common/autotest_common.sh@967 -- # kill 1871588 00:11:45.748 02:17:36 accel_rpc -- common/autotest_common.sh@972 -- # wait 1871588 00:11:46.006 00:11:46.006 real 0m1.881s 00:11:46.006 user 0m1.928s 00:11:46.006 sys 0m0.614s 00:11:46.006 02:17:36 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:46.006 02:17:36 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.006 ************************************ 00:11:46.006 END TEST accel_rpc 00:11:46.006 ************************************ 00:11:46.006 02:17:36 -- common/autotest_common.sh@1142 -- # return 0 00:11:46.006 02:17:36 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:11:46.006 02:17:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:46.006 02:17:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:46.006 02:17:36 -- common/autotest_common.sh@10 -- # set +x 00:11:46.266 ************************************ 00:11:46.266 START TEST app_cmdline 00:11:46.266 ************************************ 00:11:46.266 02:17:36 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:11:46.266 * Looking for test storage... 
00:11:46.266 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:11:46.266 02:17:36 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:46.266 02:17:36 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1871903 00:11:46.266 02:17:36 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1871903 00:11:46.266 02:17:36 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:46.266 02:17:36 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1871903 ']' 00:11:46.266 02:17:36 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.266 02:17:36 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:46.266 02:17:36 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.266 02:17:36 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:46.266 02:17:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:46.266 [2024-07-11 02:17:36.649820] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:11:46.266 [2024-07-11 02:17:36.649891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1871903 ] 00:11:46.526 [2024-07-11 02:17:36.787520] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.526 [2024-07-11 02:17:36.839421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.785 02:17:37 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:46.785 02:17:37 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:11:46.785 02:17:37 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:11:47.045 { 00:11:47.045 "version": "SPDK v24.09-pre git sha1 9937c0160", 00:11:47.045 "fields": { 00:11:47.045 "major": 24, 00:11:47.045 "minor": 9, 00:11:47.045 "patch": 0, 00:11:47.045 "suffix": "-pre", 00:11:47.045 "commit": "9937c0160" 00:11:47.045 } 00:11:47.045 } 00:11:47.045 02:17:37 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:11:47.045 02:17:37 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:47.045 02:17:37 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:47.045 02:17:37 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:47.045 02:17:37 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:47.045 02:17:37 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:47.045 02:17:37 app_cmdline -- app/cmdline.sh@26 -- # sort 00:11:47.045 02:17:37 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.045 02:17:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:47.045 02:17:37 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.045 02:17:37 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:47.045 02:17:37 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ 
\s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:47.045 02:17:37 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:47.045 02:17:37 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:11:47.045 02:17:37 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:47.045 02:17:37 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:47.045 02:17:37 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.045 02:17:37 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:47.045 02:17:37 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.045 02:17:37 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:47.045 02:17:37 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.045 02:17:37 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:47.045 02:17:37 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:11:47.045 02:17:37 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:47.614 request: 00:11:47.614 { 00:11:47.614 "method": "env_dpdk_get_mem_stats", 00:11:47.614 "req_id": 1 00:11:47.614 } 00:11:47.614 Got JSON-RPC error response 00:11:47.614 response: 00:11:47.614 { 00:11:47.614 "code": -32601, 00:11:47.614 "message": "Method not found" 00:11:47.614 } 00:11:47.614 02:17:37 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:11:47.614 02:17:37 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:47.614 02:17:37 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:47.614 02:17:37 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:47.614 02:17:37 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1871903 00:11:47.614 02:17:37 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1871903 ']' 00:11:47.614 02:17:37 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1871903 00:11:47.614 02:17:37 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:11:47.614 02:17:37 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:47.614 02:17:37 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1871903 00:11:47.614 02:17:37 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:47.614 02:17:37 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:47.614 02:17:37 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1871903' 00:11:47.614 killing process with pid 1871903 00:11:47.614 02:17:37 app_cmdline -- common/autotest_common.sh@967 -- # kill 1871903 00:11:47.614 02:17:37 app_cmdline -- common/autotest_common.sh@972 -- # wait 1871903 00:11:47.873 00:11:47.873 real 0m1.803s 00:11:47.873 user 0m2.280s 00:11:47.873 sys 0m0.646s 00:11:47.873 02:17:38 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:47.873 02:17:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 
00:11:47.873 ************************************ 00:11:47.873 END TEST app_cmdline 00:11:47.873 ************************************ 00:11:48.133 02:17:38 -- common/autotest_common.sh@1142 -- # return 0 00:11:48.133 02:17:38 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:11:48.133 02:17:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:48.133 02:17:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.133 02:17:38 -- common/autotest_common.sh@10 -- # set +x 00:11:48.133 ************************************ 00:11:48.133 START TEST version 00:11:48.133 ************************************ 00:11:48.133 02:17:38 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:11:48.133 * Looking for test storage... 00:11:48.134 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:11:48.134 02:17:38 version -- app/version.sh@17 -- # get_header_version major 00:11:48.134 02:17:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:11:48.134 02:17:38 version -- app/version.sh@14 -- # cut -f2 00:11:48.134 02:17:38 version -- app/version.sh@14 -- # tr -d '"' 00:11:48.134 02:17:38 version -- app/version.sh@17 -- # major=24 00:11:48.134 02:17:38 version -- app/version.sh@18 -- # get_header_version minor 00:11:48.134 02:17:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:11:48.134 02:17:38 version -- app/version.sh@14 -- # cut -f2 00:11:48.134 02:17:38 version -- app/version.sh@14 -- # tr -d '"' 00:11:48.134 02:17:38 version -- app/version.sh@18 -- # minor=9 00:11:48.134 02:17:38 version -- app/version.sh@19 -- # get_header_version patch 00:11:48.134 02:17:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:11:48.134 02:17:38 version -- app/version.sh@14 -- # cut -f2 00:11:48.134 02:17:38 version -- app/version.sh@14 -- # tr -d '"' 00:11:48.134 02:17:38 version -- app/version.sh@19 -- # patch=0 00:11:48.134 02:17:38 version -- app/version.sh@20 -- # get_header_version suffix 00:11:48.134 02:17:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:11:48.134 02:17:38 version -- app/version.sh@14 -- # cut -f2 00:11:48.134 02:17:38 version -- app/version.sh@14 -- # tr -d '"' 00:11:48.134 02:17:38 version -- app/version.sh@20 -- # suffix=-pre 00:11:48.134 02:17:38 version -- app/version.sh@22 -- # version=24.9 00:11:48.134 02:17:38 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:48.134 02:17:38 version -- app/version.sh@28 -- # version=24.9rc0 00:11:48.134 02:17:38 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:11:48.134 02:17:38 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:48.394 02:17:38 version -- app/version.sh@30 -- # py_version=24.9rc0 00:11:48.394 
02:17:38 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:11:48.394 00:11:48.394 real 0m0.199s 00:11:48.394 user 0m0.097s 00:11:48.394 sys 0m0.153s 00:11:48.394 02:17:38 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:48.394 02:17:38 version -- common/autotest_common.sh@10 -- # set +x 00:11:48.394 ************************************ 00:11:48.394 END TEST version 00:11:48.394 ************************************ 00:11:48.394 02:17:38 -- common/autotest_common.sh@1142 -- # return 0 00:11:48.394 02:17:38 -- spdk/autotest.sh@188 -- # '[' 1 -eq 1 ']' 00:11:48.394 02:17:38 -- spdk/autotest.sh@189 -- # run_test blockdev_general /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:11:48.394 02:17:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:48.394 02:17:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.394 02:17:38 -- common/autotest_common.sh@10 -- # set +x 00:11:48.394 ************************************ 00:11:48.394 START TEST blockdev_general 00:11:48.394 ************************************ 00:11:48.394 02:17:38 blockdev_general -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:11:48.394 * Looking for test storage... 00:11:48.394 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:11:48.394 02:17:38 blockdev_general -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:11:48.394 02:17:38 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:11:48.394 02:17:38 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:48.394 02:17:38 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:11:48.394 02:17:38 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:11:48.394 02:17:38 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:11:48.394 02:17:38 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:11:48.394 02:17:38 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:11:48.394 02:17:38 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:11:48.394 02:17:38 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:11:48.394 02:17:38 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:11:48.394 02:17:38 blockdev_general -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:11:48.394 02:17:38 blockdev_general -- bdev/blockdev.sh@674 -- # uname -s 00:11:48.394 02:17:38 blockdev_general -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:11:48.394 02:17:38 blockdev_general -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:11:48.394 02:17:38 blockdev_general -- bdev/blockdev.sh@682 -- # test_type=bdev 00:11:48.394 02:17:38 blockdev_general -- bdev/blockdev.sh@683 -- # crypto_device= 00:11:48.394 02:17:38 blockdev_general -- bdev/blockdev.sh@684 -- # dek= 00:11:48.394 02:17:38 blockdev_general -- bdev/blockdev.sh@685 -- # env_ctx= 00:11:48.394 02:17:38 blockdev_general -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:11:48.394 02:17:38 blockdev_general -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:11:48.394 02:17:38 blockdev_general -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:11:48.394 02:17:38 blockdev_general -- bdev/blockdev.sh@691 -- # 
wait_for_rpc=--wait-for-rpc 00:11:48.394 02:17:38 blockdev_general -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:11:48.394 02:17:38 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=1872332 00:11:48.394 02:17:38 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:48.394 02:17:38 blockdev_general -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:11:48.394 02:17:38 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 1872332 00:11:48.394 02:17:38 blockdev_general -- common/autotest_common.sh@829 -- # '[' -z 1872332 ']' 00:11:48.394 02:17:38 blockdev_general -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.394 02:17:38 blockdev_general -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:48.394 02:17:38 blockdev_general -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.394 02:17:38 blockdev_general -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:48.394 02:17:38 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:48.654 [2024-07-11 02:17:38.855936] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:11:48.654 [2024-07-11 02:17:38.856011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1872332 ] 00:11:48.654 [2024-07-11 02:17:38.996753] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.654 [2024-07-11 02:17:39.044611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.592 02:17:39 blockdev_general -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:49.592 02:17:39 blockdev_general -- common/autotest_common.sh@862 -- # return 0 00:11:49.592 02:17:39 blockdev_general -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:11:49.592 02:17:39 blockdev_general -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:11:49.592 02:17:39 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:11:49.592 02:17:39 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.592 02:17:39 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:49.851 [2024-07-11 02:17:40.024270] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:49.851 [2024-07-11 02:17:40.024327] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:49.851 00:11:49.851 [2024-07-11 02:17:40.032246] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:49.851 [2024-07-11 02:17:40.032274] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:49.851 00:11:49.851 Malloc0 00:11:49.852 Malloc1 00:11:49.852 Malloc2 00:11:49.852 Malloc3 00:11:49.852 Malloc4 00:11:49.852 Malloc5 00:11:49.852 Malloc6 00:11:49.852 Malloc7 00:11:49.852 Malloc8 00:11:49.852 Malloc9 00:11:49.852 [2024-07-11 02:17:40.178963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:49.852 [2024-07-11 02:17:40.179012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.852 [2024-07-11 
02:17:40.179031] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1541030 00:11:49.852 [2024-07-11 02:17:40.179044] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.852 [2024-07-11 02:17:40.180386] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.852 [2024-07-11 02:17:40.180416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:49.852 TestPT 00:11:49.852 02:17:40 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.852 02:17:40 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile bs=2048 count=5000 00:11:49.852 5000+0 records in 00:11:49.852 5000+0 records out 00:11:49.852 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0254146 s, 403 MB/s 00:11:49.852 02:17:40 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile AIO0 2048 00:11:49.852 02:17:40 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.852 02:17:40 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:50.111 AIO0 00:11:50.111 02:17:40 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.111 02:17:40 blockdev_general -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:11:50.111 02:17:40 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.111 02:17:40 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:50.111 02:17:40 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.111 02:17:40 blockdev_general -- bdev/blockdev.sh@740 -- # cat 00:11:50.111 02:17:40 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:11:50.111 02:17:40 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.111 02:17:40 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:50.111 02:17:40 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.111 02:17:40 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:11:50.111 02:17:40 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.111 02:17:40 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:50.111 02:17:40 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.111 02:17:40 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:50.111 02:17:40 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.111 02:17:40 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:50.111 02:17:40 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.111 02:17:40 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:11:50.111 02:17:40 blockdev_general -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:11:50.111 02:17:40 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.111 02:17:40 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:11:50.111 02:17:40 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:50.373 02:17:40 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.373 02:17:40 blockdev_general -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:11:50.373 02:17:40 
blockdev_general -- bdev/blockdev.sh@749 -- # jq -r .name 00:11:50.374 02:17:40 blockdev_general -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "eac639cf-8745-47cf-a863-c95f53c74531"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "eac639cf-8745-47cf-a863-c95f53c74531",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "647f4f90-93a2-5a9c-a21d-0231dfe466d2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "647f4f90-93a2-5a9c-a21d-0231dfe466d2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "1e097a58-cc49-51ce-82fe-daf8eb12d48f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "1e097a58-cc49-51ce-82fe-daf8eb12d48f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b2380544-dd6c-54ba-947d-4603ae673ccc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b2380544-dd6c-54ba-947d-4603ae673ccc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": 
false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "edc7705c-ecfc-5cf3-b63d-89de3f142494"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "edc7705c-ecfc-5cf3-b63d-89de3f142494",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "4024770e-9d14-5637-9dfd-3b802cf7ea34"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4024770e-9d14-5637-9dfd-3b802cf7ea34",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "eed8e0e1-4003-54ee-a423-2b9b3d78daac"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "eed8e0e1-4003-54ee-a423-2b9b3d78daac",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' 
"e9d9fee2-f10b-5b6f-be99-2ee03778fdae"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e9d9fee2-f10b-5b6f-be99-2ee03778fdae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "19b100be-13c5-5345-a63a-2e2663d34245"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "19b100be-13c5-5345-a63a-2e2663d34245",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "c38b679e-384a-529e-a986-79d577e1d80f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c38b679e-384a-529e-a986-79d577e1d80f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "c60b8d1f-e483-5d99-b116-d937d08ab2fb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c60b8d1f-e483-5d99-b116-d937d08ab2fb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": 
false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "579f4ca9-0089-5597-8289-251ce3a8ec1d"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "579f4ca9-0089-5597-8289-251ce3a8ec1d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "1c80413c-d6bc-45e2-b61f-9f6e8875a6b8"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "1c80413c-d6bc-45e2-b61f-9f6e8875a6b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "1c80413c-d6bc-45e2-b61f-9f6e8875a6b8",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "8724a74c-99f2-4913-81e6-b34cecbb4f5d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "14c633e2-ee3c-4876-92d2-59b502bd06fc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "91b30ff2-b86e-40fa-b54e-2e5ed5b05ab9"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "91b30ff2-b86e-40fa-b54e-2e5ed5b05ab9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "91b30ff2-b86e-40fa-b54e-2e5ed5b05ab9",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "3d31f469-26b8-4927-afa3-4df6438a48fc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "4a73af42-0046-49a5-ae91-514b31baa29b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "f986ff18-4c81-490f-80c6-bda034326949"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f986ff18-4c81-490f-80c6-bda034326949",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "f986ff18-4c81-490f-80c6-bda034326949",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "89b5a896-edf0-4d50-9e57-b290524841cc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "91da19cb-52bf-4a78-92b6-a03b6b9dac0b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "46ace2a0-d84c-42b3-baac-f03661a7e125"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": 
"46ace2a0-d84c-42b3-baac-f03661a7e125",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:11:50.374 02:17:40 blockdev_general -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:11:50.375 02:17:40 blockdev_general -- bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:11:50.375 02:17:40 blockdev_general -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:11:50.375 02:17:40 blockdev_general -- bdev/blockdev.sh@754 -- # killprocess 1872332 00:11:50.375 02:17:40 blockdev_general -- common/autotest_common.sh@948 -- # '[' -z 1872332 ']' 00:11:50.375 02:17:40 blockdev_general -- common/autotest_common.sh@952 -- # kill -0 1872332 00:11:50.375 02:17:40 blockdev_general -- common/autotest_common.sh@953 -- # uname 00:11:50.375 02:17:40 blockdev_general -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:50.375 02:17:40 blockdev_general -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1872332 00:11:50.375 02:17:40 blockdev_general -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:50.375 02:17:40 blockdev_general -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:50.375 02:17:40 blockdev_general -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1872332' 00:11:50.375 killing process with pid 1872332 00:11:50.375 02:17:40 blockdev_general -- common/autotest_common.sh@967 -- # kill 1872332 00:11:50.375 02:17:40 blockdev_general -- common/autotest_common.sh@972 -- # wait 1872332 00:11:50.943 02:17:41 blockdev_general -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:50.943 02:17:41 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:50.943 02:17:41 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:50.943 02:17:41 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.943 02:17:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:50.943 ************************************ 00:11:50.943 START TEST bdev_hello_world 00:11:50.943 ************************************ 00:11:50.943 02:17:41 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:50.943 [2024-07-11 02:17:41.238247] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:11:50.943 [2024-07-11 02:17:41.238309] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1872583 ] 00:11:51.203 [2024-07-11 02:17:41.376239] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.203 [2024-07-11 02:17:41.427549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.203 [2024-07-11 02:17:41.585792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:51.203 [2024-07-11 02:17:41.585847] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:51.203 [2024-07-11 02:17:41.585861] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:51.203 [2024-07-11 02:17:41.593797] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:51.203 [2024-07-11 02:17:41.593829] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:51.203 [2024-07-11 02:17:41.601808] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:51.203 [2024-07-11 02:17:41.601832] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:51.461 [2024-07-11 02:17:41.678476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:51.461 [2024-07-11 02:17:41.678525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.461 [2024-07-11 02:17:41.678542] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd6f6d0 00:11:51.461 [2024-07-11 02:17:41.678555] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.461 [2024-07-11 02:17:41.679947] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.461 [2024-07-11 02:17:41.679975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:51.461 [2024-07-11 02:17:41.819215] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:51.462 [2024-07-11 02:17:41.819281] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:11:51.462 [2024-07-11 02:17:41.819335] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:51.462 [2024-07-11 02:17:41.819405] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:51.462 [2024-07-11 02:17:41.819481] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:51.462 [2024-07-11 02:17:41.819510] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:51.462 [2024-07-11 02:17:41.819573] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
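Note: the NOTICE lines above trace hello_bdev's full round trip: start the app, open Malloc0, get an I/O channel, write a string, read it back, compare, then stop. A hedged way to reproduce just the final check from the spdk checkout (assuming the example's NOTICE output is visible on stdout/stderr as it is here):
  ./build/examples/hello_bdev --json test/bdev/bdev.json -b Malloc0 2>&1 | grep 'Read string from bdev : Hello World!'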
00:11:51.462 00:11:51.462 [2024-07-11 02:17:41.819613] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:51.719 00:11:51.719 real 0m0.944s 00:11:51.719 user 0m0.587s 00:11:51.719 sys 0m0.323s 00:11:51.719 02:17:42 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:51.719 02:17:42 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:51.719 ************************************ 00:11:51.719 END TEST bdev_hello_world 00:11:51.719 ************************************ 00:11:51.978 02:17:42 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:11:51.978 02:17:42 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:11:51.978 02:17:42 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:51.978 02:17:42 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:51.978 02:17:42 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:51.978 ************************************ 00:11:51.978 START TEST bdev_bounds 00:11:51.978 ************************************ 00:11:51.978 02:17:42 blockdev_general.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:11:51.978 02:17:42 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=1872774 00:11:51.978 02:17:42 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:51.978 02:17:42 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:11:51.978 02:17:42 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 1872774' 00:11:51.978 Process bdevio pid: 1872774 00:11:51.978 02:17:42 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 1872774 00:11:51.978 02:17:42 blockdev_general.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 1872774 ']' 00:11:51.978 02:17:42 blockdev_general.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.978 02:17:42 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:51.978 02:17:42 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.978 02:17:42 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:51.978 02:17:42 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:51.978 [2024-07-11 02:17:42.270263] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
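Note: bdevio is launched here with -w, so it starts up, listens on the default /var/tmp/spdk.sock, and waits; the tests.py perform_tests call below is what actually drives the per-bdev suites. A sketch of the equivalent manual run from the spdk checkout (root assumed for hugepages, and the RPC socket must be up before perform_tests, which the harness ensures via waitforlisten):
  sudo test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  sudo test/bdev/bdevio/tests.py perform_tests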
00:11:51.978 [2024-07-11 02:17:42.270327] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1872774 ] 00:11:52.237 [2024-07-11 02:17:42.405570] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:52.237 [2024-07-11 02:17:42.460646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.237 [2024-07-11 02:17:42.460747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.237 [2024-07-11 02:17:42.460747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.237 [2024-07-11 02:17:42.608883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:52.237 [2024-07-11 02:17:42.608936] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:52.237 [2024-07-11 02:17:42.608950] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:52.237 [2024-07-11 02:17:42.616888] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:52.237 [2024-07-11 02:17:42.616915] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:52.237 [2024-07-11 02:17:42.624897] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:52.237 [2024-07-11 02:17:42.624924] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:52.497 [2024-07-11 02:17:42.701772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:52.497 [2024-07-11 02:17:42.701822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.497 [2024-07-11 02:17:42.701838] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x250cfb0 00:11:52.497 [2024-07-11 02:17:42.701851] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.497 [2024-07-11 02:17:42.703276] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.497 [2024-07-11 02:17:42.703303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:53.066 02:17:43 blockdev_general.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:53.066 02:17:43 blockdev_general.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:11:53.066 02:17:43 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:53.066 I/O targets: 00:11:53.066 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:11:53.066 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:11:53.066 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:11:53.066 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:11:53.066 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:11:53.066 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:11:53.066 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:11:53.066 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:11:53.066 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:11:53.066 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:11:53.066 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:11:53.066 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:11:53.066 raid0: 131072 blocks of 512 bytes (64 MiB) 00:11:53.066 concat0: 131072 blocks of 512 bytes (64 MiB) 
00:11:53.066 raid1: 65536 blocks of 512 bytes (32 MiB) 00:11:53.066 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:11:53.066 00:11:53.066 00:11:53.066 CUnit - A unit testing framework for C - Version 2.1-3 00:11:53.066 http://cunit.sourceforge.net/ 00:11:53.066 00:11:53.066 00:11:53.066 Suite: bdevio tests on: AIO0 00:11:53.066 Test: blockdev write read block ...passed 00:11:53.066 Test: blockdev write zeroes read block ...passed 00:11:53.066 Test: blockdev write zeroes read no split ...passed 00:11:53.066 Test: blockdev write zeroes read split ...passed 00:11:53.066 Test: blockdev write zeroes read split partial ...passed 00:11:53.066 Test: blockdev reset ...passed 00:11:53.066 Test: blockdev write read 8 blocks ...passed 00:11:53.066 Test: blockdev write read size > 128k ...passed 00:11:53.066 Test: blockdev write read invalid size ...passed 00:11:53.066 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.066 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.066 Test: blockdev write read max offset ...passed 00:11:53.066 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.066 Test: blockdev writev readv 8 blocks ...passed 00:11:53.066 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.066 Test: blockdev writev readv block ...passed 00:11:53.066 Test: blockdev writev readv size > 128k ...passed 00:11:53.066 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.066 Test: blockdev comparev and writev ...passed 00:11:53.066 Test: blockdev nvme passthru rw ...passed 00:11:53.066 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.066 Test: blockdev nvme admin passthru ...passed 00:11:53.066 Test: blockdev copy ...passed 00:11:53.066 Suite: bdevio tests on: raid1 00:11:53.066 Test: blockdev write read block ...passed 00:11:53.066 Test: blockdev write zeroes read block ...passed 00:11:53.066 Test: blockdev write zeroes read no split ...passed 00:11:53.066 Test: blockdev write zeroes read split ...passed 00:11:53.066 Test: blockdev write zeroes read split partial ...passed 00:11:53.066 Test: blockdev reset ...passed 00:11:53.066 Test: blockdev write read 8 blocks ...passed 00:11:53.066 Test: blockdev write read size > 128k ...passed 00:11:53.066 Test: blockdev write read invalid size ...passed 00:11:53.066 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.066 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.066 Test: blockdev write read max offset ...passed 00:11:53.066 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.066 Test: blockdev writev readv 8 blocks ...passed 00:11:53.066 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.066 Test: blockdev writev readv block ...passed 00:11:53.066 Test: blockdev writev readv size > 128k ...passed 00:11:53.066 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.066 Test: blockdev comparev and writev ...passed 00:11:53.066 Test: blockdev nvme passthru rw ...passed 00:11:53.066 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.066 Test: blockdev nvme admin passthru ...passed 00:11:53.066 Test: blockdev copy ...passed 00:11:53.066 Suite: bdevio tests on: concat0 00:11:53.066 Test: blockdev write read block ...passed 00:11:53.066 Test: blockdev write zeroes read block ...passed 00:11:53.066 Test: blockdev write zeroes read no split ...passed 00:11:53.066 Test: blockdev write zeroes read split 
...passed 00:11:53.066 Test: blockdev write zeroes read split partial ...passed 00:11:53.066 Test: blockdev reset ...passed 00:11:53.066 Test: blockdev write read 8 blocks ...passed 00:11:53.066 Test: blockdev write read size > 128k ...passed 00:11:53.066 Test: blockdev write read invalid size ...passed 00:11:53.066 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.066 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.066 Test: blockdev write read max offset ...passed 00:11:53.066 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.066 Test: blockdev writev readv 8 blocks ...passed 00:11:53.066 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.066 Test: blockdev writev readv block ...passed 00:11:53.066 Test: blockdev writev readv size > 128k ...passed 00:11:53.066 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.066 Test: blockdev comparev and writev ...passed 00:11:53.066 Test: blockdev nvme passthru rw ...passed 00:11:53.066 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.066 Test: blockdev nvme admin passthru ...passed 00:11:53.066 Test: blockdev copy ...passed 00:11:53.066 Suite: bdevio tests on: raid0 00:11:53.066 Test: blockdev write read block ...passed 00:11:53.066 Test: blockdev write zeroes read block ...passed 00:11:53.066 Test: blockdev write zeroes read no split ...passed 00:11:53.066 Test: blockdev write zeroes read split ...passed 00:11:53.066 Test: blockdev write zeroes read split partial ...passed 00:11:53.066 Test: blockdev reset ...passed 00:11:53.066 Test: blockdev write read 8 blocks ...passed 00:11:53.066 Test: blockdev write read size > 128k ...passed 00:11:53.066 Test: blockdev write read invalid size ...passed 00:11:53.066 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.066 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.066 Test: blockdev write read max offset ...passed 00:11:53.066 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.066 Test: blockdev writev readv 8 blocks ...passed 00:11:53.066 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.066 Test: blockdev writev readv block ...passed 00:11:53.066 Test: blockdev writev readv size > 128k ...passed 00:11:53.066 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.066 Test: blockdev comparev and writev ...passed 00:11:53.066 Test: blockdev nvme passthru rw ...passed 00:11:53.066 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.066 Test: blockdev nvme admin passthru ...passed 00:11:53.066 Test: blockdev copy ...passed 00:11:53.066 Suite: bdevio tests on: TestPT 00:11:53.066 Test: blockdev write read block ...passed 00:11:53.066 Test: blockdev write zeroes read block ...passed 00:11:53.066 Test: blockdev write zeroes read no split ...passed 00:11:53.066 Test: blockdev write zeroes read split ...passed 00:11:53.066 Test: blockdev write zeroes read split partial ...passed 00:11:53.066 Test: blockdev reset ...passed 00:11:53.066 Test: blockdev write read 8 blocks ...passed 00:11:53.066 Test: blockdev write read size > 128k ...passed 00:11:53.066 Test: blockdev write read invalid size ...passed 00:11:53.066 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.066 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.066 Test: blockdev write read max offset ...passed 00:11:53.066 Test: 
blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.066 Test: blockdev writev readv 8 blocks ...passed 00:11:53.066 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.066 Test: blockdev writev readv block ...passed 00:11:53.066 Test: blockdev writev readv size > 128k ...passed 00:11:53.327 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.327 Test: blockdev comparev and writev ...passed 00:11:53.327 Test: blockdev nvme passthru rw ...passed 00:11:53.327 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.327 Test: blockdev nvme admin passthru ...passed 00:11:53.327 Test: blockdev copy ...passed 00:11:53.327 Suite: bdevio tests on: Malloc2p7 00:11:53.327 Test: blockdev write read block ...passed 00:11:53.327 Test: blockdev write zeroes read block ...passed 00:11:53.327 Test: blockdev write zeroes read no split ...passed 00:11:53.327 Test: blockdev write zeroes read split ...passed 00:11:53.327 Test: blockdev write zeroes read split partial ...passed 00:11:53.327 Test: blockdev reset ...passed 00:11:53.327 Test: blockdev write read 8 blocks ...passed 00:11:53.327 Test: blockdev write read size > 128k ...passed 00:11:53.327 Test: blockdev write read invalid size ...passed 00:11:53.327 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.327 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.327 Test: blockdev write read max offset ...passed 00:11:53.327 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.327 Test: blockdev writev readv 8 blocks ...passed 00:11:53.327 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.327 Test: blockdev writev readv block ...passed 00:11:53.327 Test: blockdev writev readv size > 128k ...passed 00:11:53.327 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.327 Test: blockdev comparev and writev ...passed 00:11:53.327 Test: blockdev nvme passthru rw ...passed 00:11:53.327 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.327 Test: blockdev nvme admin passthru ...passed 00:11:53.327 Test: blockdev copy ...passed 00:11:53.327 Suite: bdevio tests on: Malloc2p6 00:11:53.327 Test: blockdev write read block ...passed 00:11:53.327 Test: blockdev write zeroes read block ...passed 00:11:53.327 Test: blockdev write zeroes read no split ...passed 00:11:53.327 Test: blockdev write zeroes read split ...passed 00:11:53.327 Test: blockdev write zeroes read split partial ...passed 00:11:53.327 Test: blockdev reset ...passed 00:11:53.327 Test: blockdev write read 8 blocks ...passed 00:11:53.327 Test: blockdev write read size > 128k ...passed 00:11:53.327 Test: blockdev write read invalid size ...passed 00:11:53.327 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.327 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.327 Test: blockdev write read max offset ...passed 00:11:53.327 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.327 Test: blockdev writev readv 8 blocks ...passed 00:11:53.327 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.327 Test: blockdev writev readv block ...passed 00:11:53.327 Test: blockdev writev readv size > 128k ...passed 00:11:53.327 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.327 Test: blockdev comparev and writev ...passed 00:11:53.327 Test: blockdev nvme passthru rw ...passed 00:11:53.327 Test: blockdev nvme passthru vendor 
specific ...passed 00:11:53.327 Test: blockdev nvme admin passthru ...passed 00:11:53.327 Test: blockdev copy ...passed 00:11:53.327 Suite: bdevio tests on: Malloc2p5 00:11:53.327 Test: blockdev write read block ...passed 00:11:53.327 Test: blockdev write zeroes read block ...passed 00:11:53.327 Test: blockdev write zeroes read no split ...passed 00:11:53.327 Test: blockdev write zeroes read split ...passed 00:11:53.327 Test: blockdev write zeroes read split partial ...passed 00:11:53.327 Test: blockdev reset ...passed 00:11:53.327 Test: blockdev write read 8 blocks ...passed 00:11:53.327 Test: blockdev write read size > 128k ...passed 00:11:53.327 Test: blockdev write read invalid size ...passed 00:11:53.327 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.327 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.327 Test: blockdev write read max offset ...passed 00:11:53.327 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.327 Test: blockdev writev readv 8 blocks ...passed 00:11:53.327 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.327 Test: blockdev writev readv block ...passed 00:11:53.327 Test: blockdev writev readv size > 128k ...passed 00:11:53.327 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.327 Test: blockdev comparev and writev ...passed 00:11:53.327 Test: blockdev nvme passthru rw ...passed 00:11:53.327 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.327 Test: blockdev nvme admin passthru ...passed 00:11:53.327 Test: blockdev copy ...passed 00:11:53.327 Suite: bdevio tests on: Malloc2p4 00:11:53.327 Test: blockdev write read block ...passed 00:11:53.327 Test: blockdev write zeroes read block ...passed 00:11:53.327 Test: blockdev write zeroes read no split ...passed 00:11:53.327 Test: blockdev write zeroes read split ...passed 00:11:53.327 Test: blockdev write zeroes read split partial ...passed 00:11:53.327 Test: blockdev reset ...passed 00:11:53.327 Test: blockdev write read 8 blocks ...passed 00:11:53.327 Test: blockdev write read size > 128k ...passed 00:11:53.327 Test: blockdev write read invalid size ...passed 00:11:53.327 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.327 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.327 Test: blockdev write read max offset ...passed 00:11:53.327 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.327 Test: blockdev writev readv 8 blocks ...passed 00:11:53.327 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.327 Test: blockdev writev readv block ...passed 00:11:53.327 Test: blockdev writev readv size > 128k ...passed 00:11:53.327 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.327 Test: blockdev comparev and writev ...passed 00:11:53.327 Test: blockdev nvme passthru rw ...passed 00:11:53.327 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.327 Test: blockdev nvme admin passthru ...passed 00:11:53.327 Test: blockdev copy ...passed 00:11:53.327 Suite: bdevio tests on: Malloc2p3 00:11:53.327 Test: blockdev write read block ...passed 00:11:53.327 Test: blockdev write zeroes read block ...passed 00:11:53.327 Test: blockdev write zeroes read no split ...passed 00:11:53.327 Test: blockdev write zeroes read split ...passed 00:11:53.327 Test: blockdev write zeroes read split partial ...passed 00:11:53.327 Test: blockdev reset ...passed 00:11:53.327 Test: 
blockdev write read 8 blocks ...passed 00:11:53.327 Test: blockdev write read size > 128k ...passed 00:11:53.327 Test: blockdev write read invalid size ...passed 00:11:53.327 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.327 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.327 Test: blockdev write read max offset ...passed 00:11:53.327 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.327 Test: blockdev writev readv 8 blocks ...passed 00:11:53.327 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.327 Test: blockdev writev readv block ...passed 00:11:53.327 Test: blockdev writev readv size > 128k ...passed 00:11:53.327 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.327 Test: blockdev comparev and writev ...passed 00:11:53.327 Test: blockdev nvme passthru rw ...passed 00:11:53.327 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.327 Test: blockdev nvme admin passthru ...passed 00:11:53.327 Test: blockdev copy ...passed 00:11:53.327 Suite: bdevio tests on: Malloc2p2 00:11:53.327 Test: blockdev write read block ...passed 00:11:53.327 Test: blockdev write zeroes read block ...passed 00:11:53.327 Test: blockdev write zeroes read no split ...passed 00:11:53.327 Test: blockdev write zeroes read split ...passed 00:11:53.327 Test: blockdev write zeroes read split partial ...passed 00:11:53.327 Test: blockdev reset ...passed 00:11:53.327 Test: blockdev write read 8 blocks ...passed 00:11:53.327 Test: blockdev write read size > 128k ...passed 00:11:53.327 Test: blockdev write read invalid size ...passed 00:11:53.327 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.327 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.327 Test: blockdev write read max offset ...passed 00:11:53.327 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.327 Test: blockdev writev readv 8 blocks ...passed 00:11:53.327 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.327 Test: blockdev writev readv block ...passed 00:11:53.327 Test: blockdev writev readv size > 128k ...passed 00:11:53.327 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.327 Test: blockdev comparev and writev ...passed 00:11:53.327 Test: blockdev nvme passthru rw ...passed 00:11:53.327 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.327 Test: blockdev nvme admin passthru ...passed 00:11:53.327 Test: blockdev copy ...passed 00:11:53.327 Suite: bdevio tests on: Malloc2p1 00:11:53.327 Test: blockdev write read block ...passed 00:11:53.327 Test: blockdev write zeroes read block ...passed 00:11:53.327 Test: blockdev write zeroes read no split ...passed 00:11:53.327 Test: blockdev write zeroes read split ...passed 00:11:53.327 Test: blockdev write zeroes read split partial ...passed 00:11:53.327 Test: blockdev reset ...passed 00:11:53.327 Test: blockdev write read 8 blocks ...passed 00:11:53.327 Test: blockdev write read size > 128k ...passed 00:11:53.327 Test: blockdev write read invalid size ...passed 00:11:53.327 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.327 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.327 Test: blockdev write read max offset ...passed 00:11:53.327 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.327 Test: blockdev writev readv 8 blocks ...passed 00:11:53.327 
Test: blockdev writev readv 30 x 1block ...passed 00:11:53.327 Test: blockdev writev readv block ...passed 00:11:53.327 Test: blockdev writev readv size > 128k ...passed 00:11:53.327 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.327 Test: blockdev comparev and writev ...passed 00:11:53.327 Test: blockdev nvme passthru rw ...passed 00:11:53.327 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.327 Test: blockdev nvme admin passthru ...passed 00:11:53.327 Test: blockdev copy ...passed 00:11:53.327 Suite: bdevio tests on: Malloc2p0 00:11:53.327 Test: blockdev write read block ...passed 00:11:53.327 Test: blockdev write zeroes read block ...passed 00:11:53.327 Test: blockdev write zeroes read no split ...passed 00:11:53.327 Test: blockdev write zeroes read split ...passed 00:11:53.327 Test: blockdev write zeroes read split partial ...passed 00:11:53.327 Test: blockdev reset ...passed 00:11:53.327 Test: blockdev write read 8 blocks ...passed 00:11:53.327 Test: blockdev write read size > 128k ...passed 00:11:53.327 Test: blockdev write read invalid size ...passed 00:11:53.327 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.327 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.327 Test: blockdev write read max offset ...passed 00:11:53.327 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.327 Test: blockdev writev readv 8 blocks ...passed 00:11:53.327 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.327 Test: blockdev writev readv block ...passed 00:11:53.327 Test: blockdev writev readv size > 128k ...passed 00:11:53.327 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.327 Test: blockdev comparev and writev ...passed 00:11:53.327 Test: blockdev nvme passthru rw ...passed 00:11:53.327 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.327 Test: blockdev nvme admin passthru ...passed 00:11:53.327 Test: blockdev copy ...passed 00:11:53.327 Suite: bdevio tests on: Malloc1p1 00:11:53.327 Test: blockdev write read block ...passed 00:11:53.327 Test: blockdev write zeroes read block ...passed 00:11:53.327 Test: blockdev write zeroes read no split ...passed 00:11:53.327 Test: blockdev write zeroes read split ...passed 00:11:53.327 Test: blockdev write zeroes read split partial ...passed 00:11:53.327 Test: blockdev reset ...passed 00:11:53.327 Test: blockdev write read 8 blocks ...passed 00:11:53.327 Test: blockdev write read size > 128k ...passed 00:11:53.327 Test: blockdev write read invalid size ...passed 00:11:53.327 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.327 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.327 Test: blockdev write read max offset ...passed 00:11:53.327 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.327 Test: blockdev writev readv 8 blocks ...passed 00:11:53.327 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.327 Test: blockdev writev readv block ...passed 00:11:53.327 Test: blockdev writev readv size > 128k ...passed 00:11:53.327 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.327 Test: blockdev comparev and writev ...passed 00:11:53.327 Test: blockdev nvme passthru rw ...passed 00:11:53.327 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.327 Test: blockdev nvme admin passthru ...passed 00:11:53.327 Test: blockdev copy ...passed 00:11:53.327 Suite: 
bdevio tests on: Malloc1p0 00:11:53.328 Test: blockdev write read block ...passed 00:11:53.328 Test: blockdev write zeroes read block ...passed 00:11:53.328 Test: blockdev write zeroes read no split ...passed 00:11:53.328 Test: blockdev write zeroes read split ...passed 00:11:53.328 Test: blockdev write zeroes read split partial ...passed 00:11:53.328 Test: blockdev reset ...passed 00:11:53.328 Test: blockdev write read 8 blocks ...passed 00:11:53.328 Test: blockdev write read size > 128k ...passed 00:11:53.328 Test: blockdev write read invalid size ...passed 00:11:53.328 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.328 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.328 Test: blockdev write read max offset ...passed 00:11:53.328 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.328 Test: blockdev writev readv 8 blocks ...passed 00:11:53.328 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.328 Test: blockdev writev readv block ...passed 00:11:53.328 Test: blockdev writev readv size > 128k ...passed 00:11:53.328 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.328 Test: blockdev comparev and writev ...passed 00:11:53.328 Test: blockdev nvme passthru rw ...passed 00:11:53.328 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.328 Test: blockdev nvme admin passthru ...passed 00:11:53.328 Test: blockdev copy ...passed 00:11:53.328 Suite: bdevio tests on: Malloc0 00:11:53.328 Test: blockdev write read block ...passed 00:11:53.328 Test: blockdev write zeroes read block ...passed 00:11:53.328 Test: blockdev write zeroes read no split ...passed 00:11:53.328 Test: blockdev write zeroes read split ...passed 00:11:53.328 Test: blockdev write zeroes read split partial ...passed 00:11:53.328 Test: blockdev reset ...passed 00:11:53.328 Test: blockdev write read 8 blocks ...passed 00:11:53.328 Test: blockdev write read size > 128k ...passed 00:11:53.328 Test: blockdev write read invalid size ...passed 00:11:53.328 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.328 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.328 Test: blockdev write read max offset ...passed 00:11:53.328 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.328 Test: blockdev writev readv 8 blocks ...passed 00:11:53.328 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.328 Test: blockdev writev readv block ...passed 00:11:53.328 Test: blockdev writev readv size > 128k ...passed 00:11:53.328 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.328 Test: blockdev comparev and writev ...passed 00:11:53.328 Test: blockdev nvme passthru rw ...passed 00:11:53.328 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.328 Test: blockdev nvme admin passthru ...passed 00:11:53.328 Test: blockdev copy ...passed 00:11:53.328 00:11:53.328 Run Summary: Type Total Ran Passed Failed Inactive 00:11:53.328 suites 16 16 n/a 0 0 00:11:53.328 tests 368 368 368 0 0 00:11:53.328 asserts 2224 2224 2224 0 n/a 00:11:53.328 00:11:53.328 Elapsed time = 0.653 seconds 00:11:53.328 0 00:11:53.328 02:17:43 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 1872774 00:11:53.328 02:17:43 blockdev_general.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 1872774 ']' 00:11:53.328 02:17:43 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 1872774 
00:11:53.328 02:17:43 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:11:53.328 02:17:43 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:53.328 02:17:43 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1872774 00:11:53.328 02:17:43 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:53.328 02:17:43 blockdev_general.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:53.328 02:17:43 blockdev_general.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1872774' 00:11:53.328 killing process with pid 1872774 00:11:53.328 02:17:43 blockdev_general.bdev_bounds -- common/autotest_common.sh@967 -- # kill 1872774 00:11:53.328 02:17:43 blockdev_general.bdev_bounds -- common/autotest_common.sh@972 -- # wait 1872774 00:11:53.586 02:17:43 blockdev_general.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:11:53.586 00:11:53.586 real 0m1.766s 00:11:53.586 user 0m4.500s 00:11:53.586 sys 0m0.505s 00:11:53.586 02:17:43 blockdev_general.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:53.586 02:17:43 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:53.586 ************************************ 00:11:53.586 END TEST bdev_bounds 00:11:53.586 ************************************ 00:11:53.845 02:17:44 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:11:53.845 02:17:44 blockdev_general -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:53.845 02:17:44 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:53.845 02:17:44 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:53.845 02:17:44 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:53.845 ************************************ 00:11:53.845 START TEST bdev_nbd 00:11:53.845 ************************************ 00:11:53.845 02:17:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:53.845 02:17:44 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:11:53.845 02:17:44 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:11:53.845 02:17:44 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:53.845 02:17:44 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:11:53.845 02:17:44 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:53.845 02:17:44 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:11:53.845 02:17:44 blockdev_general.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=16 00:11:53.845 02:17:44 blockdev_general.bdev_nbd -- 
bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:11:53.845 02:17:44 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:53.845 02:17:44 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:11:53.845 02:17:44 blockdev_general.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=16 00:11:53.845 02:17:44 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:53.845 02:17:44 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:11:53.845 02:17:44 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:53.845 02:17:44 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:11:53.845 02:17:44 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=1873039 00:11:53.845 02:17:44 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:53.845 02:17:44 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:11:53.845 02:17:44 blockdev_general.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 1873039 /var/tmp/spdk-nbd.sock 00:11:53.845 02:17:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 1873039 ']' 00:11:53.845 02:17:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:53.845 02:17:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:53.845 02:17:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:53.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:53.845 02:17:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:53.845 02:17:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:53.845 [2024-07-11 02:17:44.134581] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
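Note: for the NBD round the bdevs are hosted by bdev_svc, a bare SPDK app that only loads the JSON config and serves RPCs; it is started on its own socket (-r /var/tmp/spdk-nbd.sock) so the NBD RPCs that follow do not hit a default-socket target. The launch as traced above, with the workspace paths shortened:
  sudo test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json test/bdev/bdev.json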
00:11:53.845 [2024-07-11 02:17:44.134652] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.105 [2024-07-11 02:17:44.275232] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.105 [2024-07-11 02:17:44.327547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.105 [2024-07-11 02:17:44.478581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:54.105 [2024-07-11 02:17:44.478645] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:54.105 [2024-07-11 02:17:44.478660] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:54.105 [2024-07-11 02:17:44.486589] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:54.105 [2024-07-11 02:17:44.486617] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:54.105 [2024-07-11 02:17:44.494601] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:54.105 [2024-07-11 02:17:44.494624] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:54.364 [2024-07-11 02:17:44.571390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:54.364 [2024-07-11 02:17:44.571441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.364 [2024-07-11 02:17:44.571457] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26ac270 00:11:54.364 [2024-07-11 02:17:44.571470] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.364 [2024-07-11 02:17:44.572864] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.364 [2024-07-11 02:17:44.572894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:54.623 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:54.623 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:11:54.623 02:17:45 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:11:54.623 02:17:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:54.623 02:17:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:54.623 02:17:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:54.623 02:17:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:11:54.623 02:17:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:54.623 02:17:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 
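Note: each of the 16 bdevs is now exported through the kernel nbd module (checked via /sys/module/nbd above) and smoke-tested in turn. The repeating pattern below is: nbd_start_disk over RPC, waitfornbd polling /proc/partitions until the device node appears, then a single 4 KiB direct-I/O dd from the device as a read check. A standalone sketch for one device, assuming /dev/nbd0 is free:
  sudo scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  grep -q -w nbd0 /proc/partitions && \
    sudo dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct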
'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:54.623 02:17:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:54.623 02:17:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:11:54.623 02:17:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:54.623 02:17:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:54.623 02:17:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:54.623 02:17:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:11:55.192 02:17:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:55.192 02:17:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:55.192 02:17:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:55.192 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:55.192 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:55.192 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:55.192 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:55.192 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:55.192 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:55.192 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:55.192 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:55.192 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.192 1+0 records in 00:11:55.192 1+0 records out 00:11:55.192 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027791 s, 14.7 MB/s 00:11:55.192 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:55.192 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:55.192 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:55.192 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:55.192 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:55.192 02:17:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:55.192 02:17:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:55.192 02:17:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:11:55.451 02:17:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:55.451 02:17:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:55.451 02:17:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:55.451 02:17:45 
blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:11:55.451 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:55.451 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:55.451 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:55.451 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:11:55.451 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:55.451 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:55.451 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:55.451 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.451 1+0 records in 00:11:55.451 1+0 records out 00:11:55.451 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294985 s, 13.9 MB/s 00:11:55.451 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:55.451 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:55.451 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:55.451 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:55.451 02:17:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:55.451 02:17:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:55.451 02:17:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:55.451 02:17:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:11:55.710 02:17:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:55.710 02:17:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:55.710 02:17:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:55.710 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:11:55.710 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:55.710 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:55.711 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:55.711 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:11:55.711 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:55.711 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:55.711 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:55.711 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.711 1+0 records in 00:11:55.711 1+0 records out 00:11:55.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281394 s, 14.6 MB/s 00:11:55.711 
02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:55.711 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:55.711 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:55.711 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:55.711 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:55.711 02:17:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:55.711 02:17:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:55.970 02:17:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:11:56.536 02:17:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:56.536 02:17:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:56.536 02:17:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:56.536 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:11:56.536 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:56.536 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:56.536 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:56.536 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:11:56.536 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:56.536 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:56.536 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:56.536 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.536 1+0 records in 00:11:56.536 1+0 records out 00:11:56.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344105 s, 11.9 MB/s 00:11:56.536 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:56.536 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:56.536 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:56.536 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:56.536 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:56.536 02:17:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:56.536 02:17:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:56.536 02:17:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:11:56.536 02:17:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:56.536 02:17:46 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:56.536 02:17:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:56.796 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:11:56.796 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:56.796 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:56.796 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:56.796 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:11:56.796 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:56.796 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:56.796 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:56.796 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.796 1+0 records in 00:11:56.796 1+0 records out 00:11:56.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330683 s, 12.4 MB/s 00:11:56.796 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:56.796 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:56.796 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:56.796 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:56.796 02:17:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:56.796 02:17:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:56.796 02:17:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:56.796 02:17:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:11:57.055 02:17:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:57.055 02:17:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:57.055 02:17:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:57.055 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:11:57.055 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:57.055 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:57.055 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:57.055 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:11:57.055 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:57.055 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:57.055 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:57.055 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:11:57.055 1+0 records in 00:11:57.055 1+0 records out 00:11:57.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455047 s, 9.0 MB/s 00:11:57.055 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:57.055 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:57.055 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:57.055 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:57.055 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:57.055 02:17:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:57.055 02:17:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:57.055 02:17:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:11:57.314 02:17:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:11:57.314 02:17:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:11:57.314 02:17:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:11:57.314 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:11:57.314 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:57.314 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:57.314 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:57.314 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:11:57.314 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:57.314 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:57.314 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:57.314 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.314 1+0 records in 00:11:57.314 1+0 records out 00:11:57.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397596 s, 10.3 MB/s 00:11:57.314 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:57.314 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:57.314 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:57.314 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:57.314 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:57.314 02:17:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:57.314 02:17:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:57.314 02:17:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 
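
[Editor's note] The traces above and below all exercise the same `waitfornbd` helper from `common/autotest_common.sh` after each `nbd_start_disk` RPC: poll `/proc/partitions` until the kernel registers the device (up to 20 tries, @869-871), then read one 4 KiB block with O_DIRECT and confirm a non-empty file was written (@882-887). The sketch below is reconstructed from the trace only and is not the verbatim SPDK helper; the retry delay, the failure path, and the `/tmp/nbdtest` scratch path are assumptions, since the log only shows iterations that succeed on the first pass.

    # Sketch of waitfornbd as exercised by the traces (reconstructed; the
    # sleep back-off and the final failure return are assumptions -- every
    # device in this log passes on the first iteration).
    waitfornbd() {
        local nbd_name=$1
        local i
        # Wait for the device to appear in /proc/partitions (@869-871).
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                break
            fi
            sleep 0.1    # assumed back-off; not visible in this log
        done
        # Verify the device is readable: copy one 4 KiB block with O_DIRECT
        # and check that a non-empty file landed on disk (@882-887).
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || continue
            local size
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            if [ "$size" != 0 ]; then
                return 0
            fi
        done
        return 1    # assumed failure path
    }

Each `nbd_start_disk` call in this section runs that check before the `(( i++ ))` loop advances to the next bdev, which is why every device produces the same `1+0 records in / 1+0 records out` pair before the following RPC appears.
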
00:11:57.576 02:17:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:11:57.576 02:17:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:11:57.576 02:17:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:11:57.576 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:11:57.576 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:57.576 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:57.576 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:57.576 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:11:57.576 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:57.576 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:57.576 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:57.576 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.576 1+0 records in 00:11:57.576 1+0 records out 00:11:57.576 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498393 s, 8.2 MB/s 00:11:57.576 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:57.576 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:57.576 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:57.576 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:57.576 02:17:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:57.576 02:17:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:57.576 02:17:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:57.576 02:17:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:11:57.938 02:17:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:11:57.938 02:17:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:11:57.938 02:17:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:11:57.938 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:11:57.938 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:57.938 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:57.938 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:57.938 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:11:57.938 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:57.938 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:57.938 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:57.938 02:17:48 
blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.938 1+0 records in 00:11:57.938 1+0 records out 00:11:57.938 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053163 s, 7.7 MB/s 00:11:57.938 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:57.938 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:57.938 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:57.938 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:57.938 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:57.938 02:17:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:57.938 02:17:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:57.938 02:17:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:11:58.198 02:17:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:11:58.198 02:17:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:11:58.198 02:17:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:11:58.198 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:11:58.198 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:58.198 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:58.198 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:58.198 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:11:58.198 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:58.198 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:58.198 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:58.198 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:58.198 1+0 records in 00:11:58.198 1+0 records out 00:11:58.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000601387 s, 6.8 MB/s 00:11:58.198 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:58.198 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:58.198 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:58.198 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:58.198 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:58.198 02:17:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:58.198 02:17:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:58.198 02:17:48 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:11:58.457 02:17:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:11:58.457 02:17:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:11:58.457 02:17:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:11:58.457 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:11:58.457 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:58.457 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:58.457 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:58.457 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:11:58.457 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:58.457 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:58.457 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:58.458 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:58.458 1+0 records in 00:11:58.458 1+0 records out 00:11:58.458 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461037 s, 8.9 MB/s 00:11:58.458 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:58.458 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:58.458 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:58.458 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:58.458 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:58.458 02:17:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:58.458 02:17:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:58.458 02:17:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:11:58.717 02:17:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:11:58.717 02:17:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:11:58.717 02:17:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:11:58.718 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:11:58.718 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:58.718 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:58.718 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:58.718 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:11:58.718 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:58.718 02:17:48 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:58.718 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:58.718 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:58.718 1+0 records in 00:11:58.718 1+0 records out 00:11:58.718 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474225 s, 8.6 MB/s 00:11:58.718 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:58.718 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:58.718 02:17:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:58.718 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:58.718 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:58.718 02:17:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:58.718 02:17:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:58.718 02:17:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:11:58.977 02:17:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:11:58.977 02:17:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:11:58.977 02:17:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:11:58.977 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:11:58.977 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:58.977 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:58.977 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:58.977 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:11:58.977 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:58.977 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:58.977 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:58.977 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:58.977 1+0 records in 00:11:58.977 1+0 records out 00:11:58.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000660107 s, 6.2 MB/s 00:11:58.977 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:58.977 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:58.977 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:58.977 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:58.977 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:58.977 02:17:49 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:58.977 02:17:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:58.977 02:17:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:11:59.237 02:17:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:11:59.237 02:17:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:11:59.237 02:17:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:11:59.237 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:11:59.237 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:59.237 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:59.237 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:59.237 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:11:59.237 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:59.237 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:59.237 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:59.237 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:59.237 1+0 records in 00:11:59.237 1+0 records out 00:11:59.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000814636 s, 5.0 MB/s 00:11:59.237 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:59.237 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:59.237 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:59.237 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:59.237 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:59.237 02:17:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:59.237 02:17:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:59.237 02:17:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:11:59.496 02:17:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:11:59.496 02:17:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:11:59.496 02:17:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:11:59.496 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:11:59.496 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:59.496 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:59.496 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:59.496 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w 
nbd14 /proc/partitions 00:11:59.496 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:59.496 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:59.496 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:59.496 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:59.496 1+0 records in 00:11:59.496 1+0 records out 00:11:59.496 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000609304 s, 6.7 MB/s 00:11:59.497 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:59.497 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:59.497 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:59.497 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:59.497 02:17:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:59.497 02:17:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:59.497 02:17:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:59.497 02:17:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:11:59.756 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:11:59.756 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:11:59.756 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:11:59.756 02:17:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:11:59.756 02:17:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:59.756 02:17:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:59.756 02:17:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:59.756 02:17:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:11:59.756 02:17:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:59.756 02:17:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:59.756 02:17:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:59.756 02:17:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:59.756 1+0 records in 00:11:59.756 1+0 records out 00:11:59.756 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000791594 s, 5.2 MB/s 00:11:59.756 02:17:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:59.756 02:17:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:59.756 02:17:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:59.756 02:17:50 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:59.756 02:17:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:59.756 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:59.756 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:59.756 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:00.015 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:00.015 { 00:12:00.016 "nbd_device": "/dev/nbd0", 00:12:00.016 "bdev_name": "Malloc0" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd1", 00:12:00.016 "bdev_name": "Malloc1p0" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd2", 00:12:00.016 "bdev_name": "Malloc1p1" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd3", 00:12:00.016 "bdev_name": "Malloc2p0" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd4", 00:12:00.016 "bdev_name": "Malloc2p1" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd5", 00:12:00.016 "bdev_name": "Malloc2p2" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd6", 00:12:00.016 "bdev_name": "Malloc2p3" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd7", 00:12:00.016 "bdev_name": "Malloc2p4" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd8", 00:12:00.016 "bdev_name": "Malloc2p5" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd9", 00:12:00.016 "bdev_name": "Malloc2p6" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd10", 00:12:00.016 "bdev_name": "Malloc2p7" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd11", 00:12:00.016 "bdev_name": "TestPT" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd12", 00:12:00.016 "bdev_name": "raid0" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd13", 00:12:00.016 "bdev_name": "concat0" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd14", 00:12:00.016 "bdev_name": "raid1" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd15", 00:12:00.016 "bdev_name": "AIO0" 00:12:00.016 } 00:12:00.016 ]' 00:12:00.016 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:00.016 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd0", 00:12:00.016 "bdev_name": "Malloc0" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd1", 00:12:00.016 "bdev_name": "Malloc1p0" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd2", 00:12:00.016 "bdev_name": "Malloc1p1" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd3", 00:12:00.016 "bdev_name": "Malloc2p0" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd4", 00:12:00.016 "bdev_name": "Malloc2p1" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd5", 00:12:00.016 "bdev_name": "Malloc2p2" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd6", 00:12:00.016 "bdev_name": "Malloc2p3" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd7", 00:12:00.016 "bdev_name": "Malloc2p4" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd8", 00:12:00.016 "bdev_name": "Malloc2p5" 
00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd9", 00:12:00.016 "bdev_name": "Malloc2p6" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd10", 00:12:00.016 "bdev_name": "Malloc2p7" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd11", 00:12:00.016 "bdev_name": "TestPT" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd12", 00:12:00.016 "bdev_name": "raid0" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd13", 00:12:00.016 "bdev_name": "concat0" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd14", 00:12:00.016 "bdev_name": "raid1" 00:12:00.016 }, 00:12:00.016 { 00:12:00.016 "nbd_device": "/dev/nbd15", 00:12:00.016 "bdev_name": "AIO0" 00:12:00.016 } 00:12:00.016 ]' 00:12:00.016 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:00.016 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:12:00.016 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:00.016 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:12:00.016 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:00.016 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:00.016 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:00.016 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:00.276 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:00.276 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:00.276 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:00.276 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:00.276 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.276 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:00.276 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:00.276 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:00.276 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:00.276 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:00.535 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:00.535 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:00.535 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:00.535 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:00.535 02:17:50 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.535 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:00.535 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:00.535 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:00.535 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:00.535 02:17:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:00.794 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:01.053 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:01.053 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:01.053 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:01.053 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.053 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:01.053 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:01.053 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:01.053 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.053 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:01.312 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:01.312 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:01.312 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:01.312 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:01.312 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.312 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:01.312 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:01.312 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:01.312 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.312 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:01.312 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:01.312 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:01.312 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:01.312 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:01.312 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.312 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:01.312 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:01.312 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:01.312 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 
-- # for i in "${nbd_list[@]}" 00:12:01.312 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:01.571 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:01.571 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:01.571 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:01.571 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:01.571 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.571 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:01.571 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:01.571 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:01.571 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.571 02:17:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:01.831 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:01.831 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:01.831 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:01.831 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:01.831 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.831 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:01.831 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:01.831 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:01.831 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.831 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:02.400 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:02.400 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:02.400 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:02.400 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.400 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.400 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:02.400 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:02.401 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.401 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.401 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:02.401 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:02.401 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- 
# waitfornbd_exit nbd8 00:12:02.401 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:02.401 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.401 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.401 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:02.401 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:02.401 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.401 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.401 02:17:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:02.660 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:02.660 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:02.660 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:02.660 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.660 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.660 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:02.660 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:02.660 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.660 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.660 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:02.919 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:02.919 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:02.919 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:02.919 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.919 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.919 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:03.179 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:03.179 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.179 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.179 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:03.438 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:03.438 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:03.438 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:03.438 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.438 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.438 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd11 /proc/partitions 00:12:03.438 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:03.438 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.438 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.438 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:03.438 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:03.438 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:03.438 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:03.438 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.438 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.438 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:03.438 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:03.438 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.438 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.438 02:17:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:03.697 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:03.697 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:03.697 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:03.697 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.697 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.697 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:03.697 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:03.697 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.697 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.697 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:03.957 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:03.957 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:03.957 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:03.957 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.957 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.957 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:03.957 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:03.957 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.957 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.957 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:04.216 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:04.216 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:04.216 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:04.216 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:04.216 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:04.216 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:04.216 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:04.216 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:04.216 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:04.216 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:04.216 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:04.477 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:04.477 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:04.477 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:04.477 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:04.477 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:04.477 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:04.477 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:04.477 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:04.477 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:04.477 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:04.477 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:04.477 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:04.477 02:17:54 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:04.477 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:04.477 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:04.477 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:04.477 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' 
'/dev/nbd8' '/dev/nbd9') 00:12:04.477 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:04.477 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:04.477 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:04.477 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:04.477 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:04.477 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:04.477 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:04.477 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:04.477 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:04.477 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:04.477 02:17:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:04.740 /dev/nbd0 00:12:04.740 02:17:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:04.740 02:17:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:04.740 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:04.740 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:04.740 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:04.740 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:04.740 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:04.740 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:04.740 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:04.740 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:04.740 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:04.740 1+0 records in 00:12:04.740 1+0 records out 00:12:04.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257923 s, 15.9 MB/s 00:12:04.740 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:04.740 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:04.740 02:17:55 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:04.740 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:04.740 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:04.740 02:17:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:04.740 02:17:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:04.740 02:17:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:12:04.999 /dev/nbd1 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.260 1+0 records in 00:12:05.260 1+0 records out 00:12:05.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305371 s, 13.4 MB/s 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:12:05.260 /dev/nbd10 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 
)) 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.260 1+0 records in 00:12:05.260 1+0 records out 00:12:05.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318712 s, 12.9 MB/s 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:05.260 02:17:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:05.521 02:17:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:12:05.521 /dev/nbd11 00:12:05.521 02:17:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:05.521 02:17:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:05.521 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:12:05.521 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:05.521 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:05.521 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:05.521 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:12:05.521 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:05.521 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:05.521 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:05.521 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.521 1+0 records in 00:12:05.521 1+0 records out 00:12:05.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354198 s, 11.6 MB/s 00:12:05.521 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:05.521 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:05.521 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm 
-f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:05.521 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:05.521 02:17:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:05.521 02:17:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:05.521 02:17:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:05.521 02:17:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:12:05.781 /dev/nbd12 00:12:05.781 02:17:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:05.781 02:17:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:05.781 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:12:05.781 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:05.781 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:05.781 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:05.781 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:12:05.781 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:05.781 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:05.781 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:05.781 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.781 1+0 records in 00:12:05.781 1+0 records out 00:12:05.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004224 s, 9.7 MB/s 00:12:05.781 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:06.041 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:06.041 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:06.041 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:06.041 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:06.041 02:17:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:06.041 02:17:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:06.041 02:17:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:12:06.041 /dev/nbd13 00:12:06.300 02:17:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:06.300 02:17:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:06.300 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:12:06.300 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:06.300 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:06.300 02:17:56 
blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:06.300 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:12:06.300 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:06.300 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:06.300 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:06.300 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:06.300 1+0 records in 00:12:06.300 1+0 records out 00:12:06.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445558 s, 9.2 MB/s 00:12:06.301 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:06.301 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:06.301 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:06.301 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:06.301 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:06.301 02:17:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:06.301 02:17:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:06.301 02:17:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:12:06.560 /dev/nbd14 00:12:06.560 02:17:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:12:06.560 02:17:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:12:06.560 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:12:06.560 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:06.560 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:06.560 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:06.560 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:12:06.560 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:06.560 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:06.560 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:06.560 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:06.560 1+0 records in 00:12:06.560 1+0 records out 00:12:06.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436111 s, 9.4 MB/s 00:12:06.560 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:06.560 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:06.560 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:06.560 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:06.560 02:17:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:06.560 02:17:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:06.560 02:17:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:06.560 02:17:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:12:06.819 /dev/nbd15 00:12:06.819 02:17:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:12:06.819 02:17:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:12:06.819 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:12:06.819 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:06.819 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:06.819 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:06.819 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:12:06.819 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:06.819 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:06.819 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:06.819 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:06.819 1+0 records in 00:12:06.819 1+0 records out 00:12:06.819 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474558 s, 8.6 MB/s 00:12:06.819 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:06.819 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:06.819 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:06.819 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:06.819 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:06.819 02:17:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:06.819 02:17:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:06.819 02:17:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:12:07.079 /dev/nbd2 00:12:07.079 02:17:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:12:07.079 02:17:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:12:07.079 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:12:07.079 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:07.079 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:07.079 02:17:57 
blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:07.079 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:12:07.079 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:07.079 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:07.079 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:07.079 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:07.079 1+0 records in 00:12:07.079 1+0 records out 00:12:07.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598835 s, 6.8 MB/s 00:12:07.079 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:07.079 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:07.079 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:07.079 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:07.079 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:07.079 02:17:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:07.079 02:17:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:07.079 02:17:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:12:07.339 /dev/nbd3 00:12:07.339 02:17:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:12:07.339 02:17:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:12:07.339 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:12:07.339 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:07.339 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:07.339 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:07.339 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:12:07.339 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:07.339 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:07.339 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:07.339 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:07.339 1+0 records in 00:12:07.339 1+0 records out 00:12:07.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000652718 s, 6.3 MB/s 00:12:07.339 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:07.339 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:07.339 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:07.339 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:07.339 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:07.339 02:17:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:07.339 02:17:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:07.339 02:17:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:12:07.598 /dev/nbd4 00:12:07.598 02:17:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:12:07.598 02:17:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:12:07.598 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:12:07.598 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:07.598 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:07.598 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:07.598 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:12:07.598 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:07.598 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:07.598 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:07.598 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:07.598 1+0 records in 00:12:07.599 1+0 records out 00:12:07.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048473 s, 8.5 MB/s 00:12:07.599 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:07.599 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:07.599 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:07.599 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:07.599 02:17:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:07.599 02:17:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:07.599 02:17:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:07.599 02:17:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:12:07.858 /dev/nbd5 00:12:07.858 02:17:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:12:07.858 02:17:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:12:07.858 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:12:07.858 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:07.858 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:07.858 02:17:58 blockdev_general.bdev_nbd 
-- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:07.858 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:12:07.858 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:07.858 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:07.858 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:07.858 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:07.858 1+0 records in 00:12:07.858 1+0 records out 00:12:07.858 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00064586 s, 6.3 MB/s 00:12:07.858 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:07.858 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:07.858 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:07.858 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:07.858 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:07.858 02:17:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:07.858 02:17:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:07.858 02:17:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:12:08.117 /dev/nbd6 00:12:08.117 02:17:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:12:08.117 02:17:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:12:08.117 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:12:08.117 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:08.118 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:08.118 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:08.118 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:12:08.118 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:08.118 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:08.118 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:08.118 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.118 1+0 records in 00:12:08.118 1+0 records out 00:12:08.118 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587859 s, 7.0 MB/s 00:12:08.118 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:08.118 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:08.118 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 
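
The trace above repeats one pattern per device: `nbd_start_disk` over the RPC socket, then a readiness check driven by `waitfornbd` (the `autotest_common.sh@866-887` markers). Reconstructed from those markers, the helper looks roughly like the sketch below; the retry delay, the temp-file path, and the failure path are not visible in the trace and are assumed, so this is a sketch of the implied logic, not the SPDK source.

    # Sketch of waitfornbd as implied by the trace; the sleep interval
    # and tmp path are illustrative, not taken from the SPDK sources.
    waitfornbd() {
        local nbd_name=$1
        local i size
        local tmp=/tmp/nbdtest   # the trace writes under spdk/test/bdev/

        # Poll until the kernel lists the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1            # assumed pacing
        done

        # Then prove the device answers I/O: one 4 KiB O_DIRECT read
        # must produce a non-empty file.
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/$nbd_name of="$tmp" bs=4096 count=1 iflag=direct
            size=$(stat -c %s "$tmp")
            rm -f "$tmp"
            [ "$size" != 0 ] && return 0
        done
        return 1                 # assumed failure path; never hit above
    }

The `iflag=direct` read matters here: it bypasses the page cache, so a successful `dd` means the NBD connection to the SPDK target is actually serving blocks, not that a cached page was returned.

Further below, the same trace writes a 1 MiB random pattern to all sixteen devices, byte-compares it back, and detaches everything. Condensed into plain shell (device order and transfer sizes taken from the trace; the RPC script path is shortened, and `waitfornbd_exit` is inlined as the poll-until-gone loop its `nbd_common.sh@35-45` markers suggest):

    # Write/verify/teardown cycle as implied by the trace below.
    rpc_sock=/var/tmp/spdk-nbd.sock
    pattern=/tmp/nbdrandtest     # the trace keeps this under spdk/test/bdev/
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13
              /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5
              /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9)

    # Write phase: 256 x 4 KiB = 1 MiB of random data, copied to each device.
    dd if=/dev/urandom of="$pattern" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$pattern" of="$dev" bs=4096 count=256 oflag=direct
    done

    # Verify phase: the first 1 MiB of every device must match the pattern.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$pattern" "$dev"
    done
    rm "$pattern"

    # Teardown: detach each device over RPC, then wait until the kernel
    # drops it from /proc/partitions (what waitfornbd_exit polls for).
    for dev in "${nbd_list[@]}"; do
        scripts/rpc.py -s "$rpc_sock" nbd_stop_disk "$dev"
        name=$(basename "$dev")
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || break
            sleep 0.1            # assumed; interval not visible in the trace
        done
    done

The `oflag=direct` on the per-device writes explains the throughput drop visible in the dd stats that follow (about 5-6 MB/s against 98 MB/s for the cached pattern-file write): every 4 KiB block is forced through the NBD socket to the SPDK target before dd proceeds.
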
00:12:08.118 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:08.118 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:08.118 02:17:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.118 02:17:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:08.118 02:17:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:12:08.377 /dev/nbd7 00:12:08.377 02:17:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:12:08.377 02:17:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:12:08.377 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:12:08.377 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:08.377 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:08.377 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:08.377 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:12:08.377 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:08.377 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:08.377 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:08.377 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.377 1+0 records in 00:12:08.377 1+0 records out 00:12:08.377 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000522894 s, 7.8 MB/s 00:12:08.377 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:08.377 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:08.377 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:08.377 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:08.377 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:08.377 02:17:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.377 02:17:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:08.377 02:17:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:12:08.638 /dev/nbd8 00:12:08.638 02:17:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:12:08.638 02:17:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:12:08.638 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:12:08.638 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:08.638 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:08.638 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:08.638 
02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:12:08.638 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:08.638 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:08.638 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:08.638 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.638 1+0 records in 00:12:08.638 1+0 records out 00:12:08.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000637986 s, 6.4 MB/s 00:12:08.638 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:08.638 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:08.638 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:08.638 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:08.638 02:17:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:08.638 02:17:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.638 02:17:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:08.638 02:17:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:12:08.898 /dev/nbd9 00:12:08.898 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:12:08.898 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:12:08.898 02:17:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:12:08.898 02:17:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:08.898 02:17:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:08.898 02:17:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:08.898 02:17:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:12:08.898 02:17:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:08.898 02:17:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:08.898 02:17:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:08.898 02:17:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.898 1+0 records in 00:12:08.898 1+0 records out 00:12:08.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000737811 s, 5.6 MB/s 00:12:08.898 02:17:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:08.898 02:17:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:08.898 02:17:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:08.898 02:17:59 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:08.898 02:17:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:08.898 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.898 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:08.898 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:08.898 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:08.898 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:09.157 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd0", 00:12:09.157 "bdev_name": "Malloc0" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd1", 00:12:09.157 "bdev_name": "Malloc1p0" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd10", 00:12:09.157 "bdev_name": "Malloc1p1" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd11", 00:12:09.157 "bdev_name": "Malloc2p0" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd12", 00:12:09.157 "bdev_name": "Malloc2p1" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd13", 00:12:09.157 "bdev_name": "Malloc2p2" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd14", 00:12:09.157 "bdev_name": "Malloc2p3" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd15", 00:12:09.157 "bdev_name": "Malloc2p4" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd2", 00:12:09.157 "bdev_name": "Malloc2p5" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd3", 00:12:09.157 "bdev_name": "Malloc2p6" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd4", 00:12:09.157 "bdev_name": "Malloc2p7" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd5", 00:12:09.157 "bdev_name": "TestPT" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd6", 00:12:09.157 "bdev_name": "raid0" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd7", 00:12:09.157 "bdev_name": "concat0" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd8", 00:12:09.157 "bdev_name": "raid1" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd9", 00:12:09.157 "bdev_name": "AIO0" 00:12:09.157 } 00:12:09.157 ]' 00:12:09.157 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd0", 00:12:09.157 "bdev_name": "Malloc0" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd1", 00:12:09.157 "bdev_name": "Malloc1p0" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd10", 00:12:09.157 "bdev_name": "Malloc1p1" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd11", 00:12:09.157 "bdev_name": "Malloc2p0" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd12", 00:12:09.157 "bdev_name": "Malloc2p1" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd13", 00:12:09.157 "bdev_name": "Malloc2p2" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd14", 00:12:09.157 "bdev_name": "Malloc2p3" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd15", 00:12:09.157 "bdev_name": "Malloc2p4" 00:12:09.157 }, 
00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd2", 00:12:09.157 "bdev_name": "Malloc2p5" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd3", 00:12:09.157 "bdev_name": "Malloc2p6" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd4", 00:12:09.157 "bdev_name": "Malloc2p7" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd5", 00:12:09.157 "bdev_name": "TestPT" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd6", 00:12:09.157 "bdev_name": "raid0" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd7", 00:12:09.157 "bdev_name": "concat0" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd8", 00:12:09.157 "bdev_name": "raid1" 00:12:09.157 }, 00:12:09.157 { 00:12:09.157 "nbd_device": "/dev/nbd9", 00:12:09.157 "bdev_name": "AIO0" 00:12:09.157 } 00:12:09.157 ]' 00:12:09.157 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:09.157 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:09.157 /dev/nbd1 00:12:09.157 /dev/nbd10 00:12:09.157 /dev/nbd11 00:12:09.157 /dev/nbd12 00:12:09.157 /dev/nbd13 00:12:09.157 /dev/nbd14 00:12:09.157 /dev/nbd15 00:12:09.157 /dev/nbd2 00:12:09.157 /dev/nbd3 00:12:09.157 /dev/nbd4 00:12:09.157 /dev/nbd5 00:12:09.157 /dev/nbd6 00:12:09.157 /dev/nbd7 00:12:09.157 /dev/nbd8 00:12:09.157 /dev/nbd9' 00:12:09.157 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:09.157 /dev/nbd1 00:12:09.157 /dev/nbd10 00:12:09.157 /dev/nbd11 00:12:09.157 /dev/nbd12 00:12:09.157 /dev/nbd13 00:12:09.157 /dev/nbd14 00:12:09.157 /dev/nbd15 00:12:09.157 /dev/nbd2 00:12:09.157 /dev/nbd3 00:12:09.157 /dev/nbd4 00:12:09.157 /dev/nbd5 00:12:09.157 /dev/nbd6 00:12:09.157 /dev/nbd7 00:12:09.157 /dev/nbd8 00:12:09.157 /dev/nbd9' 00:12:09.157 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:09.157 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:12:09.157 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:12:09.157 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:12:09.157 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:12:09.157 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:12:09.157 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:09.157 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:09.157 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:09.157 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:12:09.157 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:09.158 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:09.158 256+0 
records in 00:12:09.158 256+0 records out 00:12:09.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106722 s, 98.3 MB/s 00:12:09.158 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:09.158 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:09.416 256+0 records in 00:12:09.416 256+0 records out 00:12:09.416 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.181663 s, 5.8 MB/s 00:12:09.416 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:09.416 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:09.675 256+0 records in 00:12:09.675 256+0 records out 00:12:09.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.185338 s, 5.7 MB/s 00:12:09.675 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:09.675 02:17:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:09.935 256+0 records in 00:12:09.935 256+0 records out 00:12:09.935 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.189771 s, 5.5 MB/s 00:12:09.935 02:18:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:09.935 02:18:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:09.935 256+0 records in 00:12:09.935 256+0 records out 00:12:09.935 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166054 s, 6.3 MB/s 00:12:09.935 02:18:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:09.935 02:18:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:10.194 256+0 records in 00:12:10.194 256+0 records out 00:12:10.194 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.184925 s, 5.7 MB/s 00:12:10.194 02:18:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:10.194 02:18:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:10.454 256+0 records in 00:12:10.454 256+0 records out 00:12:10.454 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.185504 s, 5.7 MB/s 00:12:10.454 02:18:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:10.454 02:18:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:12:10.714 256+0 records in 00:12:10.714 256+0 records out 00:12:10.714 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.185079 s, 5.7 MB/s 00:12:10.714 02:18:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:10.714 02:18:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:12:10.714 256+0 records in 00:12:10.714 256+0 records out 
00:12:10.714 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.185271 s, 5.7 MB/s 00:12:10.714 02:18:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:10.714 02:18:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:12:10.976 256+0 records in 00:12:10.976 256+0 records out 00:12:10.976 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.185416 s, 5.7 MB/s 00:12:10.976 02:18:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:10.976 02:18:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:12:11.236 256+0 records in 00:12:11.236 256+0 records out 00:12:11.236 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161368 s, 6.5 MB/s 00:12:11.236 02:18:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:11.236 02:18:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:12:11.236 256+0 records in 00:12:11.236 256+0 records out 00:12:11.236 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.184913 s, 5.7 MB/s 00:12:11.236 02:18:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:11.236 02:18:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:12:11.496 256+0 records in 00:12:11.496 256+0 records out 00:12:11.496 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.184398 s, 5.7 MB/s 00:12:11.496 02:18:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:11.496 02:18:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:12:11.755 256+0 records in 00:12:11.755 256+0 records out 00:12:11.755 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.184516 s, 5.7 MB/s 00:12:11.755 02:18:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:11.755 02:18:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:12:12.014 256+0 records in 00:12:12.014 256+0 records out 00:12:12.014 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.185391 s, 5.7 MB/s 00:12:12.014 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:12.014 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:12:12.014 256+0 records in 00:12:12.014 256+0 records out 00:12:12.014 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.18724 s, 5.6 MB/s 00:12:12.014 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:12.014 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:12:12.273 256+0 records in 00:12:12.274 256+0 records out 00:12:12.274 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.182258 s, 5.8 MB/s 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd14 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd15 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd2 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd3 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd4 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd5 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd6 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.274 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd7 00:12:12.533 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.533 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd8 00:12:12.533 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.533 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd9 00:12:12.533 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:12:12.533 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:12.533 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:12.533 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:12.533 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:12.533 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:12.533 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:12.533 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:12.791 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:12.791 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:12.791 02:18:02 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:12.791 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:12.791 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.791 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:12.791 02:18:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:12.791 02:18:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:12.791 02:18:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:12.791 02:18:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:13.359 02:18:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:13.359 02:18:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:13.359 02:18:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:13.359 02:18:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:13.359 02:18:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:13.359 02:18:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:13.359 02:18:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:13.359 02:18:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:13.359 02:18:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:13.359 02:18:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:13.617 02:18:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:13.617 02:18:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:13.617 02:18:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:13.617 02:18:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:13.617 02:18:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:13.617 02:18:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:13.617 02:18:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:13.617 02:18:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:13.617 02:18:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:13.617 02:18:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:13.877 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:13.877 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:13.877 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:13.877 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:13.877 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:13.877 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:13.877 
02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:13.877 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:13.877 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:13.877 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:14.136 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:14.136 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:14.136 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:14.136 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:14.136 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:14.136 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:14.136 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:14.136 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:14.136 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:14.136 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:14.394 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:14.394 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:14.394 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:14.394 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:14.394 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:14.394 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:14.394 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:14.394 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:14.394 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:14.394 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:14.653 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:14.653 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:14.653 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:14.653 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:14.653 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:14.653 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:14.653 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:14.653 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:14.653 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:14.653 02:18:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:14.912 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:14.912 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:14.912 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:14.912 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:14.912 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:14.912 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:14.912 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:14.912 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:14.912 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:14.912 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:15.171 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:15.171 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:15.171 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:15.171 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:15.171 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:15.171 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:15.171 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:15.171 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:15.171 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:15.171 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:15.431 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:15.431 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:15.431 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:15.431 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:15.431 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:15.431 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:15.431 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:15.431 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:15.431 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:15.431 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:15.691 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:15.691 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:15.691 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # 
local nbd_name=nbd4 00:12:15.691 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:15.691 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:15.691 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:15.691 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:15.691 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:15.691 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:15.691 02:18:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:15.950 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:15.950 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:15.950 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:15.950 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:15.950 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:15.950 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:15.950 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:15.950 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:15.950 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:15.950 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:16.209 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:16.209 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:16.209 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:16.209 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:16.209 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:16.209 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:16.209 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:16.209 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:16.209 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:16.209 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:16.467 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:16.467 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:16.467 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:16.467 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:16.467 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:16.467 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:16.467 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # 
break 00:12:16.467 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:16.467 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:16.467 02:18:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:16.726 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:16.726 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:16.726 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:16.726 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:16.726 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:16.726 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:16.726 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:16.726 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:16.726 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:16.726 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:16.985 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:16.985 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:16.985 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:16.985 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:16.985 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:16.985 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:16.985 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:16.985 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:16.985 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:16.985 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:16.985 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:17.555 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:17.555 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:17.555 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:17.555 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:17.555 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:17.555 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:17.555 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:17.555 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:17.555 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:17.555 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:17.555 02:18:07 
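
# nbd_get_count, whose trace starts just above (nbd_common.sh@61-66), asks
# the SPDK app over the RPC socket which nbd disks it still exports and
# counts the device paths; zero confirms the teardown completed. Condensed
# sketch of that pipeline ('|| true' absorbs grep's non-zero exit when, as
# here, nothing matches and the count is 0):
count=$(/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
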
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:12:17.555 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:12:17.555 02:18:07 blockdev_general.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:12:17.555 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:17.555 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:12:17.555 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list
00:12:17.555 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret
00:12:17.555 02:18:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:12:17.814 malloc_lvol_verify
00:12:17.814 02:18:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:12:18.073 b5a82d13-e5ed-4d71-adf4-03dc4a751cd2
00:12:18.073 02:18:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:12:18.333 2c9559ba-711d-48c3-9763-535e2d908251
00:12:18.333 02:18:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:12:18.592 /dev/nbd0
00:12:18.592 02:18:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0
00:12:18.592 mke2fs 1.46.5 (30-Dec-2021)
00:12:18.592 Discarding device blocks: 0/4096 done
00:12:18.592 Creating filesystem with 4096 1k blocks and 1024 inodes
00:12:18.592
00:12:18.592 Allocating group tables: 0/1 done
00:12:18.592 Writing inode tables: 0/1 done
00:12:18.592 Creating journal (1024 blocks): done
00:12:18.592 Writing superblocks and filesystem accounting information: 0/1 done
00:12:18.592
00:12:18.592 02:18:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0
00:12:18.592 02:18:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:12:18.592 02:18:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:18.592 02:18:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:12:18.592 02:18:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:12:18.592 02:18:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:12:18.592 02:18:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:18.592 02:18:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:12:18.851 02:18:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
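
# nbd_with_lvol_verify, traced above from blockdev.sh@324, closes the nbd
# test with a functional check: build a logical volume on a fresh malloc
# bdev, export it over nbd, and prove it handles real I/O by formatting it.
# Condensed replay of the RPC sequence (names, sizes, and socket path as
# traced; the $rpc alias is introduced here for brevity):
rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB malloc bdev, 512 B blocks
$rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # prints the new lvstore UUID
$rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MB lvol on store "lvs"
$rpc nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as /dev/nbd0
mkfs.ext4 /dev/nbd0                                    # a clean mkfs proves end-to-end I/O
$rpc nbd_stop_disk /dev/nbd0
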
00:12:18.851 02:18:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:18.851 02:18:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:18.851 02:18:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.851 02:18:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.851 02:18:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:18.851 02:18:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:18.851 02:18:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.851 02:18:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:18.851 02:18:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:12:18.851 02:18:09 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 1873039 00:12:18.851 02:18:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 1873039 ']' 00:12:18.851 02:18:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 1873039 00:12:18.851 02:18:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:12:18.851 02:18:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:18.851 02:18:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1873039 00:12:18.851 02:18:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:18.851 02:18:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:18.851 02:18:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1873039' 00:12:18.851 killing process with pid 1873039 00:12:18.851 02:18:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@967 -- # kill 1873039 00:12:18.851 02:18:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@972 -- # wait 1873039 00:12:19.110 02:18:09 blockdev_general.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:12:19.110 00:12:19.110 real 0m25.464s 00:12:19.110 user 0m31.281s 00:12:19.110 sys 0m14.738s 00:12:19.110 02:18:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:19.110 02:18:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:19.110 ************************************ 00:12:19.110 END TEST bdev_nbd 00:12:19.110 ************************************ 00:12:19.371 02:18:09 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:12:19.371 02:18:09 blockdev_general -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:12:19.371 02:18:09 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:12:19.371 02:18:09 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:12:19.371 02:18:09 blockdev_general -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:12:19.371 02:18:09 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:19.371 02:18:09 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:19.371 02:18:09 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:19.371 ************************************ 00:12:19.371 START TEST bdev_fio 00:12:19.371 ************************************ 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:12:19.371 
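
# killprocess, traced above for pid 1873039, is the teardown helper that
# stops the SPDK app once the nbd test finishes: confirm the pid is still
# alive and names the expected process, kill it, and wait so the exit
# status is reaped. Sketch reconstructed from the trace; the sudo
# special-case visible there is stubbed to a bare return here:
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 0      # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")   # "reactor_0" in this run
    [ "$name" = sudo ] && return 1  # real helper treats sudo-wrapped apps specially (assumption)
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"   # valid because the app was launched from this same shell
}
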
02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:12:19.371 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.371 02:18:09 
blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_concat0]' 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:12:19.371 02:18:09 
blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=AIO0 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:19.371 02:18:09 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:19.371 ************************************ 00:12:19.371 START TEST bdev_fio_rw_verify 00:12:19.371 ************************************ 00:12:19.371 02:18:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:12:19.371 02:18:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:12:19.371 02:18:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:19.371 02:18:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:19.371 02:18:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:19.371 02:18:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:12:19.371 02:18:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:12:19.371 02:18:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:19.371 02:18:09 
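
# The blockdev.sh@341-343 loop above appends one two-line stanza per bdev
# to the generated job file, so bdev.fio ends up with a [job_<name>] /
# filename=<name> pair for every bdev listed. A condensed equivalent,
# assuming bdevs_name holds the bdev names and $fio_config is the job file
# (the trace shows plain echo; the append redirect is an assumption):
for b in "${bdevs_name[@]}"; do
    printf '[job_%s]\nfilename=%s\n' "$b" "$b" >> "$fio_config"
done
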
blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:19.371 02:18:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:12:19.371 02:18:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:12:19.371 02:18:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:19.371 02:18:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:12:19.372 02:18:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:12:19.372 02:18:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:19.372 02:18:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:12:19.372 02:18:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:12:19.372 02:18:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:19.639 02:18:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:12:19.639 02:18:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:12:19.639 02:18:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:12:19.639 02:18:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:12:19.899 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.899 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.899 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.899 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.899 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.899 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.899 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.899 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.899 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.899 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.899 
job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:19.899 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:19.899 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:19.899 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:19.899 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:19.899 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:19.899 fio-3.35
00:12:19.899 Starting 16 threads
00:12:32.167
00:12:32.167 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=1877672: Thu Jul 11 02:18:20 2024
00:12:32.167 read: IOPS=85.1k, BW=333MiB/s (349MB/s)(3326MiB/10001msec)
00:12:32.167 slat (usec): min=2, max=213, avg=37.51, stdev=14.92
00:12:32.167 clat (usec): min=10, max=1425, avg=307.34, stdev=136.95
00:12:32.167 lat (usec): min=21, max=1495, avg=344.85, stdev=144.96
00:12:32.167 clat percentiles (usec):
00:12:32.167 | 50.000th=[ 302], 99.000th=[ 611], 99.900th=[ 750], 99.990th=[ 1270],
00:12:32.167 | 99.999th=[ 1369]
00:12:32.167 write: IOPS=135k, BW=527MiB/s (553MB/s)(5206MiB/9878msec); 0 zone resets
00:12:32.167 slat (usec): min=8, max=337, avg=50.96, stdev=17.93
00:12:32.167 clat (usec): min=11, max=4258, avg=361.25, stdev=171.23
00:12:32.167 lat (usec): min=36, max=4302, avg=412.22, stdev=181.57
00:12:32.167 clat percentiles (usec):
00:12:32.167 | 50.000th=[ 347], 99.000th=[ 971], 99.900th=[ 1336], 99.990th=[ 1434],
00:12:32.167 | 99.999th=[ 1926]
00:12:32.167 bw ( KiB/s): min=466744, max=678813, per=99.10%, avg=534788.11, stdev=3065.51, samples=304
00:12:32.167 iops : min=116686, max=169702, avg=133696.84, stdev=766.37, samples=304
00:12:32.167 lat (usec) : 20=0.01%, 50=0.34%, 100=3.63%, 250=28.25%, 500=52.51%
00:12:32.167 lat (usec) : 750=14.20%, 1000=0.50%
00:12:32.167 lat (msec) : 2=0.57%, 4=0.01%, 10=0.01%
00:12:32.167 cpu : usr=99.19%, sys=0.37%, ctx=724, majf=0, minf=2757
00:12:32.167 IO depths : 1=12.4%, 2=24.8%, 4=50.2%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:32.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:32.167 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:32.167 issued rwts: total=851540,1332695,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:32.167 latency : target=0, window=0, percentile=100.00%, depth=8
00:12:32.167
00:12:32.167 Run status group 0 (all jobs):
00:12:32.167 READ: bw=333MiB/s (349MB/s), 333MiB/s-333MiB/s (349MB/s-349MB/s), io=3326MiB (3488MB), run=10001-10001msec
00:12:32.167 WRITE: bw=527MiB/s (553MB/s), 527MiB/s-527MiB/s (553MB/s-553MB/s), io=5206MiB (5459MB), run=9878-9878msec
00:12:32.167
00:12:32.167 real 0m11.449s
00:12:32.167 user 2m44.701s
00:12:32.167 sys 0m1.405s
02:18:21 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:32.167 02:18:21 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:12:32.167 ************************************
00:12:32.167 END TEST bdev_fio_rw_verify
00:12:32.167 ************************************
00:12:32.167 02:18:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0
00:12:32.167 02:18:21
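
# The 16-thread run summarized above is launched through the SPDK fio
# plugin: fio_bdev LD_PRELOADs build/fio/spdk_bdev so fio's spdk_bdev
# ioengine resolves, and --spdk_json_conf points the plugin at the bdev
# configuration. Condensed form of the invocation visible in the trace:
spdk=/var/jenkins/workspace/crypto-phy-autotest/spdk
LD_PRELOAD="$spdk/build/fio/spdk_bdev" /usr/src/fio/fio \
    --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    "$spdk/test/bdev/bdev.fio" --verify_state_save=0 \
    --spdk_json_conf="$spdk/test/bdev/bdev.json" \
    --spdk_mem=0 --aux-path="$spdk/../output"
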
blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:12:32.167 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:12:32.167 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:12:32.167 02:18:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:12:32.167 02:18:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:12:32.167 02:18:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:12:32.167 02:18:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:12:32.167 02:18:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:12:32.167 02:18:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:12:32.167 02:18:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:12:32.167 02:18:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:12:32.167 02:18:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:12:32.167 02:18:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:12:32.167 02:18:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:12:32.167 02:18:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:12:32.167 02:18:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:12:32.167 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:32.169 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "eac639cf-8745-47cf-a863-c95f53c74531"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "eac639cf-8745-47cf-a863-c95f53c74531",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "647f4f90-93a2-5a9c-a21d-0231dfe466d2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "647f4f90-93a2-5a9c-a21d-0231dfe466d2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 
0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "1e097a58-cc49-51ce-82fe-daf8eb12d48f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "1e097a58-cc49-51ce-82fe-daf8eb12d48f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b2380544-dd6c-54ba-947d-4603ae673ccc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b2380544-dd6c-54ba-947d-4603ae673ccc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "edc7705c-ecfc-5cf3-b63d-89de3f142494"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "edc7705c-ecfc-5cf3-b63d-89de3f142494",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' 
' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "4024770e-9d14-5637-9dfd-3b802cf7ea34"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4024770e-9d14-5637-9dfd-3b802cf7ea34",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "eed8e0e1-4003-54ee-a423-2b9b3d78daac"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "eed8e0e1-4003-54ee-a423-2b9b3d78daac",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "e9d9fee2-f10b-5b6f-be99-2ee03778fdae"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e9d9fee2-f10b-5b6f-be99-2ee03778fdae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "19b100be-13c5-5345-a63a-2e2663d34245"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "19b100be-13c5-5345-a63a-2e2663d34245",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' 
"zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "c38b679e-384a-529e-a986-79d577e1d80f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c38b679e-384a-529e-a986-79d577e1d80f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "c60b8d1f-e483-5d99-b116-d937d08ab2fb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c60b8d1f-e483-5d99-b116-d937d08ab2fb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "579f4ca9-0089-5597-8289-251ce3a8ec1d"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "579f4ca9-0089-5597-8289-251ce3a8ec1d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "1c80413c-d6bc-45e2-b61f-9f6e8875a6b8"' ' ],' ' 
"product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "1c80413c-d6bc-45e2-b61f-9f6e8875a6b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "1c80413c-d6bc-45e2-b61f-9f6e8875a6b8",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "8724a74c-99f2-4913-81e6-b34cecbb4f5d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "14c633e2-ee3c-4876-92d2-59b502bd06fc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "91b30ff2-b86e-40fa-b54e-2e5ed5b05ab9"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "91b30ff2-b86e-40fa-b54e-2e5ed5b05ab9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "91b30ff2-b86e-40fa-b54e-2e5ed5b05ab9",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "3d31f469-26b8-4927-afa3-4df6438a48fc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "4a73af42-0046-49a5-ae91-514b31baa29b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' 
"name": "raid1",' ' "aliases": [' ' "f986ff18-4c81-490f-80c6-bda034326949"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f986ff18-4c81-490f-80c6-bda034326949",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "f986ff18-4c81-490f-80c6-bda034326949",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "89b5a896-edf0-4d50-9e57-b290524841cc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "91da19cb-52bf-4a78-92b6-a03b6b9dac0b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "46ace2a0-d84c-42b3-baac-f03661a7e125"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "46ace2a0-d84c-42b3-baac-f03661a7e125",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:12:32.169 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:12:32.169 Malloc1p0 00:12:32.169 Malloc1p1 00:12:32.169 Malloc2p0 00:12:32.169 Malloc2p1 00:12:32.169 Malloc2p2 00:12:32.169 Malloc2p3 00:12:32.169 Malloc2p4 00:12:32.169 Malloc2p5 00:12:32.169 Malloc2p6 00:12:32.169 Malloc2p7 00:12:32.169 TestPT 00:12:32.169 raid0 00:12:32.169 concat0 ]] 00:12:32.169 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' 
"eac639cf-8745-47cf-a863-c95f53c74531"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "eac639cf-8745-47cf-a863-c95f53c74531",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "647f4f90-93a2-5a9c-a21d-0231dfe466d2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "647f4f90-93a2-5a9c-a21d-0231dfe466d2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "1e097a58-cc49-51ce-82fe-daf8eb12d48f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "1e097a58-cc49-51ce-82fe-daf8eb12d48f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b2380544-dd6c-54ba-947d-4603ae673ccc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b2380544-dd6c-54ba-947d-4603ae673ccc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": 
false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "edc7705c-ecfc-5cf3-b63d-89de3f142494"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "edc7705c-ecfc-5cf3-b63d-89de3f142494",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "4024770e-9d14-5637-9dfd-3b802cf7ea34"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4024770e-9d14-5637-9dfd-3b802cf7ea34",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "eed8e0e1-4003-54ee-a423-2b9b3d78daac"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "eed8e0e1-4003-54ee-a423-2b9b3d78daac",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "e9d9fee2-f10b-5b6f-be99-2ee03778fdae"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e9d9fee2-f10b-5b6f-be99-2ee03778fdae",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "19b100be-13c5-5345-a63a-2e2663d34245"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "19b100be-13c5-5345-a63a-2e2663d34245",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "c38b679e-384a-529e-a986-79d577e1d80f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c38b679e-384a-529e-a986-79d577e1d80f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "c60b8d1f-e483-5d99-b116-d937d08ab2fb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c60b8d1f-e483-5d99-b116-d937d08ab2fb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' 
"nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "579f4ca9-0089-5597-8289-251ce3a8ec1d"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "579f4ca9-0089-5597-8289-251ce3a8ec1d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "1c80413c-d6bc-45e2-b61f-9f6e8875a6b8"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "1c80413c-d6bc-45e2-b61f-9f6e8875a6b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "1c80413c-d6bc-45e2-b61f-9f6e8875a6b8",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "8724a74c-99f2-4913-81e6-b34cecbb4f5d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "14c633e2-ee3c-4876-92d2-59b502bd06fc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "91b30ff2-b86e-40fa-b54e-2e5ed5b05ab9"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "91b30ff2-b86e-40fa-b54e-2e5ed5b05ab9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "91b30ff2-b86e-40fa-b54e-2e5ed5b05ab9",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "3d31f469-26b8-4927-afa3-4df6438a48fc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "4a73af42-0046-49a5-ae91-514b31baa29b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "f986ff18-4c81-490f-80c6-bda034326949"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f986ff18-4c81-490f-80c6-bda034326949",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "f986ff18-4c81-490f-80c6-bda034326949",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "89b5a896-edf0-4d50-9e57-b290524841cc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "91da19cb-52bf-4a78-92b6-a03b6b9dac0b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "46ace2a0-d84c-42b3-baac-f03661a7e125"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "46ace2a0-d84c-42b3-baac-f03661a7e125",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:32.170 02:18:21 
blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:32.170 02:18:21 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:32.170 ************************************ 00:12:32.170 START TEST bdev_fio_trim 00:12:32.170 ************************************ 00:12:32.170 02:18:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:12:32.170 02:18:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:12:32.170 02:18:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:32.170 02:18:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:32.170 02:18:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:32.170 02:18:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:12:32.170 02:18:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:12:32.170 02:18:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:32.170 02:18:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:32.170 02:18:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:12:32.170 02:18:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:12:32.170 02:18:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:32.170 02:18:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:12:32.170 02:18:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:12:32.170 02:18:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:32.171 02:18:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:12:32.171 02:18:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:12:32.171 02:18:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:32.171 02:18:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:12:32.171 02:18:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:12:32.171 02:18:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:12:32.171 02:18:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 
--verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:12:32.171 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:32.171 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:32.171 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:32.171 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:32.171 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:32.171 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:32.171 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:32.171 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:32.171 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:32.171 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:32.171 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:32.171 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:32.171 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:32.171 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:32.171 fio-3.35 00:12:32.171 Starting 14 threads 00:12:44.374 00:12:44.374 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=1879377: Thu Jul 11 02:18:32 2024 00:12:44.374 write: IOPS=120k, BW=469MiB/s (491MB/s)(4686MiB/10001msec); 0 zone resets 00:12:44.374 slat (usec): min=2, max=760, avg=41.34, stdev=11.25 00:12:44.374 clat (usec): min=33, max=3514, avg=291.90, stdev=101.70 00:12:44.374 lat (usec): min=46, max=3548, avg=333.23, stdev=106.41 00:12:44.374 clat percentiles (usec): 00:12:44.374 | 50.000th=[ 281], 99.000th=[ 529], 99.900th=[ 586], 99.990th=[ 644], 00:12:44.374 | 99.999th=[ 1172] 00:12:44.374 bw ( KiB/s): min=427926, max=569789, per=100.00%, avg=480668.11, stdev=2434.10, samples=266 00:12:44.374 iops : min=106981, max=142445, avg=120166.79, stdev=608.51, samples=266 00:12:44.374 trim: IOPS=120k, BW=469MiB/s (491MB/s)(4686MiB/10001msec); 0 zone resets 00:12:44.374 slat (usec): min=4, max=388, avg=27.56, stdev= 7.62 00:12:44.374 clat (usec): min=5, max=3548, avg=332.56, stdev=107.68 00:12:44.374 lat (usec): min=20, max=3577, avg=360.12, stdev=111.30 00:12:44.374 clat percentiles (usec): 00:12:44.374 | 50.000th=[ 326], 99.000th=[ 578], 99.900th=[ 644], 99.990th=[ 709], 00:12:44.374 | 99.999th=[ 1237] 00:12:44.374 bw ( KiB/s): min=427926, max=569797, per=100.00%, avg=480668.11, stdev=2434.19, samples=266 00:12:44.374 iops : min=106981, max=142447, avg=120166.79, stdev=608.53, samples=266 00:12:44.374 lat (usec) : 10=0.01%, 20=0.01%, 50=0.04%, 100=0.78%, 250=31.04% 00:12:44.374 lat (usec) : 500=63.79%, 750=4.35%, 1000=0.01% 00:12:44.374 lat (msec) : 2=0.01%, 4=0.01% 
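The 14 trimwrite jobs fio lists above map one-to-one onto the [job_*]/filename pairs echoed by blockdev.sh@356-358 earlier: each bdev whose JSON reports "unmap": true gets a job stanza, which is why raid1 and AIO0 (both "unmap": false) are absent. A minimal sketch of that generation step, assuming the stanzas are appended to the bdev.fio that fio consumes (the redirect itself is not visible in the trace):

    # Sketch of the per-bdev job generation echoed above (assumption: output is
    # appended to the bdev.fio passed to fio; the redirect is inferred).
    for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name'); do
        echo "[job_$b]"        # one fio job section per trim-capable bdev
        echo "filename=$b"     # the spdk_bdev ioengine resolves filename= to a bdev name
    done >> /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio

As a cross-check on the summary above, 469 MiB/s at bs=4k works out to 469 * 1024 / 4 = 120,064 I/Os per second, matching the reported ~120k IOPS.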
00:12:44.374 cpu : usr=99.57%, sys=0.00%, ctx=566, majf=0, minf=1035 00:12:44.374 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:44.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:44.374 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:44.374 issued rwts: total=0,1199505,1199511,0 short=0,0,0,0 dropped=0,0,0,0 00:12:44.374 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:44.374 00:12:44.374 Run status group 0 (all jobs): 00:12:44.374 WRITE: bw=469MiB/s (491MB/s), 469MiB/s-469MiB/s (491MB/s-491MB/s), io=4686MiB (4913MB), run=10001-10001msec 00:12:44.374 TRIM: bw=469MiB/s (491MB/s), 469MiB/s-469MiB/s (491MB/s-491MB/s), io=4686MiB (4913MB), run=10001-10001msec 00:12:44.374 00:12:44.374 real 0m11.524s 00:12:44.374 user 2m25.664s 00:12:44.374 sys 0m0.673s 00:12:44.374 02:18:32 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:44.374 02:18:32 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:12:44.374 ************************************ 00:12:44.374 END TEST bdev_fio_trim 00:12:44.374 ************************************ 00:12:44.374 02:18:32 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:12:44.374 02:18:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:12:44.374 02:18:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:12:44.375 02:18:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:12:44.375 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:12:44.375 02:18:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:12:44.375 00:12:44.375 real 0m23.385s 00:12:44.375 user 5m10.597s 00:12:44.375 sys 0m2.288s 00:12:44.375 02:18:32 blockdev_general.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:44.375 02:18:32 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:44.375 ************************************ 00:12:44.375 END TEST bdev_fio 00:12:44.375 ************************************ 00:12:44.375 02:18:33 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:12:44.375 02:18:33 blockdev_general -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:44.375 02:18:33 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:44.375 02:18:33 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:12:44.375 02:18:33 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:44.375 02:18:33 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:44.375 ************************************ 00:12:44.375 START TEST bdev_verify 00:12:44.375 ************************************ 00:12:44.375 02:18:33 blockdev_general.bdev_verify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:44.375 [2024-07-11 02:18:33.138623] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
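The verify stage launched above exercises the same bdev stack through bdevperf rather than fio. Restated with flag glosses (the -C reading is taken from bdevperf's usage text and should be treated as an assumption; the other readings are confirmed by this run's own output):

    # bdev_verify invocation from the trace above, annotated.
    #   --json     bdev configuration to load at startup
    #   -q 128     per-job queue depth ("depth: 128" in the result table)
    #   -o 4096    I/O size in bytes ("IO size: 4096")
    #   -w verify  write each block, read it back, and compare
    #   -t 5       run time in seconds ("Running I/O for 5 seconds...")
    #   -C         assumption: allow every core to submit I/O to every bdev
    #   -m 0x3     core mask, cores 0 and 1 (the two reactors logged below)
    /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf \
        --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3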
00:12:44.375 [2024-07-11 02:18:33.138673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1880820 ] 00:12:44.375 [2024-07-11 02:18:33.259286] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:44.375 [2024-07-11 02:18:33.312093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.375 [2024-07-11 02:18:33.312098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.375 [2024-07-11 02:18:33.462829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:44.375 [2024-07-11 02:18:33.462886] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:44.375 [2024-07-11 02:18:33.462901] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:44.375 [2024-07-11 02:18:33.470836] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:44.375 [2024-07-11 02:18:33.470864] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:44.375 [2024-07-11 02:18:33.478845] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:44.375 [2024-07-11 02:18:33.478870] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:44.375 [2024-07-11 02:18:33.555861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:44.375 [2024-07-11 02:18:33.555915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.375 [2024-07-11 02:18:33.555932] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19ad070 00:12:44.375 [2024-07-11 02:18:33.555946] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.375 [2024-07-11 02:18:33.557381] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.375 [2024-07-11 02:18:33.557412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:44.375 Running I/O for 5 seconds... 
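TestPT in the results below is not a standalone disk: the vbdev_passthru notices above show it being stacked on Malloc3 (creation is deferred until Malloc3 arrives, then the base bdev is opened, claimed, and the pt_bdev registered). In this run the stack comes from bdev.json; a sketch of the equivalent interactive construction, assuming SPDK's bdev_passthru_create RPC against a running target:

    # Hypothetical interactive equivalent of the TestPT passthru stack above
    # (this run loads it from bdev.json instead; requires a live SPDK target).
    ./scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT   # -b base bdev, -p new vbdev name
    ./scripts/rpc.py bdev_get_bdevs -b TestPT                    # confirm the pt_bdev registered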
00:12:49.648 00:12:49.648 Latency(us) 00:12:49.648 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:49.648 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.648 Verification LBA range: start 0x0 length 0x1000 00:12:49.648 Malloc0 : 5.19 1134.66 4.43 0.00 0.00 112581.67 527.14 366545.70 00:12:49.648 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.648 Verification LBA range: start 0x1000 length 0x1000 00:12:49.648 Malloc0 : 5.11 901.34 3.52 0.00 0.00 141689.94 676.73 428548.45 00:12:49.648 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.648 Verification LBA range: start 0x0 length 0x800 00:12:49.648 Malloc1p0 : 5.19 591.73 2.31 0.00 0.00 215322.97 2464.72 181449.24 00:12:49.648 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.648 Verification LBA range: start 0x800 length 0x800 00:12:49.648 Malloc1p0 : 5.12 475.45 1.86 0.00 0.00 267773.59 3162.82 231598.53 00:12:49.648 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.648 Verification LBA range: start 0x0 length 0x800 00:12:49.648 Malloc1p1 : 5.19 591.49 2.31 0.00 0.00 214965.39 2478.97 181449.24 00:12:49.648 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.648 Verification LBA range: start 0x800 length 0x800 00:12:49.648 Malloc1p1 : 5.12 475.20 1.86 0.00 0.00 267228.97 3219.81 231598.53 00:12:49.648 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.648 Verification LBA range: start 0x0 length 0x200 00:12:49.648 Malloc2p0 : 5.20 591.24 2.31 0.00 0.00 214574.26 2507.46 182361.04 00:12:49.648 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.648 Verification LBA range: start 0x200 length 0x200 00:12:49.648 Malloc2p0 : 5.12 474.95 1.86 0.00 0.00 266659.37 3960.65 230686.72 00:12:49.648 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.648 Verification LBA range: start 0x0 length 0x200 00:12:49.648 Malloc2p1 : 5.20 591.00 2.31 0.00 0.00 214203.16 3447.76 177802.02 00:12:49.648 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.648 Verification LBA range: start 0x200 length 0x200 00:12:49.648 Malloc2p1 : 5.26 487.11 1.90 0.00 0.00 259211.34 3903.67 227951.30 00:12:49.648 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.648 Verification LBA range: start 0x0 length 0x200 00:12:49.648 Malloc2p2 : 5.20 590.76 2.31 0.00 0.00 213713.36 3348.03 173242.99 00:12:49.648 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.648 Verification LBA range: start 0x200 length 0x200 00:12:49.648 Malloc2p2 : 5.26 486.88 1.90 0.00 0.00 258498.65 3162.82 226127.69 00:12:49.648 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.648 Verification LBA range: start 0x0 length 0x200 00:12:49.649 Malloc2p3 : 5.20 590.52 2.31 0.00 0.00 213248.12 2464.72 173242.99 00:12:49.649 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.649 Verification LBA range: start 0x200 length 0x200 00:12:49.649 Malloc2p3 : 5.26 486.65 1.90 0.00 0.00 257973.71 3205.57 226127.69 00:12:49.649 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.649 Verification LBA range: start 0x0 length 0x200 00:12:49.649 Malloc2p4 : 5.20 590.28 2.31 0.00 0.00 212882.62 2464.72 
175066.60 00:12:49.649 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.649 Verification LBA range: start 0x200 length 0x200 00:12:49.649 Malloc2p4 : 5.26 486.42 1.90 0.00 0.00 257432.63 3960.65 227951.30 00:12:49.649 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.649 Verification LBA range: start 0x0 length 0x200 00:12:49.649 Malloc2p5 : 5.21 590.04 2.30 0.00 0.00 212502.45 2507.46 176890.21 00:12:49.649 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.649 Verification LBA range: start 0x200 length 0x200 00:12:49.649 Malloc2p5 : 5.27 486.18 1.90 0.00 0.00 256713.67 3960.65 226127.69 00:12:49.649 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.649 Verification LBA range: start 0x0 length 0x200 00:12:49.649 Malloc2p6 : 5.21 589.80 2.30 0.00 0.00 212125.26 3419.27 175066.60 00:12:49.649 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.649 Verification LBA range: start 0x200 length 0x200 00:12:49.649 Malloc2p6 : 5.27 485.94 1.90 0.00 0.00 256006.81 3091.59 224304.08 00:12:49.649 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.649 Verification LBA range: start 0x0 length 0x200 00:12:49.649 Malloc2p7 : 5.21 589.57 2.30 0.00 0.00 211638.56 3462.01 171419.38 00:12:49.649 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.649 Verification LBA range: start 0x200 length 0x200 00:12:49.649 Malloc2p7 : 5.27 485.71 1.90 0.00 0.00 255388.87 3333.79 217921.45 00:12:49.649 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.649 Verification LBA range: start 0x0 length 0x1000 00:12:49.649 TestPT : 5.23 587.77 2.30 0.00 0.00 211595.17 12651.30 171419.38 00:12:49.649 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.649 Verification LBA range: start 0x1000 length 0x1000 00:12:49.649 TestPT : 5.24 463.91 1.81 0.00 0.00 266529.91 49921.34 218833.25 00:12:49.649 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.649 Verification LBA range: start 0x0 length 0x2000 00:12:49.649 raid0 : 5.22 589.06 2.30 0.00 0.00 210639.22 2806.65 160477.72 00:12:49.649 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.649 Verification LBA range: start 0x2000 length 0x2000 00:12:49.649 raid0 : 5.27 485.46 1.90 0.00 0.00 254085.38 3476.26 200597.15 00:12:49.649 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.649 Verification LBA range: start 0x0 length 0x2000 00:12:49.649 concat0 : 5.22 588.82 2.30 0.00 0.00 210232.05 2336.50 162301.33 00:12:49.649 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.649 Verification LBA range: start 0x2000 length 0x2000 00:12:49.649 concat0 : 5.28 485.23 1.90 0.00 0.00 253372.83 3390.78 193302.71 00:12:49.649 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.649 Verification LBA range: start 0x0 length 0x1000 00:12:49.649 raid1 : 5.22 588.58 2.30 0.00 0.00 209820.12 3120.08 171419.38 00:12:49.649 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.649 Verification LBA range: start 0x1000 length 0x1000 00:12:49.649 raid1 : 5.28 484.99 1.89 0.00 0.00 252606.29 4131.62 200597.15 00:12:49.649 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.649 Verification LBA range: start 0x0 
length 0x4e2 00:12:49.649 AIO0 : 5.23 611.84 2.39 0.00 0.00 201388.57 968.79 179625.63 00:12:49.649 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.649 Verification LBA range: start 0x4e2 length 0x4e2 00:12:49.649 AIO0 : 5.28 484.80 1.89 0.00 0.00 251830.07 1666.89 209715.20 00:12:49.649 =================================================================================================================== 00:12:49.649 Total : 18143.39 70.87 0.00 0.00 221016.87 527.14 428548.45 00:12:49.649 00:12:49.649 real 0m6.414s 00:12:49.649 user 0m11.954s 00:12:49.649 sys 0m0.398s 00:12:49.649 02:18:39 blockdev_general.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:49.649 02:18:39 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:12:49.649 ************************************ 00:12:49.649 END TEST bdev_verify 00:12:49.649 ************************************ 00:12:49.649 02:18:39 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:12:49.649 02:18:39 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:49.649 02:18:39 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:12:49.649 02:18:39 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:49.649 02:18:39 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:49.649 ************************************ 00:12:49.649 START TEST bdev_verify_big_io 00:12:49.649 ************************************ 00:12:49.649 02:18:39 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:49.649 [2024-07-11 02:18:39.645481] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
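The big-I/O pass starting here keeps -q 128 but raises the I/O size to -o 65536, and the WARNING lines that follow show bdevperf capping each small bdev's effective queue depth. Both quoted caps are consistent with bdev_bytes / (2 * io_size); the halving is an inference (plausibly reserving half the LBA range for verify's read-back phase), not something the log states:

    # Cross-check of the capped depths quoted in the warnings below
    # (assumption: cap = bdev_bytes / (2 * io_size); the factor 2 is inferred).
    echo $(( 8192 * 512  / (2 * 65536) ))   # Malloc2p0..p7: 8192 blocks x 512 B -> 32
    echo $(( 5000 * 2048 / (2 * 65536) ))   # AIO0: 5000 blocks x 2048 B         -> 78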
00:12:49.649 [2024-07-11 02:18:39.645549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1881648 ] 00:12:49.649 [2024-07-11 02:18:39.783714] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:49.649 [2024-07-11 02:18:39.836385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.650 [2024-07-11 02:18:39.836390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.650 [2024-07-11 02:18:39.980976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:49.650 [2024-07-11 02:18:39.981030] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:49.650 [2024-07-11 02:18:39.981044] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:49.650 [2024-07-11 02:18:39.988984] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:49.650 [2024-07-11 02:18:39.989012] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:49.650 [2024-07-11 02:18:39.996997] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:49.650 [2024-07-11 02:18:39.997023] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:49.909 [2024-07-11 02:18:40.074939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:49.909 [2024-07-11 02:18:40.074985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.909 [2024-07-11 02:18:40.075003] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1080070 00:12:49.909 [2024-07-11 02:18:40.075016] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.909 [2024-07-11 02:18:40.076502] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.909 [2024-07-11 02:18:40.076532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:49.909 [2024-07-11 02:18:40.246883] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:12:49.909 [2024-07-11 02:18:40.248358] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:12:49.909 [2024-07-11 02:18:40.250458] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:12:49.909 [2024-07-11 02:18:40.251926] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32
00:12:49.909 [2024-07-11 02:18:40.253659] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32
00:12:49.909 [2024-07-11 02:18:40.254730] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32
00:12:49.909 [2024-07-11 02:18:40.256380] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32
00:12:49.909 [2024-07-11 02:18:40.258041] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32
00:12:49.909 [2024-07-11 02:18:40.259125] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32
00:12:49.909 [2024-07-11 02:18:40.260789] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32
00:12:49.909 [2024-07-11 02:18:40.261883] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32
00:12:49.909 [2024-07-11 02:18:40.263343] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32
00:12:49.909 [2024-07-11 02:18:40.264246] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32
00:12:49.909 [2024-07-11 02:18:40.265670] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32
00:12:49.909 [2024-07-11 02:18:40.266594] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32
00:12:49.909 [2024-07-11 02:18:40.268016] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32
00:12:49.909 [2024-07-11 02:18:40.293356] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78
00:12:49.910 [2024-07-11 02:18:40.295444] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78
00:12:50.168 Running I/O for 5 seconds...
00:12:58.290
00:12:58.290 Latency(us)
00:12:58.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:58.290 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x0 length 0x100
00:12:58.290 Malloc0 : 6.38 140.43 8.78 0.00 0.00 892943.74 872.63 2159154.75
00:12:58.290 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x100 length 0x100
00:12:58.290 Malloc0 : 6.09 126.06 7.88 0.00 0.00 991283.29 1125.51 2465521.31
00:12:58.290 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x0 length 0x80
00:12:58.290 Malloc1p0 : 6.57 74.31 4.64 0.00 0.00 1596028.87 2478.97 3340854.32
00:12:58.290 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x80 length 0x80
00:12:58.290 Malloc1p0 : 7.19 28.95 1.81 0.00 0.00 3921864.90 1894.85 6068975.53
00:12:58.290 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x0 length 0x80
00:12:58.290 Malloc1p1 : 6.93 32.32 2.02 0.00 0.00 3479711.72 1474.56 5806375.62
00:12:58.290 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x80 length 0x80
00:12:58.290 Malloc1p1 : 7.19 28.94 1.81 0.00 0.00 3763957.81 1816.49 5806375.62
00:12:58.290 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x0 length 0x20
00:12:58.290 Malloc2p0 : 6.48 19.76 1.24 0.00 0.00 1427730.80 644.67 2494699.07
00:12:58.290 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x20 length 0x20
00:12:58.290 Malloc2p0 : 6.57 19.49 1.22 0.00 0.00 1423820.29 780.02 2684354.56
00:12:58.290 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x0 length 0x20
00:12:58.290 Malloc2p1 : 6.48 19.76 1.23 0.00 0.00 1414196.88 633.99 2465521.31
00:12:58.290 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x20 length 0x20
00:12:58.290 Malloc2p1 : 6.57 19.48 1.22 0.00 0.00 1409014.02 876.19 2640587.91
00:12:58.290 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x0 length 0x20
00:12:58.290 Malloc2p2 : 6.48 19.76 1.23 0.00 0.00 1401500.01 641.11 2436343.54
00:12:58.290 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x20 length 0x20
00:12:58.290 Malloc2p2 : 6.57 19.48 1.22 0.00 0.00 1393567.96 776.46 2611410.14
00:12:58.290 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x0 length 0x20
00:12:58.290 Malloc2p3 : 6.48 19.75 1.23 0.00 0.00 1387071.90 762.21 2407165.77
00:12:58.290 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x20 length 0x20
00:12:58.290 Malloc2p3 : 6.57 19.47 1.22 0.00 0.00 1377948.01 783.58 2582232.38
00:12:58.290 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x0 length 0x20
00:12:58.290 Malloc2p4 : 6.48 19.75 1.23 0.00 0.00 1373573.13 633.99 2377988.01
00:12:58.290 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x20 length 0x20
00:12:58.290 Malloc2p4 : 6.57 19.47 1.22 0.00 0.00 1362034.85 772.90 2553054.61
00:12:58.290 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x0 length 0x20
00:12:58.290 Malloc2p5 : 6.48 19.74 1.23 0.00 0.00 1359948.79 644.67 2348810.24
00:12:58.290 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x20 length 0x20
00:12:58.290 Malloc2p5 : 6.58 19.46 1.22 0.00 0.00 1346227.85 772.90 2509287.96
00:12:58.290 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x0 length 0x20
00:12:58.290 Malloc2p6 : 6.48 19.74 1.23 0.00 0.00 1345609.22 644.67 2334221.36
00:12:58.290 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x20 length 0x20
00:12:58.290 Malloc2p6 : 6.58 19.46 1.22 0.00 0.00 1329985.18 769.34 2480110.19
00:12:58.290 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x0 length 0x20
00:12:58.290 Malloc2p7 : 6.49 19.74 1.23 0.00 0.00 1332212.19 630.43 2305043.59
00:12:58.290 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x20 length 0x20
00:12:58.290 Malloc2p7 : 6.58 19.45 1.22 0.00 0.00 1313520.60 769.34 2450932.42
00:12:58.290 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x0 length 0x100
00:12:58.290 TestPT : 6.93 30.30 1.89 0.00 0.00 3302549.81 121270.09 3822287.47
00:12:58.290 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x100 length 0x100
00:12:58.290 TestPT : 7.22 28.83 1.80 0.00 0.00 3374400.42 97563.16 3720165.29
00:12:58.290 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x0 length 0x200
00:12:58.290 raid0 : 7.01 34.22 2.14 0.00 0.00 2810730.27 1538.67 4931042.62
00:12:58.290 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x200 length 0x200
00:12:58.290 raid0 : 7.08 38.43 2.40 0.00 0.00 2450223.56 1951.83 4843509.31
00:12:58.290 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x0 length 0x200
00:12:58.290 concat0 : 6.93 43.85 2.74 0.00 0.00 2154506.86 1552.92 4726798.25
00:12:58.290 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x200 length 0x200
00:12:58.290 concat0 : 7.16 50.53 3.16 0.00 0.00 1827615.34 1966.08 4639264.95
00:12:58.290 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x0 length 0x100
00:12:58.290 raid1 : 7.02 50.46 3.15 0.00 0.00 1827312.97 2023.07 4522553.88
00:12:58.290 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x100 length 0x100
00:12:58.290 raid1 : 7.19 48.95 3.06 0.00 0.00 1809712.55 2535.96 4405842.81
00:12:58.290 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x0 length 0x4e
00:12:58.290 AIO0 : 7.12 59.82 3.74 0.00 0.00 916565.24 808.51 3530509.80
00:12:58.290 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536)
00:12:58.290 Verification LBA range: start 0x4e length 0x4e
00:12:58.290 AIO0 : 7.35 87.61 5.48 0.00 0.00 598529.86 463.03 3603454.22
00:12:58.291 ===================================================================================================================
00:12:58.291 Total : 1217.78 76.11 0.00 0.00 1661824.43 463.03 6068975.53
00:12:58.291
00:12:58.291 real 0m8.520s
00:12:58.291 user 0m16.097s
00:12:58.291 sys 0m0.445s
00:12:58.291 02:18:48 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:58.291 02:18:48 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:12:58.291 ************************************
00:12:58.291 END TEST bdev_verify_big_io
00:12:58.291 ************************************
00:12:58.291 02:18:48 blockdev_general -- common/autotest_common.sh@1142 -- # return 0
00:12:58.291 02:18:48 blockdev_general -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:12:58.291 02:18:48 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:12:58.291 02:18:48 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:58.291 02:18:48 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:12:58.291 ************************************
00:12:58.291 START TEST bdev_write_zeroes
00:12:58.291 ************************************
00:12:58.291 02:18:48 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:12:58.291 [2024-07-11 02:18:48.251260] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
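Note on the WARNING burst at the top of the verify run above: bdevperf clamps the requested verify queue depth (-q 128) to the number of IO requests each bdev can accept simultaneously (32 for the Malloc2pX splits, 78 for AIO0), so the run proceeds at the reduced effective depth. A minimal sketch of a standalone rerun that stays under the Malloc limit and so avoids the clamp warnings (a hypothetical invocation, reusing the flags and paths visible in the log, from the spdk source tree):

    # verify workload at queue depth 32 instead of 128; 64 KiB IOs, 5 s run
    ./build/examples/bdevperf --json test/bdev/bdev.json -q 32 -o 65536 -w verify -t 5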
00:12:58.291 [2024-07-11 02:18:48.251322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1882779 ]
00:12:58.291 [2024-07-11 02:18:48.386880] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:58.291 [2024-07-11 02:18:48.438197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:58.291 [2024-07-11 02:18:48.585554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:12:58.291 [2024-07-11 02:18:48.585614] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:12:58.291 [2024-07-11 02:18:48.585628] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:12:58.291 [2024-07-11 02:18:48.593561] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:12:58.291 [2024-07-11 02:18:48.593588] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:12:58.291 [2024-07-11 02:18:48.601574] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:12:58.291 [2024-07-11 02:18:48.601599] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:12:58.291 [2024-07-11 02:18:48.678277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:12:58.291 [2024-07-11 02:18:48.678325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:58.291 [2024-07-11 02:18:48.678344] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x171d890
00:12:58.291 [2024-07-11 02:18:48.678356] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:58.291 [2024-07-11 02:18:48.679742] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:58.291 [2024-07-11 02:18:48.679776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:12:58.550 Running I/O for 1 seconds...
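The vbdev_passthru notices above show the two-phase bring-up of the TestPT bdev: its creation is first deferred because the base bdev Malloc3 has not been examined yet, then once Malloc3 arrives the base is opened, claimed, and the passthru is registered on top of it. A rough manual equivalent against a running SPDK app (a hypothetical sketch; flag spelling per the stock rpc.py helper, assuming a Malloc3 base bdev already exists):

    # wrap base bdev Malloc3 in a passthru vbdev named TestPT
    scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT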
00:12:59.929
00:12:59.929 Latency(us)
00:12:59.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:59.929 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:59.929 Malloc0 : 1.05 5005.76 19.55 0.00 0.00 25544.91 683.85 42854.85
00:12:59.929 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:59.929 Malloc1p0 : 1.05 4998.66 19.53 0.00 0.00 25535.94 904.68 41943.04
00:12:59.929 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:59.929 Malloc1p1 : 1.05 4991.62 19.50 0.00 0.00 25515.46 904.68 41031.23
00:12:59.929 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:59.929 Malloc2p0 : 1.05 4984.53 19.47 0.00 0.00 25492.02 901.12 40119.43
00:12:59.929 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:59.929 Malloc2p1 : 1.05 4977.55 19.44 0.00 0.00 25474.56 897.56 39207.62
00:12:59.929 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:59.929 Malloc2p2 : 1.06 4970.58 19.42 0.00 0.00 25452.49 904.68 38295.82
00:12:59.929 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:59.929 Malloc2p3 : 1.06 4963.57 19.39 0.00 0.00 25432.91 901.12 37384.01
00:12:59.929 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:59.929 Malloc2p4 : 1.06 4956.65 19.36 0.00 0.00 25415.30 901.12 36700.16
00:12:59.929 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:59.929 Malloc2p5 : 1.06 4949.74 19.33 0.00 0.00 25397.97 901.12 35788.35
00:12:59.929 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:59.929 Malloc2p6 : 1.06 4942.79 19.31 0.00 0.00 25381.14 901.12 34876.55
00:12:59.929 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:59.929 Malloc2p7 : 1.06 4935.90 19.28 0.00 0.00 25359.92 890.43 33964.74
00:12:59.929 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:59.929 TestPT : 1.06 4929.06 19.25 0.00 0.00 25335.82 933.18 33052.94
00:12:59.929 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:59.929 raid0 : 1.07 4921.13 19.22 0.00 0.00 25307.89 1602.78 31229.33
00:12:59.929 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:59.929 concat0 : 1.07 4913.41 19.19 0.00 0.00 25252.85 1588.54 29633.67
00:12:59.929 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:59.929 raid1 : 1.07 4903.76 19.16 0.00 0.00 25193.06 2535.96 27126.21
00:12:59.929 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:59.929 AIO0 : 1.07 4897.83 19.13 0.00 0.00 25100.52 1054.27 26100.42
00:12:59.929 ===================================================================================================================
00:12:59.929 Total : 79242.54 309.54 0.00 0.00 25387.05 683.85 42854.85
00:12:59.929
00:12:59.929 real 0m2.146s
00:12:59.929 user 0m1.737s
00:12:59.929 sys 0m0.358s
00:12:59.929 02:18:50 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:59.929 02:18:50 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:12:59.929 ************************************
00:12:59.929 END TEST bdev_write_zeroes
00:12:59.929 ************************************
00:13:00.188 02:18:50 blockdev_general -- common/autotest_common.sh@1142 -- # return 0
00:13:00.188 02:18:50 blockdev_general -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:00.188 02:18:50 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:13:00.188 02:18:50 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:00.188 02:18:50 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:13:00.188 ************************************
00:13:00.188 START TEST bdev_json_nonenclosed
00:13:00.188 ************************************
00:13:00.188 02:18:50 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:00.188 [2024-07-11 02:18:50.479145] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
00:13:00.188 [2024-07-11 02:18:50.479207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1883065 ]
00:13:00.447 [2024-07-11 02:18:50.615262] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:00.447 [2024-07-11 02:18:50.664092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:13:00.447 [2024-07-11 02:18:50.664158] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
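This is the first of two negative JSON-config tests: bdevperf is deliberately fed a config whose content is not wrapped in a top-level object, and json_config rejects it before any bdevs are created. The contents of nonenclosed.json are not printed in the log, so the following shape comparison is illustrative only (a sketch, not the actual test file):

    # bare key/value at top level -> rejected: "not enclosed in {}"
    #   "subsystems": []
    # minimal enclosed form the parser accepts
    #   { "subsystems": [] }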
00:13:00.447 [2024-07-11 02:18:50.664179] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:13:00.447 [2024-07-11 02:18:50.664191] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:00.447
00:13:00.447 real 0m0.326s
00:13:00.447 user 0m0.172s
00:13:00.447 sys 0m0.152s
00:13:00.447 02:18:50 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234
00:13:00.447 02:18:50 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable
00:13:00.447 02:18:50 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:13:00.447 ************************************
00:13:00.447 END TEST bdev_json_nonenclosed
00:13:00.447 ************************************
00:13:00.447 02:18:50 blockdev_general -- common/autotest_common.sh@1142 -- # return 234
00:13:00.447 02:18:50 blockdev_general -- bdev/blockdev.sh@782 -- # true
00:13:00.447 02:18:50 blockdev_general -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:00.447 02:18:50 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:13:00.447 02:18:50 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:00.447 02:18:50 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:13:00.447 ************************************
00:13:00.447 START TEST bdev_json_nonarray
00:13:00.447 ************************************
00:13:00.447 02:18:50 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:00.706 [2024-07-11 02:18:50.884007] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
00:13:00.706 [2024-07-11 02:18:50.884071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1883170 ]
00:13:00.706 [2024-07-11 02:18:51.018968] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:00.706 [2024-07-11 02:18:51.067238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:13:00.706 [2024-07-11 02:18:51.067309] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
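The second negative case trips the next check in json_config_prepare_ctx: the config is a valid top-level object, but its "subsystems" member is not an array. Note also how the harness handles both failures: the nonzero exit status (es=234) is the expected outcome, and the `true` traced at bdev/blockdev.sh@782/@785 keeps the suite running under `set -e`. The expected-failure pattern, as a standalone sketch (nonarray.json itself is not printed in the log, so the shapes are illustrative):

    #   { "subsystems": {} }   -> rejected: 'subsystems' should be an array
    #   { "subsystems": [] }   -> accepted shape
    # the harness swallows the expected failure roughly like this:
    ./build/examples/bdevperf --json bad.json -q 128 -o 4096 -w write_zeroes -t 1 || true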
00:13:00.706 [2024-07-11 02:18:51.067330] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:13:00.706 [2024-07-11 02:18:51.067342] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:00.966
00:13:00.966 real 0m0.326s
00:13:00.966 user 0m0.169s
00:13:00.966 sys 0m0.154s
00:13:00.966 02:18:51 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234
00:13:00.966 02:18:51 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable
00:13:00.966 02:18:51 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:13:00.966 ************************************
00:13:00.966 END TEST bdev_json_nonarray
00:13:00.966 ************************************
00:13:00.966 02:18:51 blockdev_general -- common/autotest_common.sh@1142 -- # return 234
00:13:00.966 02:18:51 blockdev_general -- bdev/blockdev.sh@785 -- # true
00:13:00.966 02:18:51 blockdev_general -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]]
00:13:00.966 02:18:51 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite ''
00:13:00.966 02:18:51 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:13:00.966 02:18:51 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:00.966 02:18:51 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:13:00.966 ************************************
00:13:00.966 START TEST bdev_qos
00:13:00.966 ************************************
00:13:00.966 02:18:51 blockdev_general.bdev_qos -- common/autotest_common.sh@1123 -- # qos_test_suite ''
00:13:00.966 02:18:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # QOS_PID=1883195
00:13:00.966 02:18:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 1883195'
00:13:00.966 Process qos testing pid: 1883195
00:13:00.966 02:18:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT
00:13:00.966 02:18:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@449 -- # waitforlisten 1883195
00:13:00.966 02:18:51 blockdev_general.bdev_qos -- common/autotest_common.sh@829 -- # '[' -z 1883195 ']'
00:13:00.966 02:18:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 ''
00:13:00.966 02:18:51 blockdev_general.bdev_qos -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:00.966 02:18:51 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # local max_retries=100
00:13:00.966 02:18:51 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:00.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:00.966 02:18:51 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # xtrace_disable
00:13:00.966 02:18:51 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:00.966 [2024-07-11 02:18:51.299661] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
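Unlike the one-shot runs above, the QoS suite starts bdevperf with -z, which brings the app up idle and RPC-driven; the harness then creates its test bdevs over /var/tmp/spdk.sock and kicks off IO with the bdevperf.py helper. The shape of that pattern, as a simplified sketch (commands and flags taken from the trace; the waitforlisten/cleanup plumbing of the real harness is omitted):

    # start bdevperf in wait-for-RPC mode on core mask 0x2
    ./build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' &
    QOS_PID=$!
    # ...wait for /var/tmp/spdk.sock, create Malloc_0/Null_1 via rpc.py...
    ./examples/bdev/bdevperf/bdevperf.py perform_tests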
00:13:00.966 [2024-07-11 02:18:51.299725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1883195 ]
00:13:01.225 [2024-07-11 02:18:51.443197] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:01.225 [2024-07-11 02:18:51.500183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:13:02.163 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:02.163 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@862 -- # return 0
00:13:02.163 02:18:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512
00:13:02.163 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:02.163 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:02.163 Malloc_0
00:13:02.163 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:02.163 02:18:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0
00:13:02.163 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0
00:13:02.163 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:02.163 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i
00:13:02.163 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:02.163 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:02.163 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine
00:13:02.163 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:02.163 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:02.163 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:02.163 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000
00:13:02.163 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:02.163 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:02.163 [
00:13:02.163 {
00:13:02.163 "name": "Malloc_0",
00:13:02.163 "aliases": [
00:13:02.163 "84b39e01-711f-4791-a839-78075a8c15a0"
00:13:02.163 ],
00:13:02.163 "product_name": "Malloc disk",
00:13:02.163 "block_size": 512,
00:13:02.163 "num_blocks": 262144,
00:13:02.422 "uuid": "84b39e01-711f-4791-a839-78075a8c15a0",
00:13:02.422 "assigned_rate_limits": {
00:13:02.422 "rw_ios_per_sec": 0,
00:13:02.422 "rw_mbytes_per_sec": 0,
00:13:02.422 "r_mbytes_per_sec": 0,
00:13:02.422 "w_mbytes_per_sec": 0
00:13:02.422 },
00:13:02.422 "claimed": false,
00:13:02.422 "zoned": false,
00:13:02.422 "supported_io_types": {
00:13:02.422 "read": true,
00:13:02.422 "write": true,
00:13:02.422 "unmap": true,
00:13:02.422 "flush": true,
00:13:02.422 "reset": true,
00:13:02.422 "nvme_admin": false,
00:13:02.422 "nvme_io": false,
00:13:02.422 "nvme_io_md": false,
00:13:02.422 "write_zeroes": true,
00:13:02.422 "zcopy": true,
00:13:02.422 "get_zone_info": false,
00:13:02.422 "zone_management": false,
00:13:02.422 "zone_append": false,
00:13:02.422 "compare": false,
00:13:02.422 "compare_and_write": false,
00:13:02.422 "abort": true,
00:13:02.422 "seek_hole": false,
00:13:02.422 "seek_data": false,
00:13:02.422 "copy": true,
00:13:02.422 "nvme_iov_md": false
00:13:02.422 },
00:13:02.422 "memory_domains": [
00:13:02.422 {
00:13:02.422 "dma_device_id": "system",
00:13:02.422 "dma_device_type": 1
00:13:02.422 },
00:13:02.422 {
00:13:02.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:02.422 "dma_device_type": 2
00:13:02.422 }
00:13:02.422 ],
00:13:02.422 "driver_specific": {}
00:13:02.422 }
00:13:02.422 ]
00:13:02.422 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:02.423 Null_1
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@454 -- # waitforbdev Null_1
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Null_1
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:02.423 [
00:13:02.423 {
00:13:02.423 "name": "Null_1",
00:13:02.423 "aliases": [
00:13:02.423 "93911eb2-ae10-486f-8f9a-6e95157936c3"
00:13:02.423 ],
00:13:02.423 "product_name": "Null disk",
00:13:02.423 "block_size": 512,
00:13:02.423 "num_blocks": 262144,
00:13:02.423 "uuid": "93911eb2-ae10-486f-8f9a-6e95157936c3",
00:13:02.423 "assigned_rate_limits": {
00:13:02.423 "rw_ios_per_sec": 0,
00:13:02.423 "rw_mbytes_per_sec": 0,
00:13:02.423 "r_mbytes_per_sec": 0,
00:13:02.423 "w_mbytes_per_sec": 0
00:13:02.423 },
00:13:02.423 "claimed": false,
00:13:02.423 "zoned": false,
00:13:02.423 "supported_io_types": {
00:13:02.423 "read": true,
00:13:02.423 "write": true,
00:13:02.423 "unmap": false,
00:13:02.423 "flush": false,
00:13:02.423 "reset": true,
00:13:02.423 "nvme_admin": false,
00:13:02.423 "nvme_io": false,
00:13:02.423 "nvme_io_md": false,
00:13:02.423 "write_zeroes": true,
00:13:02.423 "zcopy": false,
00:13:02.423 "get_zone_info": false,
00:13:02.423 "zone_management": false,
00:13:02.423 "zone_append": false,
00:13:02.423 "compare": false,
00:13:02.423 "compare_and_write": false,
00:13:02.423 "abort": true,
00:13:02.423 "seek_hole": false,
00:13:02.423 "seek_data": false,
00:13:02.423 "copy": false,
00:13:02.423 "nvme_iov_md": false
00:13:02.423 },
00:13:02.423 "driver_specific": {}
00:13:02.423 }
00:13:02.423 ]
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@457 -- # qos_function_test
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local io_result=0
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local iops_limit=0
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@414 -- # local bw_limit=0
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=IOPS
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1
00:13:02.423 02:18:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Malloc_0
00:13:02.682 Running I/O for 60 seconds...
00:13:07.970 02:18:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 48965.29 195861.18 0.00 0.00 197632.00 0.00 0.00 '
00:13:07.970 02:18:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']'
00:13:07.970 02:18:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # awk '{print $2}'
00:13:07.970 02:18:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # iostat_result=48965.29
00:13:07.970 02:18:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 48965
00:13:07.970 02:18:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # io_result=48965
00:13:07.970 02:18:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # iops_limit=12000
00:13:07.970 02:18:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@419 -- # '[' 12000 -gt 1000 ']'
00:13:07.970 02:18:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 12000 Malloc_0
00:13:07.970 02:18:57 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:07.970 02:18:57 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:07.970 02:18:57 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:07.970 02:18:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 12000 IOPS Malloc_0
00:13:07.970 02:18:57 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:13:07.970 02:18:57 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:07.970 02:18:57 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:07.970 ************************************
00:13:07.970 START TEST bdev_qos_iops
00:13:07.970 ************************************
00:13:07.970 02:18:57 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1123 -- # run_qos_test 12000 IOPS Malloc_0
00:13:07.970 02:18:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_limit=12000
00:13:07.970 02:18:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@390 -- # local qos_result=0
00:13:07.970 02:18:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0
00:13:07.970 02:18:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local limit_type=IOPS
00:13:07.970 02:18:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0
00:13:07.970 02:18:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # local iostat_result
00:13:07.970 02:18:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5
00:13:07.970 02:18:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # grep Malloc_0
00:13:07.970 02:18:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # tail -1
00:13:13.247 02:19:03 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 11999.21 47996.85 0.00 0.00 49488.00 0.00 0.00 '
00:13:13.247 02:19:03 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']'
00:13:13.247 02:19:03 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # awk '{print $2}'
00:13:13.247 02:19:03 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # iostat_result=11999.21
00:13:13.247 02:19:03 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@385 -- # echo 11999
00:13:13.247 02:19:03 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # qos_result=11999
00:13:13.247 02:19:03 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']'
00:13:13.247 02:19:03 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # lower_limit=10800
00:13:13.247 02:19:03 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@397 -- # upper_limit=13200
00:13:13.247 02:19:03 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 11999 -lt 10800 ']'
00:13:13.247 02:19:03 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 11999 -gt 13200 ']'
00:13:13.247
00:13:13.247 real 0m5.291s
00:13:13.247 user 0m0.125s
00:13:13.247 sys 0m0.045s
00:13:13.247 02:19:03 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1124 -- # xtrace_disable
00:13:13.247 02:19:03 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x
00:13:13.247 ************************************
00:13:13.247 END TEST bdev_qos_iops
00:13:13.247 ************************************
00:13:13.247 02:19:03 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0
00:13:13.247 02:19:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1
00:13:13.247 02:19:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH
00:13:13.247 02:19:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1
00:13:13.247 02:19:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result
00:13:13.247 02:19:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5
00:13:13.247 02:19:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Null_1
00:13:13.247 02:19:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1
00:13:18.518 02:19:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 15392.24 61568.97 0.00 0.00 63488.00 0.00 0.00 '
00:13:18.518 02:19:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']'
00:13:18.518 02:19:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:13:18.518 02:19:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # awk '{print $6}'
00:13:18.518 02:19:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # iostat_result=63488.00
00:13:18.518 02:19:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 63488
00:13:18.518 02:19:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=63488
00:13:18.518 02:19:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # bw_limit=6
00:13:18.518 02:19:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@429 -- # '[' 6 -lt 2 ']'
00:13:18.518 02:19:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 6 Null_1
00:13:18.518 02:19:08 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:18.518 02:19:08 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:18.518 02:19:08 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:18.518 02:19:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 6 BANDWIDTH Null_1
00:13:18.518 02:19:08 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:13:18.518 02:19:08 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:18.518 02:19:08 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:18.518 ************************************
00:13:18.518 START TEST bdev_qos_bw
00:13:18.518 ************************************
00:13:18.518 02:19:08 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1123 -- # run_qos_test 6 BANDWIDTH Null_1
00:13:18.518 02:19:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_limit=6
00:13:18.518 02:19:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@390 -- # local qos_result=0
00:13:18.518 02:19:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1
00:13:18.518 02:19:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH
00:13:18.518 02:19:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1
00:13:18.518 02:19:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # local iostat_result
00:13:18.518 02:19:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5
00:13:18.518 02:19:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # grep Null_1
00:13:18.518 02:19:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # tail -1
00:13:23.879 02:19:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 1535.83 6143.32 0.00 0.00 6364.00 0.00 0.00 '
00:13:23.879 02:19:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']'
00:13:23.879 02:19:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:13:23.879 02:19:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}'
00:13:23.879 02:19:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # iostat_result=6364.00
00:13:23.880 02:19:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@385 -- # echo 6364
00:13:23.880 02:19:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # qos_result=6364
00:13:23.880 02:19:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:13:23.880 02:19:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@394 -- # qos_limit=6144
00:13:23.880 02:19:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # lower_limit=5529
00:13:23.880 02:19:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@397 -- # upper_limit=6758
00:13:23.880 02:19:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 6364 -lt 5529 ']'
00:13:23.880 02:19:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 6364 -gt 6758 ']'
00:13:23.880
00:13:23.880 real 0m5.337s
00:13:23.880 user 0m0.115s
00:13:23.880 sys 0m0.054s
00:13:23.880 02:19:13 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1124 -- # xtrace_disable
00:13:23.880 02:19:13 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x
00:13:23.880 ************************************
00:13:23.880 END TEST bdev_qos_bw
00:13:23.880 ************************************
00:13:23.880 02:19:14 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0
00:13:23.880 02:19:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0
00:13:23.880 02:19:14 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:23.880 02:19:14 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:23.880 02:19:14 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:23.880 02:19:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0
00:13:23.880 02:19:14 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:13:23.880 02:19:14 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:23.880 02:19:14 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:23.880 ************************************
00:13:23.880 START TEST bdev_qos_ro_bw
00:13:23.880 ************************************
00:13:23.880 02:19:14 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1123 -- # run_qos_test 2 BANDWIDTH Malloc_0
00:13:23.880 02:19:14 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_limit=2
00:13:23.880 02:19:14 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@390 -- # local qos_result=0
00:13:23.880 02:19:14 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0
00:13:23.880 02:19:14 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH
00:13:23.880 02:19:14 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0
00:13:23.880 02:19:14 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # local iostat_result
00:13:23.880 02:19:14 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5
00:13:23.880 02:19:14 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # grep Malloc_0
00:13:23.880 02:19:14 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # tail -1
00:13:29.157 02:19:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 512.57 2050.30 0.00 0.00 2064.00 0.00 0.00 '
00:13:29.157 02:19:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']'
00:13:29.157 02:19:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:13:29.157 02:19:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}'
00:13:29.157 02:19:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # iostat_result=2064.00
00:13:29.157 02:19:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@385 -- # echo 2064
00:13:29.157 02:19:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # qos_result=2064
00:13:29.157 02:19:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:13:29.157 02:19:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@394 -- # qos_limit=2048
00:13:29.157 02:19:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # lower_limit=1843
00:13:29.157 02:19:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@397 -- # upper_limit=2252
00:13:29.157 02:19:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2064 -lt 1843 ']'
00:13:29.157 02:19:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2064 -gt 2252 ']'
00:13:29.157
00:13:29.157 real 0m5.195s
00:13:29.157 user 0m0.110s
00:13:29.157 sys 0m0.061s
00:13:29.157 02:19:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1124 -- # xtrace_disable
00:13:29.157 02:19:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x
00:13:29.157 ************************************
00:13:29.157 END TEST bdev_qos_ro_bw
00:13:29.157 ************************************
00:13:29.157 02:19:19 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0
00:13:29.157 02:19:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0
00:13:29.157 02:19:19 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:29.157 02:19:19 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:29.723 02:19:19 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:29.723 02:19:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1
00:13:29.723 02:19:19 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:29.723 02:19:19 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:29.723
00:13:29.723 Latency(us)
00:13:29.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:29.723 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:13:29.723 Malloc_0 : 26.96 16312.66 63.72 0.00 0.00 15547.09 2607.19 503316.48
00:13:29.723 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:13:29.723 Null_1 : 27.16 15662.52 61.18 0.00 0.00 16288.77 1018.66 198773.54
00:13:29.723 ===================================================================================================================
00:13:29.723 Total : 31975.18 124.90 0.00 0.00 15911.76 1018.66 503316.48
00:13:29.723 0
00:13:29.723 02:19:20 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:29.723 02:19:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # killprocess 1883195
00:13:29.723 02:19:20 blockdev_general.bdev_qos -- common/autotest_common.sh@948 -- # '[' -z 1883195 ']'
00:13:29.723 02:19:20 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # kill -0 1883195
00:13:29.723 02:19:20 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # uname
00:13:29.723 02:19:20 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:13:29.981 02:19:20 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1883195
00:13:29.981 02:19:20 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:13:29.981 02:19:20 blockdev_general.bdev_qos -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:13:29.981 02:19:20 blockdev_general.bdev_qos -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1883195'
00:13:29.981 killing process with pid 1883195
00:13:29.981 02:19:20 blockdev_general.bdev_qos -- common/autotest_common.sh@967 -- # kill 1883195
00:13:29.981 Received shutdown signal, test time was about 27.228465 seconds
00:13:29.981
00:13:29.981 Latency(us)
00:13:29.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:29.981 ===================================================================================================================
00:13:29.981 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:13:29.981 02:19:20 blockdev_general.bdev_qos -- common/autotest_common.sh@972 -- # wait 1883195
00:13:30.240 02:19:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT
00:13:30.240
00:13:30.240 real 0m29.179s
00:13:30.240 user 0m30.211s
00:13:30.240 sys 0m1.025s
00:13:30.240 02:19:20 blockdev_general.bdev_qos -- common/autotest_common.sh@1124 -- # xtrace_disable
00:13:30.240 02:19:20 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:30.240 ************************************
00:13:30.240 END TEST bdev_qos
00:13:30.240 ************************************
00:13:30.240 02:19:20 blockdev_general -- common/autotest_common.sh@1142 -- # return 0
00:13:30.240 02:19:20 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite ''
00:13:30.240 02:19:20 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:13:30.240 02:19:20 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:30.240 02:19:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:13:30.240 ************************************
00:13:30.240 START TEST bdev_qd_sampling
00:13:30.240 ************************************
00:13:30.240 02:19:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1123 -- # qd_sampling_test_suite ''
00:13:30.240 02:19:20 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD
00:13:30.240 02:19:20 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # QD_PID=1887075
00:13:30.240 02:19:20 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 1887075'
00:13:30.240 Process bdev QD sampling period testing pid: 1887075
00:13:30.240 02:19:20 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C ''
00:13:30.240 02:19:20 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT
00:13:30.240 02:19:20 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@544 -- # waitforlisten 1887075
00:13:30.240 02:19:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@829 -- # '[' -z 1887075 ']'
00:13:30.240 02:19:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:30.240 02:19:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # local max_retries=100
00:13:30.240 02:19:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:30.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:30.240 02:19:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # xtrace_disable
00:13:30.240 02:19:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:13:30.240 [2024-07-11 02:19:20.565662] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
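The QD sampling suite that starts here exercises the queue-depth polling feature end to end: it creates a Malloc bdev, enables sampling, drives randread IO, then reads the measured depth back through the iostat RPC. The rpc calls it issues (all visible in the trace below; shown here as a standalone sketch against a running app, with the period value 10 taken from the log):

    scripts/rpc.py bdev_malloc_create -b Malloc_QD 128 512
    scripts/rpc.py bdev_set_qd_sampling_period Malloc_QD 10
    scripts/rpc.py bdev_get_iostat -b Malloc_QD   # reports queue_depth_polling_period and queue_depth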
00:13:30.240 [2024-07-11 02:19:20.565727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1887075 ] 00:13:30.499 [2024-07-11 02:19:20.704364] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:30.499 [2024-07-11 02:19:20.756031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.499 [2024-07-11 02:19:20.756036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.435 02:19:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:31.435 02:19:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@862 -- # return 0 00:13:31.435 02:19:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:13:31.435 02:19:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.435 02:19:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:31.435 Malloc_QD 00:13:31.435 02:19:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.435 02:19:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:13:31.435 02:19:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD 00:13:31.435 02:19:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:31.435 02:19:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local i 00:13:31.435 02:19:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:31.435 02:19:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:31.435 02:19:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:31.435 02:19:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.435 02:19:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:31.435 02:19:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.435 02:19:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:13:31.435 02:19:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.435 02:19:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:31.435 [ 00:13:31.435 { 00:13:31.435 "name": "Malloc_QD", 00:13:31.435 "aliases": [ 00:13:31.435 "995e2bbf-1130-4c30-bb00-348ddaaf0e2f" 00:13:31.435 ], 00:13:31.435 "product_name": "Malloc disk", 00:13:31.435 "block_size": 512, 00:13:31.435 "num_blocks": 262144, 00:13:31.435 "uuid": "995e2bbf-1130-4c30-bb00-348ddaaf0e2f", 00:13:31.435 "assigned_rate_limits": { 00:13:31.435 "rw_ios_per_sec": 0, 00:13:31.435 "rw_mbytes_per_sec": 0, 00:13:31.435 "r_mbytes_per_sec": 0, 00:13:31.435 "w_mbytes_per_sec": 0 00:13:31.435 }, 00:13:31.435 "claimed": false, 00:13:31.435 "zoned": false, 00:13:31.435 "supported_io_types": { 00:13:31.435 "read": true, 00:13:31.435 "write": true, 00:13:31.435 "unmap": true, 00:13:31.435 "flush": true, 00:13:31.435 "reset": true, 00:13:31.435 "nvme_admin": false, 00:13:31.435 
"nvme_io": false, 00:13:31.435 "nvme_io_md": false, 00:13:31.435 "write_zeroes": true, 00:13:31.435 "zcopy": true, 00:13:31.435 "get_zone_info": false, 00:13:31.435 "zone_management": false, 00:13:31.435 "zone_append": false, 00:13:31.435 "compare": false, 00:13:31.435 "compare_and_write": false, 00:13:31.435 "abort": true, 00:13:31.435 "seek_hole": false, 00:13:31.435 "seek_data": false, 00:13:31.435 "copy": true, 00:13:31.435 "nvme_iov_md": false 00:13:31.435 }, 00:13:31.435 "memory_domains": [ 00:13:31.435 { 00:13:31.435 "dma_device_id": "system", 00:13:31.435 "dma_device_type": 1 00:13:31.435 }, 00:13:31.435 { 00:13:31.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.435 "dma_device_type": 2 00:13:31.435 } 00:13:31.435 ], 00:13:31.435 "driver_specific": {} 00:13:31.435 } 00:13:31.435 ] 00:13:31.435 02:19:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.435 02:19:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # return 0 00:13:31.435 02:19:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # sleep 2 00:13:31.435 02:19:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:31.435 Running I/O for 5 seconds... 00:13:33.341 02:19:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:13:33.341 02:19:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:13:33.341 02:19:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:13:33.341 02:19:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@521 -- # local iostats 00:13:33.341 02:19:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:13:33.341 02:19:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.341 02:19:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:33.341 02:19:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.341 02:19:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:13:33.341 02:19:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.341 02:19:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:33.341 02:19:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.341 02:19:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # iostats='{ 00:13:33.341 "tick_rate": 2300000000, 00:13:33.341 "ticks": 10252706834303404, 00:13:33.341 "bdevs": [ 00:13:33.341 { 00:13:33.341 "name": "Malloc_QD", 00:13:33.341 "bytes_read": 710980096, 00:13:33.341 "num_read_ops": 173572, 00:13:33.341 "bytes_written": 0, 00:13:33.341 "num_write_ops": 0, 00:13:33.341 "bytes_unmapped": 0, 00:13:33.341 "num_unmap_ops": 0, 00:13:33.341 "bytes_copied": 0, 00:13:33.341 "num_copy_ops": 0, 00:13:33.341 "read_latency_ticks": 2237413905924, 00:13:33.341 "max_read_latency_ticks": 17249508, 00:13:33.341 "min_read_latency_ticks": 246480, 00:13:33.341 "write_latency_ticks": 0, 00:13:33.341 "max_write_latency_ticks": 0, 00:13:33.341 "min_write_latency_ticks": 0, 00:13:33.341 "unmap_latency_ticks": 0, 00:13:33.341 "max_unmap_latency_ticks": 0, 00:13:33.341 
"min_unmap_latency_ticks": 0, 00:13:33.341 "copy_latency_ticks": 0, 00:13:33.341 "max_copy_latency_ticks": 0, 00:13:33.341 "min_copy_latency_ticks": 0, 00:13:33.341 "io_error": {}, 00:13:33.341 "queue_depth_polling_period": 10, 00:13:33.341 "queue_depth": 512, 00:13:33.341 "io_time": 30, 00:13:33.341 "weighted_io_time": 15360 00:13:33.341 } 00:13:33.341 ] 00:13:33.341 }' 00:13:33.341 02:19:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:13:33.341 02:19:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # qd_sampling_period=10 00:13:33.341 02:19:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:13:33.341 02:19:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:13:33.341 02:19:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:13:33.341 02:19:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.341 02:19:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:33.341 00:13:33.341 Latency(us) 00:13:33.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.341 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:33.341 Malloc_QD : 1.98 50619.15 197.73 0.00 0.00 5045.07 1374.83 5584.81 00:13:33.341 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:33.341 Malloc_QD : 1.98 40639.24 158.75 0.00 0.00 6282.81 1210.99 7522.39 00:13:33.341 =================================================================================================================== 00:13:33.341 Total : 91258.39 356.48 0.00 0.00 5596.54 1210.99 7522.39 00:13:33.341 0 00:13:33.341 02:19:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.341 02:19:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # killprocess 1887075 00:13:33.341 02:19:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@948 -- # '[' -z 1887075 ']' 00:13:33.341 02:19:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # kill -0 1887075 00:13:33.341 02:19:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # uname 00:13:33.342 02:19:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:33.342 02:19:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1887075 00:13:33.601 02:19:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:33.601 02:19:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:33.601 02:19:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1887075' 00:13:33.601 killing process with pid 1887075 00:13:33.601 02:19:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@967 -- # kill 1887075 00:13:33.601 Received shutdown signal, test time was about 2.061062 seconds 00:13:33.601 00:13:33.601 Latency(us) 00:13:33.601 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.601 =================================================================================================================== 00:13:33.601 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:33.601 02:19:23 
blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@972 -- # wait 1887075 00:13:33.601 02:19:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:13:33.601 00:13:33.601 real 0m3.452s 00:13:33.601 user 0m6.840s 00:13:33.601 sys 0m0.461s 00:13:33.601 02:19:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:33.601 02:19:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:33.601 ************************************ 00:13:33.601 END TEST bdev_qd_sampling 00:13:33.601 ************************************ 00:13:33.601 02:19:24 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:33.601 02:19:24 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:13:33.601 02:19:24 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:33.601 02:19:24 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:33.601 02:19:24 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:33.860 ************************************ 00:13:33.860 START TEST bdev_error 00:13:33.860 ************************************ 00:13:33.860 02:19:24 blockdev_general.bdev_error -- common/autotest_common.sh@1123 -- # error_test_suite '' 00:13:33.860 02:19:24 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:13:33.860 02:19:24 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:13:33.860 02:19:24 blockdev_general.bdev_error -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:13:33.860 02:19:24 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # ERR_PID=1887535 00:13:33.860 02:19:24 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 1887535' 00:13:33.860 Process error testing pid: 1887535 00:13:33.860 02:19:24 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:13:33.860 02:19:24 blockdev_general.bdev_error -- bdev/blockdev.sh@474 -- # waitforlisten 1887535 00:13:33.860 02:19:24 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 1887535 ']' 00:13:33.860 02:19:24 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.860 02:19:24 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:33.860 02:19:24 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.860 02:19:24 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:33.860 02:19:24 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:33.860 [2024-07-11 02:19:24.099943] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
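[Note] The bdev_error suite starting here drives a second bdevperf instance (launched with -z, so it waits for RPC commands) and stacks an error-injection bdev on top of a malloc bdev. The trace that follows issues exactly this RPC sequence (rpc_cmd is the autotest wrapper around scripts/rpc.py against the app's RPC socket; the commands and arguments below are copied from the trace):

    rpc_cmd bdev_malloc_create -b Dev_1 128 512                 # 128 MB of 512 B blocks => num_blocks 262144
    rpc_cmd bdev_error_create Dev_1                             # registers EE_Dev_1 on top of Dev_1
    rpc_cmd bdev_malloc_create -b Dev_2 128 512
    rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5   # fail the next 5 I/Os of any type

With continue-on-error configured, bdevperf should absorb the five injected failures on EE_Dev_1 and keep running against Dev_2, which is what the 5-second results table further down confirms.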
00:13:33.860 [2024-07-11 02:19:24.100006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1887535 ] 00:13:33.860 [2024-07-11 02:19:24.243556] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.119 [2024-07-11 02:19:24.301709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.058 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:35.058 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:13:35.058 02:19:25 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:35.058 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.058 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:35.058 Dev_1 00:13:35.058 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.058 02:19:25 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:13:35.058 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:13:35.058 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:35.058 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:13:35.058 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:35.058 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:35.058 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:35.058 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.058 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:35.058 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.058 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:35.058 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.058 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:35.058 [ 00:13:35.058 { 00:13:35.058 "name": "Dev_1", 00:13:35.058 "aliases": [ 00:13:35.058 "7901ce90-0934-474b-8c66-160d5d56abf9" 00:13:35.058 ], 00:13:35.058 "product_name": "Malloc disk", 00:13:35.058 "block_size": 512, 00:13:35.058 "num_blocks": 262144, 00:13:35.058 "uuid": "7901ce90-0934-474b-8c66-160d5d56abf9", 00:13:35.058 "assigned_rate_limits": { 00:13:35.058 "rw_ios_per_sec": 0, 00:13:35.058 "rw_mbytes_per_sec": 0, 00:13:35.058 "r_mbytes_per_sec": 0, 00:13:35.058 "w_mbytes_per_sec": 0 00:13:35.058 }, 00:13:35.058 "claimed": false, 00:13:35.058 "zoned": false, 00:13:35.058 "supported_io_types": { 00:13:35.058 "read": true, 00:13:35.058 "write": true, 00:13:35.058 "unmap": true, 00:13:35.058 "flush": true, 00:13:35.058 "reset": true, 00:13:35.058 "nvme_admin": false, 00:13:35.058 "nvme_io": false, 00:13:35.058 "nvme_io_md": false, 00:13:35.058 "write_zeroes": true, 00:13:35.058 "zcopy": true, 00:13:35.058 "get_zone_info": false, 00:13:35.058 "zone_management": false, 00:13:35.058 "zone_append": false, 00:13:35.058 
"compare": false, 00:13:35.058 "compare_and_write": false, 00:13:35.058 "abort": true, 00:13:35.058 "seek_hole": false, 00:13:35.058 "seek_data": false, 00:13:35.058 "copy": true, 00:13:35.058 "nvme_iov_md": false 00:13:35.058 }, 00:13:35.058 "memory_domains": [ 00:13:35.058 { 00:13:35.058 "dma_device_id": "system", 00:13:35.058 "dma_device_type": 1 00:13:35.058 }, 00:13:35.058 { 00:13:35.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.058 "dma_device_type": 2 00:13:35.058 } 00:13:35.058 ], 00:13:35.058 "driver_specific": {} 00:13:35.058 } 00:13:35.058 ] 00:13:35.058 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.058 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:13:35.059 02:19:25 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:13:35.059 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.059 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:35.059 true 00:13:35.059 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.059 02:19:25 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:35.059 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.059 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:35.059 Dev_2 00:13:35.059 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.059 02:19:25 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:13:35.059 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:13:35.059 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:35.059 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:13:35.059 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:35.059 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:35.059 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:35.059 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.059 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:35.059 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.059 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:35.059 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.059 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:35.059 [ 00:13:35.059 { 00:13:35.059 "name": "Dev_2", 00:13:35.059 "aliases": [ 00:13:35.059 "5be03d65-6e61-479e-9f37-9b959fcd89c4" 00:13:35.059 ], 00:13:35.059 "product_name": "Malloc disk", 00:13:35.059 "block_size": 512, 00:13:35.059 "num_blocks": 262144, 00:13:35.059 "uuid": "5be03d65-6e61-479e-9f37-9b959fcd89c4", 00:13:35.059 "assigned_rate_limits": { 00:13:35.059 "rw_ios_per_sec": 0, 00:13:35.059 "rw_mbytes_per_sec": 0, 00:13:35.059 "r_mbytes_per_sec": 0, 00:13:35.059 "w_mbytes_per_sec": 0 00:13:35.059 }, 00:13:35.059 "claimed": false, 
00:13:35.059 "zoned": false, 00:13:35.059 "supported_io_types": { 00:13:35.059 "read": true, 00:13:35.059 "write": true, 00:13:35.059 "unmap": true, 00:13:35.059 "flush": true, 00:13:35.059 "reset": true, 00:13:35.059 "nvme_admin": false, 00:13:35.059 "nvme_io": false, 00:13:35.059 "nvme_io_md": false, 00:13:35.059 "write_zeroes": true, 00:13:35.059 "zcopy": true, 00:13:35.059 "get_zone_info": false, 00:13:35.059 "zone_management": false, 00:13:35.059 "zone_append": false, 00:13:35.059 "compare": false, 00:13:35.059 "compare_and_write": false, 00:13:35.059 "abort": true, 00:13:35.059 "seek_hole": false, 00:13:35.059 "seek_data": false, 00:13:35.059 "copy": true, 00:13:35.059 "nvme_iov_md": false 00:13:35.059 }, 00:13:35.059 "memory_domains": [ 00:13:35.059 { 00:13:35.059 "dma_device_id": "system", 00:13:35.059 "dma_device_type": 1 00:13:35.059 }, 00:13:35.059 { 00:13:35.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.059 "dma_device_type": 2 00:13:35.059 } 00:13:35.317 ], 00:13:35.317 "driver_specific": {} 00:13:35.317 } 00:13:35.317 ] 00:13:35.317 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.317 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:13:35.317 02:19:25 blockdev_general.bdev_error -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:35.317 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.317 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:35.317 02:19:25 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.318 02:19:25 blockdev_general.bdev_error -- bdev/blockdev.sh@484 -- # sleep 1 00:13:35.318 02:19:25 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:35.318 Running I/O for 5 seconds... 00:13:36.257 02:19:26 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # kill -0 1887535 00:13:36.257 02:19:26 blockdev_general.bdev_error -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. Pid: 1887535' 00:13:36.257 Process is existed as continue on error is set. 
Pid: 1887535
00:13:36.257 02:19:26 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1
00:13:36.257 02:19:26 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:36.257 02:19:26 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:13:36.257 02:19:26 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:36.257 02:19:26 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1
00:13:36.257 02:19:26 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:36.257 02:19:26 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:13:36.257 02:19:26 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:36.257 02:19:26 blockdev_general.bdev_error -- bdev/blockdev.sh@497 -- # sleep 5
00:13:36.517 Timeout while waiting for response:
00:13:36.517
00:13:36.517
00:13:40.711
00:13:40.711 Latency(us)
00:13:40.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:40.711 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:13:40.711 EE_Dev_1 : 0.78 29223.29 114.15 6.38 0.00 542.64 166.51 865.50
00:13:40.711 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:13:40.711 Dev_2 : 5.00 64259.01 251.01 0.00 0.00 244.48 97.95 30773.43
00:13:40.711 ===================================================================================================================
00:13:40.711 Total : 93482.30 365.17 6.38 0.00 264.32 97.95 30773.43
00:13:41.281 02:19:31 blockdev_general.bdev_error -- bdev/blockdev.sh@499 -- # killprocess 1887535
00:13:41.281 02:19:31 blockdev_general.bdev_error -- common/autotest_common.sh@948 -- # '[' -z 1887535 ']'
00:13:41.281 02:19:31 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # kill -0 1887535
00:13:41.281 02:19:31 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # uname
00:13:41.281 02:19:31 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:13:41.281 02:19:31 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1887535
00:13:41.281 02:19:31 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:13:41.281 02:19:31 blockdev_general.bdev_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:13:41.281 02:19:31 blockdev_general.bdev_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1887535'
00:13:41.281 killing process with pid 1887535
00:13:41.281 02:19:31 blockdev_general.bdev_error -- common/autotest_common.sh@967 -- # kill 1887535
00:13:41.281 Received shutdown signal, test time was about 5.000000 seconds
00:13:41.281
00:13:41.281 Latency(us)
00:13:41.281 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:41.281 ===================================================================================================================
00:13:41.281 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:13:41.281 02:19:31 blockdev_general.bdev_error -- common/autotest_common.sh@972 -- # wait 1887535
00:13:41.541 02:19:31 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # ERR_PID=1888591
00:13:41.541 02:19:31 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 1888591'
00:13:41.541 Process error testing pid: 1888591
00:13:41.541 02:19:31
blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:13:41.541 02:19:31 blockdev_general.bdev_error -- bdev/blockdev.sh@505 -- # waitforlisten 1888591 00:13:41.541 02:19:31 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 1888591 ']' 00:13:41.541 02:19:31 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.541 02:19:31 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:41.541 02:19:31 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.541 02:19:31 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:41.541 02:19:31 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:41.801 [2024-07-11 02:19:31.981278] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:41.801 [2024-07-11 02:19:31.981357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1888591 ] 00:13:41.801 [2024-07-11 02:19:32.129892] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.801 [2024-07-11 02:19:32.196384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.740 02:19:32 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:42.740 02:19:32 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:13:42.740 02:19:32 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:42.740 02:19:32 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.740 02:19:32 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:42.740 Dev_1 00:13:42.740 02:19:32 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.740 02:19:32 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:13:42.740 02:19:32 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:13:42.740 02:19:32 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:42.740 02:19:32 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:13:42.740 02:19:32 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:42.740 02:19:32 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:42.740 02:19:32 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:42.740 02:19:32 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.740 02:19:32 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:42.740 02:19:32 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.740 02:19:32 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:42.740 02:19:32 
blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.740 02:19:32 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:42.740 [ 00:13:42.740 { 00:13:42.740 "name": "Dev_1", 00:13:42.740 "aliases": [ 00:13:42.740 "dee57ad1-1b4f-4c7c-b750-df173c80c049" 00:13:42.740 ], 00:13:42.740 "product_name": "Malloc disk", 00:13:42.740 "block_size": 512, 00:13:42.740 "num_blocks": 262144, 00:13:42.740 "uuid": "dee57ad1-1b4f-4c7c-b750-df173c80c049", 00:13:42.740 "assigned_rate_limits": { 00:13:42.740 "rw_ios_per_sec": 0, 00:13:42.740 "rw_mbytes_per_sec": 0, 00:13:42.740 "r_mbytes_per_sec": 0, 00:13:42.740 "w_mbytes_per_sec": 0 00:13:42.740 }, 00:13:42.740 "claimed": false, 00:13:42.740 "zoned": false, 00:13:42.740 "supported_io_types": { 00:13:42.740 "read": true, 00:13:42.740 "write": true, 00:13:42.740 "unmap": true, 00:13:42.740 "flush": true, 00:13:42.740 "reset": true, 00:13:42.740 "nvme_admin": false, 00:13:42.740 "nvme_io": false, 00:13:42.740 "nvme_io_md": false, 00:13:42.740 "write_zeroes": true, 00:13:42.740 "zcopy": true, 00:13:42.740 "get_zone_info": false, 00:13:42.740 "zone_management": false, 00:13:42.740 "zone_append": false, 00:13:42.740 "compare": false, 00:13:42.740 "compare_and_write": false, 00:13:42.740 "abort": true, 00:13:42.740 "seek_hole": false, 00:13:42.740 "seek_data": false, 00:13:42.740 "copy": true, 00:13:42.740 "nvme_iov_md": false 00:13:42.740 }, 00:13:42.740 "memory_domains": [ 00:13:42.740 { 00:13:42.740 "dma_device_id": "system", 00:13:42.740 "dma_device_type": 1 00:13:42.740 }, 00:13:42.740 { 00:13:42.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.740 "dma_device_type": 2 00:13:42.740 } 00:13:42.740 ], 00:13:42.740 "driver_specific": {} 00:13:42.740 } 00:13:42.740 ] 00:13:42.740 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.740 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:13:42.740 02:19:33 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:13:42.740 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.740 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:42.740 true 00:13:42.740 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.740 02:19:33 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:42.740 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.740 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:42.740 Dev_2 00:13:42.740 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.740 02:19:33 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:13:42.740 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:13:42.740 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:42.740 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:13:42.740 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:42.740 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:42.740 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@902 
-- # rpc_cmd bdev_wait_for_examine 00:13:42.740 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.740 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:42.740 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.740 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:42.740 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.740 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:42.740 [ 00:13:42.740 { 00:13:42.740 "name": "Dev_2", 00:13:42.740 "aliases": [ 00:13:42.740 "66925d51-b306-4833-86a5-ef7c45099a7e" 00:13:42.740 ], 00:13:42.740 "product_name": "Malloc disk", 00:13:42.740 "block_size": 512, 00:13:42.740 "num_blocks": 262144, 00:13:42.741 "uuid": "66925d51-b306-4833-86a5-ef7c45099a7e", 00:13:42.741 "assigned_rate_limits": { 00:13:42.741 "rw_ios_per_sec": 0, 00:13:42.741 "rw_mbytes_per_sec": 0, 00:13:42.741 "r_mbytes_per_sec": 0, 00:13:42.741 "w_mbytes_per_sec": 0 00:13:42.741 }, 00:13:42.741 "claimed": false, 00:13:42.741 "zoned": false, 00:13:42.741 "supported_io_types": { 00:13:42.741 "read": true, 00:13:42.741 "write": true, 00:13:42.741 "unmap": true, 00:13:42.741 "flush": true, 00:13:42.741 "reset": true, 00:13:42.741 "nvme_admin": false, 00:13:42.741 "nvme_io": false, 00:13:42.741 "nvme_io_md": false, 00:13:42.741 "write_zeroes": true, 00:13:42.741 "zcopy": true, 00:13:42.741 "get_zone_info": false, 00:13:42.741 "zone_management": false, 00:13:42.741 "zone_append": false, 00:13:42.741 "compare": false, 00:13:42.741 "compare_and_write": false, 00:13:42.741 "abort": true, 00:13:42.741 "seek_hole": false, 00:13:42.741 "seek_data": false, 00:13:42.741 "copy": true, 00:13:42.741 "nvme_iov_md": false 00:13:42.741 }, 00:13:42.741 "memory_domains": [ 00:13:42.741 { 00:13:42.741 "dma_device_id": "system", 00:13:42.741 "dma_device_type": 1 00:13:42.741 }, 00:13:42.741 { 00:13:42.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.741 "dma_device_type": 2 00:13:42.741 } 00:13:42.741 ], 00:13:42.741 "driver_specific": {} 00:13:42.741 } 00:13:42.741 ] 00:13:42.741 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.741 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:13:42.741 02:19:33 blockdev_general.bdev_error -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:42.741 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.741 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:42.741 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.741 02:19:33 blockdev_general.bdev_error -- bdev/blockdev.sh@515 -- # NOT wait 1888591 00:13:42.741 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:13:42.741 02:19:33 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:42.741 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1888591 00:13:42.741 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:13:42.741 02:19:33 blockdev_general.bdev_error -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:42.741 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:13:42.741 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:42.741 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 1888591 00:13:43.001 Running I/O for 5 seconds... 00:13:43.001 task offset: 66608 on job bdev=EE_Dev_1 fails 00:13:43.001 00:13:43.001 Latency(us) 00:13:43.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.001 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:43.001 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:13:43.001 EE_Dev_1 : 0.00 23655.91 92.41 5376.34 0.00 457.62 165.62 815.64 00:13:43.001 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:43.001 Dev_2 : 0.00 14427.41 56.36 0.00 0.00 823.87 158.50 1531.55 00:13:43.001 =================================================================================================================== 00:13:43.001 Total : 38083.33 148.76 5376.34 0.00 656.27 158.50 1531.55 00:13:43.001 [2024-07-11 02:19:33.337010] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:43.001 request: 00:13:43.001 { 00:13:43.001 "method": "perform_tests", 00:13:43.001 "req_id": 1 00:13:43.001 } 00:13:43.001 Got JSON-RPC error response 00:13:43.001 response: 00:13:43.001 { 00:13:43.001 "code": -32603, 00:13:43.001 "message": "bdevperf failed with error Operation not permitted" 00:13:43.001 } 00:13:43.574 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # es=255 00:13:43.574 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:43.574 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:13:43.574 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # case "$es" in 00:13:43.574 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:13:43.574 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:43.574 00:13:43.574 real 0m9.662s 00:13:43.574 user 0m10.420s 00:13:43.574 sys 0m1.063s 00:13:43.574 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:43.574 02:19:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:43.574 ************************************ 00:13:43.574 END TEST bdev_error 00:13:43.574 ************************************ 00:13:43.574 02:19:33 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:43.574 02:19:33 blockdev_general -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:13:43.574 02:19:33 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:43.574 02:19:33 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:43.574 02:19:33 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:43.574 ************************************ 00:13:43.574 START TEST bdev_stat 00:13:43.574 ************************************ 00:13:43.574 02:19:33 blockdev_general.bdev_stat -- common/autotest_common.sh@1123 -- # stat_test_suite '' 00:13:43.574 02:19:33 blockdev_general.bdev_stat -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:13:43.574 02:19:33 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # STAT_PID=1888798 00:13:43.574 
02:19:33 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:13:43.574 02:19:33 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 1888798' 00:13:43.574 Process Bdev IO statistics testing pid: 1888798 00:13:43.574 02:19:33 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:13:43.574 02:19:33 blockdev_general.bdev_stat -- bdev/blockdev.sh@599 -- # waitforlisten 1888798 00:13:43.574 02:19:33 blockdev_general.bdev_stat -- common/autotest_common.sh@829 -- # '[' -z 1888798 ']' 00:13:43.574 02:19:33 blockdev_general.bdev_stat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.574 02:19:33 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:43.574 02:19:33 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.574 02:19:33 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:43.574 02:19:33 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:43.574 [2024-07-11 02:19:33.851308] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:43.574 [2024-07-11 02:19:33.851376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1888798 ] 00:13:43.574 [2024-07-11 02:19:33.993113] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:43.835 [2024-07-11 02:19:34.046779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.835 [2024-07-11 02:19:34.046784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.402 02:19:34 blockdev_general.bdev_stat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:44.402 02:19:34 blockdev_general.bdev_stat -- common/autotest_common.sh@862 -- # return 0 00:13:44.402 02:19:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:13:44.402 02:19:34 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.402 02:19:34 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:44.662 Malloc_STAT 00:13:44.662 02:19:34 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.662 02:19:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:13:44.662 02:19:34 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_STAT 00:13:44.662 02:19:34 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:44.662 02:19:34 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local i 00:13:44.662 02:19:34 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:44.662 02:19:34 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:44.662 02:19:34 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 
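[Note] Every *_create in these suites is followed by the waitforbdev helper whose xtrace lines (autotest_common.sh@897 through @905) repeat throughout this log: it defaults the timeout to 2000 ms, waits for bdev examination to finish, then looks the bdev up by name. A minimal sketch reconstructed from those trace lines, not copied from autotest_common.sh, so details beyond what the trace shows are assumptions:

    waitforbdev() {
        local bdev_name=$1
        local bdev_timeout=${2:-2000}       # ms; the default visible at @900
        rpc_cmd bdev_wait_for_examine       # block until examine callbacks complete
        # returns 0 once the bdev is registered, non-zero after the timeout
        rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" > /dev/null
    }

The bdev_get_bdevs call is also why each creation in the trace is immediately followed by a full JSON dump of the new bdev, like the Malloc_STAT descriptor just below.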
00:13:44.662 02:19:34 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.662 02:19:34 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:44.662 02:19:34 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.662 02:19:34 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:13:44.662 02:19:34 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.662 02:19:34 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:44.662 [ 00:13:44.662 { 00:13:44.662 "name": "Malloc_STAT", 00:13:44.662 "aliases": [ 00:13:44.662 "a50819e7-78a4-40ec-a231-695526e0bff6" 00:13:44.662 ], 00:13:44.662 "product_name": "Malloc disk", 00:13:44.662 "block_size": 512, 00:13:44.662 "num_blocks": 262144, 00:13:44.662 "uuid": "a50819e7-78a4-40ec-a231-695526e0bff6", 00:13:44.662 "assigned_rate_limits": { 00:13:44.662 "rw_ios_per_sec": 0, 00:13:44.662 "rw_mbytes_per_sec": 0, 00:13:44.662 "r_mbytes_per_sec": 0, 00:13:44.662 "w_mbytes_per_sec": 0 00:13:44.662 }, 00:13:44.662 "claimed": false, 00:13:44.662 "zoned": false, 00:13:44.662 "supported_io_types": { 00:13:44.662 "read": true, 00:13:44.662 "write": true, 00:13:44.662 "unmap": true, 00:13:44.662 "flush": true, 00:13:44.662 "reset": true, 00:13:44.662 "nvme_admin": false, 00:13:44.662 "nvme_io": false, 00:13:44.662 "nvme_io_md": false, 00:13:44.662 "write_zeroes": true, 00:13:44.662 "zcopy": true, 00:13:44.662 "get_zone_info": false, 00:13:44.662 "zone_management": false, 00:13:44.662 "zone_append": false, 00:13:44.662 "compare": false, 00:13:44.662 "compare_and_write": false, 00:13:44.662 "abort": true, 00:13:44.662 "seek_hole": false, 00:13:44.662 "seek_data": false, 00:13:44.662 "copy": true, 00:13:44.662 "nvme_iov_md": false 00:13:44.662 }, 00:13:44.662 "memory_domains": [ 00:13:44.662 { 00:13:44.662 "dma_device_id": "system", 00:13:44.662 "dma_device_type": 1 00:13:44.662 }, 00:13:44.662 { 00:13:44.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.662 "dma_device_type": 2 00:13:44.662 } 00:13:44.662 ], 00:13:44.662 "driver_specific": {} 00:13:44.662 } 00:13:44.662 ] 00:13:44.662 02:19:34 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.662 02:19:34 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # return 0 00:13:44.662 02:19:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # sleep 2 00:13:44.662 02:19:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:44.662 Running I/O for 10 seconds... 
00:13:46.570 02:19:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:13:46.570 02:19:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:13:46.570 02:19:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local iostats 00:13:46.570 02:19:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count1 00:13:46.570 02:19:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local io_count2 00:13:46.570 02:19:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:13:46.570 02:19:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:13:46.570 02:19:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:13:46.570 02:19:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:13:46.570 02:19:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:46.570 02:19:36 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.570 02:19:36 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:46.570 02:19:36 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.570 02:19:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # iostats='{ 00:13:46.570 "tick_rate": 2300000000, 00:13:46.570 "ticks": 10252737278984202, 00:13:46.570 "bdevs": [ 00:13:46.570 { 00:13:46.570 "name": "Malloc_STAT", 00:13:46.570 "bytes_read": 712028672, 00:13:46.570 "num_read_ops": 173828, 00:13:46.570 "bytes_written": 0, 00:13:46.570 "num_write_ops": 0, 00:13:46.570 "bytes_unmapped": 0, 00:13:46.570 "num_unmap_ops": 0, 00:13:46.570 "bytes_copied": 0, 00:13:46.570 "num_copy_ops": 0, 00:13:46.570 "read_latency_ticks": 2228742430814, 00:13:46.570 "max_read_latency_ticks": 17714866, 00:13:46.570 "min_read_latency_ticks": 256012, 00:13:46.570 "write_latency_ticks": 0, 00:13:46.570 "max_write_latency_ticks": 0, 00:13:46.570 "min_write_latency_ticks": 0, 00:13:46.570 "unmap_latency_ticks": 0, 00:13:46.570 "max_unmap_latency_ticks": 0, 00:13:46.570 "min_unmap_latency_ticks": 0, 00:13:46.570 "copy_latency_ticks": 0, 00:13:46.570 "max_copy_latency_ticks": 0, 00:13:46.570 "min_copy_latency_ticks": 0, 00:13:46.570 "io_error": {} 00:13:46.570 } 00:13:46.570 ] 00:13:46.570 }' 00:13:46.570 02:19:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:13:46.570 02:19:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # io_count1=173828 00:13:46.570 02:19:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:13:46.570 02:19:36 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.570 02:19:36 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:46.570 02:19:36 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.570 02:19:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:13:46.570 "tick_rate": 2300000000, 00:13:46.570 "ticks": 10252737441466004, 00:13:46.570 "name": "Malloc_STAT", 00:13:46.570 "channels": [ 00:13:46.570 { 00:13:46.570 "thread_id": 2, 00:13:46.570 "bytes_read": 408944640, 00:13:46.570 "num_read_ops": 99840, 00:13:46.570 "bytes_written": 0, 00:13:46.570 "num_write_ops": 0, 00:13:46.570 "bytes_unmapped": 0, 00:13:46.570 "num_unmap_ops": 0, 
00:13:46.570 "bytes_copied": 0, 00:13:46.570 "num_copy_ops": 0, 00:13:46.570 "read_latency_ticks": 1155522149560, 00:13:46.570 "max_read_latency_ticks": 12714928, 00:13:46.570 "min_read_latency_ticks": 8186494, 00:13:46.570 "write_latency_ticks": 0, 00:13:46.570 "max_write_latency_ticks": 0, 00:13:46.570 "min_write_latency_ticks": 0, 00:13:46.570 "unmap_latency_ticks": 0, 00:13:46.570 "max_unmap_latency_ticks": 0, 00:13:46.570 "min_unmap_latency_ticks": 0, 00:13:46.570 "copy_latency_ticks": 0, 00:13:46.570 "max_copy_latency_ticks": 0, 00:13:46.570 "min_copy_latency_ticks": 0 00:13:46.570 }, 00:13:46.570 { 00:13:46.570 "thread_id": 3, 00:13:46.570 "bytes_read": 329252864, 00:13:46.570 "num_read_ops": 80384, 00:13:46.570 "bytes_written": 0, 00:13:46.570 "num_write_ops": 0, 00:13:46.570 "bytes_unmapped": 0, 00:13:46.570 "num_unmap_ops": 0, 00:13:46.570 "bytes_copied": 0, 00:13:46.570 "num_copy_ops": 0, 00:13:46.570 "read_latency_ticks": 1155385015684, 00:13:46.570 "max_read_latency_ticks": 17714866, 00:13:46.570 "min_read_latency_ticks": 9321222, 00:13:46.570 "write_latency_ticks": 0, 00:13:46.570 "max_write_latency_ticks": 0, 00:13:46.570 "min_write_latency_ticks": 0, 00:13:46.570 "unmap_latency_ticks": 0, 00:13:46.570 "max_unmap_latency_ticks": 0, 00:13:46.570 "min_unmap_latency_ticks": 0, 00:13:46.570 "copy_latency_ticks": 0, 00:13:46.570 "max_copy_latency_ticks": 0, 00:13:46.570 "min_copy_latency_ticks": 0 00:13:46.570 } 00:13:46.570 ] 00:13:46.570 }' 00:13:46.570 02:19:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:13:46.830 02:19:37 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel1=99840 00:13:46.830 02:19:37 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=99840 00:13:46.830 02:19:37 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:13:46.830 02:19:37 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel2=80384 00:13:46.830 02:19:37 blockdev_general.bdev_stat -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=180224 00:13:46.830 02:19:37 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:46.830 02:19:37 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.830 02:19:37 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:46.830 02:19:37 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.830 02:19:37 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # iostats='{ 00:13:46.830 "tick_rate": 2300000000, 00:13:46.830 "ticks": 10252737721698166, 00:13:46.830 "bdevs": [ 00:13:46.830 { 00:13:46.830 "name": "Malloc_STAT", 00:13:46.830 "bytes_read": 784380416, 00:13:46.830 "num_read_ops": 191492, 00:13:46.830 "bytes_written": 0, 00:13:46.830 "num_write_ops": 0, 00:13:46.830 "bytes_unmapped": 0, 00:13:46.830 "num_unmap_ops": 0, 00:13:46.830 "bytes_copied": 0, 00:13:46.830 "num_copy_ops": 0, 00:13:46.830 "read_latency_ticks": 2456114869910, 00:13:46.830 "max_read_latency_ticks": 17714866, 00:13:46.830 "min_read_latency_ticks": 256012, 00:13:46.830 "write_latency_ticks": 0, 00:13:46.830 "max_write_latency_ticks": 0, 00:13:46.830 "min_write_latency_ticks": 0, 00:13:46.830 "unmap_latency_ticks": 0, 00:13:46.830 "max_unmap_latency_ticks": 0, 00:13:46.830 "min_unmap_latency_ticks": 0, 00:13:46.830 "copy_latency_ticks": 0, 00:13:46.830 "max_copy_latency_ticks": 0, 00:13:46.830 
"min_copy_latency_ticks": 0, 00:13:46.830 "io_error": {} 00:13:46.830 } 00:13:46.830 ] 00:13:46.830 }' 00:13:46.830 02:19:37 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:13:46.830 02:19:37 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # io_count2=191492 00:13:46.830 02:19:37 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 180224 -lt 173828 ']' 00:13:46.830 02:19:37 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 180224 -gt 191492 ']' 00:13:46.830 02:19:37 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:13:46.830 02:19:37 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.831 02:19:37 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:46.831 00:13:46.831 Latency(us) 00:13:46.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.831 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:46.831 Malloc_STAT : 2.16 50768.27 198.31 0.00 0.00 5030.69 1417.57 5556.31 00:13:46.831 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:46.831 Malloc_STAT : 2.17 40910.87 159.81 0.00 0.00 6241.93 1210.99 7750.34 00:13:46.831 =================================================================================================================== 00:13:46.831 Total : 91679.14 358.12 0.00 0.00 5571.45 1210.99 7750.34 00:13:46.831 0 00:13:46.831 02:19:37 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.831 02:19:37 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # killprocess 1888798 00:13:46.831 02:19:37 blockdev_general.bdev_stat -- common/autotest_common.sh@948 -- # '[' -z 1888798 ']' 00:13:46.831 02:19:37 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # kill -0 1888798 00:13:46.831 02:19:37 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # uname 00:13:46.831 02:19:37 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:46.831 02:19:37 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1888798 00:13:46.831 02:19:37 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:46.831 02:19:37 blockdev_general.bdev_stat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:46.831 02:19:37 blockdev_general.bdev_stat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1888798' 00:13:46.831 killing process with pid 1888798 00:13:46.831 02:19:37 blockdev_general.bdev_stat -- common/autotest_common.sh@967 -- # kill 1888798 00:13:46.831 Received shutdown signal, test time was about 2.246757 seconds 00:13:46.831 00:13:46.831 Latency(us) 00:13:46.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.831 =================================================================================================================== 00:13:46.831 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:46.831 02:19:37 blockdev_general.bdev_stat -- common/autotest_common.sh@972 -- # wait 1888798 00:13:47.089 02:19:37 blockdev_general.bdev_stat -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:13:47.089 00:13:47.089 real 0m3.652s 00:13:47.089 user 0m7.339s 00:13:47.089 sys 0m0.498s 00:13:47.089 02:19:37 blockdev_general.bdev_stat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:47.089 02:19:37 
blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:47.089 ************************************ 00:13:47.089 END TEST bdev_stat 00:13:47.089 ************************************ 00:13:47.089 02:19:37 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:47.089 02:19:37 blockdev_general -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:13:47.089 02:19:37 blockdev_general -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:13:47.089 02:19:37 blockdev_general -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:13:47.089 02:19:37 blockdev_general -- bdev/blockdev.sh@811 -- # cleanup 00:13:47.090 02:19:37 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:13:47.090 02:19:37 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:13:47.090 02:19:37 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:13:47.090 02:19:37 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:13:47.090 02:19:37 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:13:47.090 02:19:37 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:13:47.090 00:13:47.090 real 1m58.848s 00:13:47.090 user 7m14.862s 00:13:47.090 sys 0m24.061s 00:13:47.090 02:19:37 blockdev_general -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:47.090 02:19:37 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:47.090 ************************************ 00:13:47.090 END TEST blockdev_general 00:13:47.090 ************************************ 00:13:47.349 02:19:37 -- common/autotest_common.sh@1142 -- # return 0 00:13:47.349 02:19:37 -- spdk/autotest.sh@190 -- # run_test bdev_raid /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:13:47.349 02:19:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:47.349 02:19:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:47.349 02:19:37 -- common/autotest_common.sh@10 -- # set +x 00:13:47.349 ************************************ 00:13:47.349 START TEST bdev_raid 00:13:47.349 ************************************ 00:13:47.349 02:19:37 bdev_raid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:13:47.349 * Looking for test storage... 
00:13:47.349 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:13:47.349 02:19:37 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:13:47.349 02:19:37 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:13:47.349 02:19:37 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:13:47.349 02:19:37 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:13:47.349 02:19:37 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:13:47.349 02:19:37 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:13:47.349 02:19:37 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:13:47.349 02:19:37 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' Linux = Linux ']' 00:13:47.349 02:19:37 bdev_raid -- bdev/bdev_raid.sh@856 -- # modprobe -n nbd 00:13:47.349 02:19:37 bdev_raid -- bdev/bdev_raid.sh@857 -- # has_nbd=true 00:13:47.349 02:19:37 bdev_raid -- bdev/bdev_raid.sh@858 -- # modprobe nbd 00:13:47.349 02:19:37 bdev_raid -- bdev/bdev_raid.sh@859 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:13:47.349 02:19:37 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:47.349 02:19:37 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:47.349 02:19:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:47.608 ************************************ 00:13:47.608 START TEST raid_function_test_raid0 00:13:47.608 ************************************ 00:13:47.608 02:19:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1123 -- # raid_function_test raid0 00:13:47.608 02:19:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:13:47.608 02:19:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:13:47.608 02:19:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:13:47.608 02:19:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=1889415 00:13:47.608 02:19:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 1889415' 00:13:47.608 Process raid pid: 1889415 00:13:47.608 02:19:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:47.608 02:19:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 1889415 /var/tmp/spdk-raid.sock 00:13:47.608 02:19:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@829 -- # '[' -z 1889415 ']' 00:13:47.608 02:19:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:47.608 02:19:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:47.608 02:19:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:47.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
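[Note] raid_function_test_raid0 boots bdev_svc on its own RPC socket (rpc_py is defined above as scripts/rpc.py -s /var/tmp/spdk-raid.sock), assembles a two-member raid0 bdev from a generated rpcs.txt batch that the log does not echo, and then exposes it over NBD. A plausible reconstruction of that batch under stated assumptions: the RPC names and member sizes are inferred, with the 'blockcnt 131072, blocklen 512' debug line below implying two 32 MB members; the strip size is assumed:

    rpc_py="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc_py bdev_malloc_create 32 512 -b Base_1    # 32 MB => 65536 blocks each (assumed)
    $rpc_py bdev_malloc_create 32 512 -b Base_2
    $rpc_py bdev_raid_create -n raid -r raid0 -z 64 -b 'Base_1 Base_2'   # -z: strip size in KB (assumed)
    $rpc_py nbd_start_disk raid /dev/nbd0          # as the trace does next

The Base_1/Base_2 claim messages and the raid bdev registration debug lines in the trace below are the observable result of that batch.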
00:13:47.608 02:19:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:47.608 02:19:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:47.608 [2024-07-11 02:19:37.836207] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:47.608 [2024-07-11 02:19:37.836275] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.608 [2024-07-11 02:19:37.963792] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.608 [2024-07-11 02:19:38.015455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.867 [2024-07-11 02:19:38.075377] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:47.867 [2024-07-11 02:19:38.075401] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:48.126 02:19:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:48.126 02:19:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # return 0 00:13:48.126 02:19:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:13:48.126 02:19:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:13:48.126 02:19:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:13:48.126 02:19:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:13:48.126 02:19:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:13:48.385 [2024-07-11 02:19:38.566203] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:48.385 [2024-07-11 02:19:38.567608] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:48.385 [2024-07-11 02:19:38.567663] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x204b320 00:13:48.385 [2024-07-11 02:19:38.567673] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:48.385 [2024-07-11 02:19:38.567857] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x204b740 00:13:48.385 [2024-07-11 02:19:38.567984] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x204b320 00:13:48.385 [2024-07-11 02:19:38.567994] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x204b320 00:13:48.385 [2024-07-11 02:19:38.568090] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.385 Base_1 00:13:48.385 Base_2 00:13:48.385 02:19:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:13:48.385 02:19:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:48.385 02:19:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:13:48.656 02:19:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:13:48.656 02:19:38 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:13:48.656 02:19:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:13:48.656 02:19:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:48.656 02:19:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:13:48.656 02:19:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:48.656 02:19:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:48.656 02:19:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:48.656 02:19:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:13:48.656 02:19:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:48.656 02:19:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:48.656 02:19:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:13:48.930 [2024-07-11 02:19:39.328325] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x204b740 00:13:48.930 /dev/nbd0 00:13:49.188 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:49.188 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:49.188 02:19:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:13:49.188 02:19:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@867 -- # local i 00:13:49.188 02:19:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:49.188 02:19:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:49.188 02:19:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:13:49.188 02:19:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # break 00:13:49.188 02:19:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:49.188 02:19:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:49.188 02:19:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:49.188 1+0 records in 00:13:49.188 1+0 records out 00:13:49.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268043 s, 15.3 MB/s 00:13:49.188 02:19:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:13:49.188 02:19:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # size=4096 00:13:49.188 02:19:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:13:49.188 02:19:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:49.188 02:19:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # return 0 00:13:49.188 02:19:39 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:49.188 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:49.188 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:49.188 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:49.188 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:49.755 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:49.755 { 00:13:49.755 "nbd_device": "/dev/nbd0", 00:13:49.755 "bdev_name": "raid" 00:13:49.755 } 00:13:49.755 ]' 00:13:49.755 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:49.755 { 00:13:49.755 "nbd_device": "/dev/nbd0", 00:13:49.755 "bdev_name": "raid" 00:13:49.755 } 00:13:49.755 ]' 00:13:49.755 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:49.755 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:49.755 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:49.755 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:49.755 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:13:49.755 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:13:49.755 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:13:49.755 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:13:49.755 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:13:49.755 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:13:49.755 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:13:49.755 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:49.755 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:13:49.755 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:49.756 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:13:49.756 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:13:49.756 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:13:49.756 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:13:49.756 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:13:49.756 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:13:49.756 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:13:49.756 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:13:49.756 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:13:49.756 
02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:13:49.756 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:13:49.756 02:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:13:49.756 4096+0 records in 00:13:49.756 4096+0 records out 00:13:49.756 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0312048 s, 67.2 MB/s 00:13:49.756 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:50.014 4096+0 records in 00:13:50.014 4096+0 records out 00:13:50.014 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.304157 s, 6.9 MB/s 00:13:50.014 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:13:50.014 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:50.014 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:13:50.014 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:50.014 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:13:50.014 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:13:50.014 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:50.014 128+0 records in 00:13:50.014 128+0 records out 00:13:50.014 65536 bytes (66 kB, 64 KiB) copied, 0.000833771 s, 78.6 MB/s 00:13:50.014 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:50.014 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:50.014 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:50.014 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:50.014 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:50.014 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:13:50.014 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:13:50.014 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:50.014 2035+0 records in 00:13:50.014 2035+0 records out 00:13:50.014 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0101754 s, 102 MB/s 00:13:50.014 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:50.014 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:50.014 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:50.014 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:50.014 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:50.014 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:13:50.014 02:19:40 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:13:50.014 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:50.014 456+0 records in 00:13:50.014 456+0 records out 00:13:50.014 233472 bytes (233 kB, 228 KiB) copied, 0.00270993 s, 86.2 MB/s 00:13:50.014 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:50.014 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:50.014 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:50.014 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:50.015 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:50.015 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:13:50.015 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:13:50.015 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:50.015 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:50.015 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:50.015 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:13:50.015 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:50.015 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:13:50.274 [2024-07-11 02:19:40.691561] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.533 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:50.533 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:50.533 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:50.533 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:50.533 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:50.533 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:50.533 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:13:50.533 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:13:50.533 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:50.533 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:50.533 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:50.793 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:50.793 02:19:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:50.793 02:19:40 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:50.793 02:19:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:50.793 02:19:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:50.793 02:19:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:50.793 02:19:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:13:50.793 02:19:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:13:50.793 02:19:41 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:50.793 02:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:13:50.793 02:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:13:50.793 02:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 1889415 00:13:50.793 02:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@948 -- # '[' -z 1889415 ']' 00:13:50.793 02:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # kill -0 1889415 00:13:50.793 02:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # uname 00:13:50.793 02:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:50.793 02:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1889415 00:13:50.793 02:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:50.793 02:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:50.793 02:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1889415' 00:13:50.793 killing process with pid 1889415 00:13:50.793 02:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@967 -- # kill 1889415 00:13:50.793 [2024-07-11 02:19:41.075093] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:50.793 [2024-07-11 02:19:41.075160] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:50.793 [2024-07-11 02:19:41.075201] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:50.793 [2024-07-11 02:19:41.075213] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x204b320 name raid, state offline 00:13:50.793 02:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # wait 1889415 00:13:50.793 [2024-07-11 02:19:41.094142] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:51.053 02:19:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:13:51.053 00:13:51.053 real 0m3.519s 00:13:51.053 user 0m4.978s 00:13:51.053 sys 0m1.487s 00:13:51.053 02:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:51.053 02:19:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:51.053 ************************************ 00:13:51.053 END TEST raid_function_test_raid0 00:13:51.053 ************************************ 00:13:51.053 02:19:41 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:51.053 02:19:41 bdev_raid -- bdev/bdev_raid.sh@860 -- # run_test raid_function_test_concat raid_function_test concat 00:13:51.053 02:19:41 bdev_raid -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:51.053 02:19:41 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:51.053 02:19:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:51.053 ************************************ 00:13:51.053 START TEST raid_function_test_concat 00:13:51.053 ************************************ 00:13:51.053 02:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1123 -- # raid_function_test concat 00:13:51.053 02:19:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 00:13:51.053 02:19:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:13:51.053 02:19:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:13:51.053 02:19:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=1890014 00:13:51.053 02:19:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:51.053 02:19:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 1890014' 00:13:51.053 Process raid pid: 1890014 00:13:51.053 02:19:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 1890014 /var/tmp/spdk-raid.sock 00:13:51.053 02:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@829 -- # '[' -z 1890014 ']' 00:13:51.053 02:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:51.053 02:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:51.053 02:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:51.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:51.053 02:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:51.053 02:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:51.053 [2024-07-11 02:19:41.440420] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
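Before the nbd steps that follow, configure_raid_bdev (bdev_raid.sh@69 cats an rpcs.txt batch into rpc.py) creates the two base bdevs and the raid on top; the trace only prints the resulting claim messages, so the batch below is a hedged reconstruction. The shape mirrors the raid0 creation shown literally later in the trace (bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid), and the sizes match the reported blockcnt 131072 at blocklen 512, i.e. two 32 MiB bases:

```bash
# Hedged reconstruction of the rpcs.txt batch; the exact file contents
# are not printed in the trace. 32 MiB per base at 512 B blocks = 65536 blocks.
$rpc_py bdev_malloc_create 32 512 -b Base_1
$rpc_py bdev_malloc_create 32 512 -b Base_2
$rpc_py bdev_raid_create -z 64 -r concat -b 'Base_1 Base_2' -n raid
```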
00:13:51.053 [2024-07-11 02:19:41.440494] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.312 [2024-07-11 02:19:41.607467] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.312 [2024-07-11 02:19:41.679324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.572 [2024-07-11 02:19:41.748661] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.572 [2024-07-11 02:19:41.748689] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.572 02:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:51.572 02:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # return 0 00:13:51.572 02:19:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:13:51.572 02:19:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_level=concat 00:13:51.572 02:19:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:13:51.572 02:19:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:13:51.572 02:19:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:13:51.832 [2024-07-11 02:19:42.060892] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:51.832 [2024-07-11 02:19:42.062271] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:51.832 [2024-07-11 02:19:42.062327] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x284e320 00:13:51.832 [2024-07-11 02:19:42.062337] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:51.832 [2024-07-11 02:19:42.062516] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x284e740 00:13:51.832 [2024-07-11 02:19:42.062628] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x284e320 00:13:51.832 [2024-07-11 02:19:42.062638] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x284e320 00:13:51.832 [2024-07-11 02:19:42.062735] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.832 Base_1 00:13:51.832 Base_2 00:13:51.832 02:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:13:51.832 02:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:51.832 02:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:13:52.091 02:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:13:52.091 02:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:13:52.091 02:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:13:52.091 02:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:13:52.091 02:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:13:52.091 02:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:52.091 02:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:52.091 02:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:52.091 02:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:13:52.091 02:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:52.091 02:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:52.091 02:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:13:52.350 [2024-07-11 02:19:42.562241] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x284e740 00:13:52.350 /dev/nbd0 00:13:52.350 02:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:52.350 02:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:52.350 02:19:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:13:52.350 02:19:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@867 -- # local i 00:13:52.350 02:19:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:52.350 02:19:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:52.350 02:19:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:13:52.350 02:19:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # break 00:13:52.350 02:19:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:52.350 02:19:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:52.350 02:19:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:52.350 1+0 records in 00:13:52.350 1+0 records out 00:13:52.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260785 s, 15.7 MB/s 00:13:52.350 02:19:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:13:52.350 02:19:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # size=4096 00:13:52.350 02:19:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:13:52.350 02:19:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:52.350 02:19:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # return 0 00:13:52.350 02:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:52.350 02:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:52.350 02:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:52.351 
02:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:52.351 02:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:52.610 { 00:13:52.610 "nbd_device": "/dev/nbd0", 00:13:52.610 "bdev_name": "raid" 00:13:52.610 } 00:13:52.610 ]' 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:52.610 { 00:13:52.610 "nbd_device": "/dev/nbd0", 00:13:52.610 "bdev_name": "raid" 00:13:52.610 } 00:13:52.610 ]' 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # blksize=512 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd 
if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:13:52.610 4096+0 records in 00:13:52.610 4096+0 records out 00:13:52.610 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0299533 s, 70.0 MB/s 00:13:52.610 02:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:52.900 4096+0 records in 00:13:52.900 4096+0 records out 00:13:52.900 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.280784 s, 7.5 MB/s 00:13:52.900 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:13:52.900 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:52.900 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:13:52.900 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:52.900 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:13:52.900 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:13:52.900 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:52.900 128+0 records in 00:13:52.900 128+0 records out 00:13:52.900 65536 bytes (66 kB, 64 KiB) copied, 0.000846594 s, 77.4 MB/s 00:13:52.900 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:52.900 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:52.900 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:52.900 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:52.900 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:52.900 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:13:52.900 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:13:52.900 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:52.900 2035+0 records in 00:13:52.900 2035+0 records out 00:13:52.900 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00549772 s, 190 MB/s 00:13:52.900 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:52.900 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:52.900 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:52.900 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:52.900 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:52.900 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:13:52.900 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:13:52.900 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:52.900 456+0 records in 00:13:52.900 456+0 
records out 00:13:52.900 233472 bytes (233 kB, 228 KiB) copied, 0.00200443 s, 116 MB/s 00:13:52.900 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:52.900 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:52.900 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:53.224 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:53.224 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:53.224 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:13:53.224 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:13:53.224 02:19:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:53.224 02:19:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:53.224 02:19:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:53.224 02:19:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:13:53.224 02:19:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:53.224 02:19:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:13:53.224 02:19:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:53.224 [2024-07-11 02:19:43.601227] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.224 02:19:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:53.224 02:19:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:53.224 02:19:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:53.224 02:19:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:53.224 02:19:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:53.224 02:19:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:13:53.224 02:19:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:13:53.224 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:53.224 02:19:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:53.224 02:19:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:53.482 02:19:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:53.482 02:19:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:53.482 02:19:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:53.741 02:19:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:53.741 02:19:43 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@65 -- # echo '' 00:13:53.741 02:19:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:53.741 02:19:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:13:53.741 02:19:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:13:53.741 02:19:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:13:53.741 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:13:53.741 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:13:53.741 02:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 1890014 00:13:53.741 02:19:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@948 -- # '[' -z 1890014 ']' 00:13:53.741 02:19:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # kill -0 1890014 00:13:53.741 02:19:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # uname 00:13:53.741 02:19:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:53.741 02:19:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1890014 00:13:53.741 02:19:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:53.741 02:19:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:53.741 02:19:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1890014' 00:13:53.741 killing process with pid 1890014 00:13:53.741 02:19:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@967 -- # kill 1890014 00:13:53.741 [2024-07-11 02:19:43.993317] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:53.741 [2024-07-11 02:19:43.993378] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.741 [2024-07-11 02:19:43.993415] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:53.741 [2024-07-11 02:19:43.993426] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x284e320 name raid, state offline 00:13:53.741 02:19:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # wait 1890014 00:13:53.741 [2024-07-11 02:19:44.009942] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:54.000 02:19:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:13:54.000 00:13:54.000 real 0m2.825s 00:13:54.000 user 0m3.842s 00:13:54.000 sys 0m1.221s 00:13:54.000 02:19:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:54.000 02:19:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:54.000 ************************************ 00:13:54.000 END TEST raid_function_test_concat 00:13:54.000 ************************************ 00:13:54.000 02:19:44 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:54.000 02:19:44 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:13:54.000 02:19:44 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:54.000 02:19:44 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:54.000 02:19:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
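Both function tests above exercise the same unmap/readback loop (raid_unmap_data_verify): fill the exported nbd device with known random data, then for each (offset, length) window zero that window in the reference file, discard the same window on the device, flush, and byte-compare the full 2 MiB region. A condensed sketch, with offsets and lengths exactly as in the trace (1028·512 = 526336, 2035·512 = 1041920, 321·512 = 164352, 456·512 = 233472); the comparison relies on discarded regions reading back as zeroes, which holds for the malloc-backed raid here:

```bash
nbd=/dev/nbd0; blksize=512
dd if=/dev/urandom of=/raidtest/raidrandtest bs=$blksize count=4096
dd if=/raidtest/raidrandtest of=$nbd bs=$blksize count=4096 oflag=direct
blockdev --flushbufs $nbd
cmp -b -n 2097152 /raidtest/raidrandtest $nbd    # baseline: full match

offs=(0 1028 321); nums=(128 2035 456)
for i in 0 1 2; do
  # Zero the window in the reference file, discard it on the device,
  # then re-verify the whole 2 MiB region byte for byte.
  dd if=/dev/zero of=/raidtest/raidrandtest bs=$blksize \
     seek=${offs[i]} count=${nums[i]} conv=notrunc
  blkdiscard -o $(( offs[i] * blksize )) -l $(( nums[i] * blksize )) $nbd
  blockdev --flushbufs $nbd
  cmp -b -n 2097152 /raidtest/raidrandtest $nbd
done
```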
00:13:54.000 ************************************ 00:13:54.000 START TEST raid0_resize_test 00:13:54.000 ************************************ 00:13:54.000 02:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1123 -- # raid0_resize_test 00:13:54.000 02:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local blksize=512 00:13:54.000 02:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:13:54.000 02:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:13:54.000 02:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:13:54.000 02:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb 00:13:54.000 02:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:13:54.000 02:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=1890456 00:13:54.000 02:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 1890456' 00:13:54.000 Process raid pid: 1890456 00:13:54.000 02:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:54.000 02:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 1890456 /var/tmp/spdk-raid.sock 00:13:54.000 02:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@829 -- # '[' -z 1890456 ']' 00:13:54.000 02:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:54.000 02:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:54.000 02:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:54.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:54.000 02:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:54.000 02:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.000 [2024-07-11 02:19:44.344455] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
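The resize test that follows builds a raid0 from two 32 MiB null bdevs with 512 B blocks: 65536 blocks per base, so the raid exposes 2 × 65536 = 131072 blocks = 64 MiB; once both bases are resized to 64 MiB it should report 262144 blocks = 128 MiB. The blkcnt checks later in the trace compute exactly that; a sketch of the size check they perform:

```bash
# Size check as in the trace: num_blocks via bdev_get_bdevs + jq,
# converted to MiB at the 512 B block size.
blkcnt=$($rpc_py bdev_get_bdevs -b Raid | jq '.[].num_blocks')
raid_size_mb=$(( blkcnt * 512 / 1048576 ))   # 131072 -> 64, 262144 -> 128
[ "$raid_size_mb" -eq 64 ] || echo "unexpected raid size: ${raid_size_mb} MiB"
```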
00:13:54.001 [2024-07-11 02:19:44.344515] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.263 [2024-07-11 02:19:44.472665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.263 [2024-07-11 02:19:44.525197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.263 [2024-07-11 02:19:44.589649] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.263 [2024-07-11 02:19:44.589673] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.522 02:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:54.523 02:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # return 0 00:13:54.523 02:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:13:54.783 Base_1 00:13:54.783 02:19:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:13:55.043 Base_2 00:13:55.043 02:19:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:13:55.303 [2024-07-11 02:19:45.545445] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:55.303 [2024-07-11 02:19:45.546766] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:55.303 [2024-07-11 02:19:45.546814] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2448ae0 00:13:55.303 [2024-07-11 02:19:45.546824] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:55.303 [2024-07-11 02:19:45.547036] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22dca70 00:13:55.303 [2024-07-11 02:19:45.547127] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2448ae0 00:13:55.303 [2024-07-11 02:19:45.547136] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x2448ae0 00:13:55.303 [2024-07-11 02:19:45.547248] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.303 02:19:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:13:55.563 [2024-07-11 02:19:45.798092] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:55.563 [2024-07-11 02:19:45.798110] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:13:55.563 true 00:13:55.563 02:19:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:13:55.563 02:19:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:55.823 [2024-07-11 02:19:46.050917] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:55.823 02:19:46 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:13:55.823 02:19:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:13:55.823 02:19:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:13:55.823 02:19:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:13:56.088 [2024-07-11 02:19:46.307419] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:56.088 [2024-07-11 02:19:46.307434] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:13:56.088 [2024-07-11 02:19:46.307458] bdev_raid.c:2289:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:13:56.088 true 00:13:56.088 02:19:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:56.088 02:19:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks' 00:13:56.349 [2024-07-11 02:19:46.564280] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:56.349 02:19:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:13:56.349 02:19:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:13:56.349 02:19:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:13:56.349 02:19:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 1890456 00:13:56.349 02:19:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@948 -- # '[' -z 1890456 ']' 00:13:56.349 02:19:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # kill -0 1890456 00:13:56.349 02:19:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # uname 00:13:56.349 02:19:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:56.349 02:19:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1890456 00:13:56.349 02:19:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:56.349 02:19:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:56.349 02:19:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1890456' 00:13:56.349 killing process with pid 1890456 00:13:56.349 02:19:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@967 -- # kill 1890456 00:13:56.349 [2024-07-11 02:19:46.634256] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:56.349 [2024-07-11 02:19:46.634311] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:56.349 [2024-07-11 02:19:46.634349] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:56.349 [2024-07-11 02:19:46.634366] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2448ae0 name Raid, state offline 00:13:56.349 02:19:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # wait 1890456 00:13:56.349 [2024-07-11 02:19:46.636198] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:56.610 02:19:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 
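For reference, the resize round-trip just verified reduces to three RPCs, all shown literally in the trace; note the raid only grows once every base bdev has grown:

```bash
$rpc_py bdev_null_resize Base_1 64   # raid still 131072 blocks (64 MiB)
$rpc_py bdev_null_resize Base_2 64   # raid grows: 131072 -> 262144 blocks
$rpc_py bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # expect 262144
```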
00:13:56.610 00:13:56.610 real 0m2.625s 00:13:56.610 user 0m4.307s 00:13:56.610 sys 0m0.642s 00:13:56.610 02:19:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:56.610 02:19:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.610 ************************************ 00:13:56.610 END TEST raid0_resize_test 00:13:56.610 ************************************ 00:13:56.610 02:19:46 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:56.610 02:19:46 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:13:56.610 02:19:46 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:13:56.610 02:19:46 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:13:56.610 02:19:46 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:56.610 02:19:46 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:56.610 02:19:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:56.610 ************************************ 00:13:56.610 START TEST raid_state_function_test 00:13:56.610 ************************************ 00:13:56.610 02:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 false 00:13:56.610 02:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:13:56.610 02:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:13:56.610 02:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:13:56.610 02:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:56.610 02:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:56.610 02:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:56.610 02:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:56.610 02:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:56.610 02:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:56.610 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:56.610 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:56.610 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:56.610 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:56.610 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:56.610 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:56.610 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:56.610 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:56.610 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:56.611 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:13:56.611 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:56.611 02:19:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:56.611 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:13:56.611 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:13:56.611 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1890844 00:13:56.611 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1890844' 00:13:56.611 Process raid pid: 1890844 00:13:56.611 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:56.611 02:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1890844 /var/tmp/spdk-raid.sock 00:13:56.611 02:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 1890844 ']' 00:13:56.611 02:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:56.611 02:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:56.611 02:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:56.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:56.611 02:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:56.611 02:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.871 [2024-07-11 02:19:47.066357] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
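The state-function test that starts here checks the raid's lifecycle state after each RPC by dumping bdev_raid_get_bdevs all and jq-selecting the named bdev, as the JSON that follows shows. A minimal sketch of that check (the verify_state helper name is this sketch's own; the field names match the dump):

```bash
# Check that a raid bdev reports the expected state ("configuring",
# "online", or "offline"); helper name is illustrative only.
verify_state() {
  local name=$1 expected=$2 state
  state=$($rpc_py bdev_raid_get_bdevs all |
          jq -r ".[] | select(.name == \"$name\") | .state")
  [ "$state" = "$expected" ] || { echo "state '$state' != '$expected'" >&2; return 1; }
}
verify_state Existed_Raid configuring
```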
00:13:56.871 [2024-07-11 02:19:47.066424] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.871 [2024-07-11 02:19:47.205328] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.871 [2024-07-11 02:19:47.253556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.131 [2024-07-11 02:19:47.315973] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:57.131 [2024-07-11 02:19:47.316006] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:57.701 02:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:57.701 02:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:13:57.701 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:57.986 [2024-07-11 02:19:48.226556] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:57.986 [2024-07-11 02:19:48.226600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:57.986 [2024-07-11 02:19:48.226611] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:57.986 [2024-07-11 02:19:48.226623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:57.986 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:57.986 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:57.986 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:57.986 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:57.986 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:57.986 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:57.986 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:57.986 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:57.986 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:57.986 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:57.986 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.986 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.245 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:58.245 "name": "Existed_Raid", 00:13:58.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.245 "strip_size_kb": 64, 00:13:58.245 "state": "configuring", 00:13:58.245 "raid_level": "raid0", 00:13:58.245 "superblock": false, 00:13:58.245 
"num_base_bdevs": 2, 00:13:58.245 "num_base_bdevs_discovered": 0, 00:13:58.245 "num_base_bdevs_operational": 2, 00:13:58.245 "base_bdevs_list": [ 00:13:58.245 { 00:13:58.245 "name": "BaseBdev1", 00:13:58.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.245 "is_configured": false, 00:13:58.245 "data_offset": 0, 00:13:58.245 "data_size": 0 00:13:58.245 }, 00:13:58.245 { 00:13:58.245 "name": "BaseBdev2", 00:13:58.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.245 "is_configured": false, 00:13:58.245 "data_offset": 0, 00:13:58.245 "data_size": 0 00:13:58.245 } 00:13:58.245 ] 00:13:58.245 }' 00:13:58.245 02:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:58.245 02:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.815 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:59.076 [2024-07-11 02:19:49.309388] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:59.076 [2024-07-11 02:19:49.309418] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23fd730 name Existed_Raid, state configuring 00:13:59.076 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:59.337 [2024-07-11 02:19:49.558058] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:59.337 [2024-07-11 02:19:49.558084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:59.337 [2024-07-11 02:19:49.558094] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:59.337 [2024-07-11 02:19:49.558105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:59.337 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:59.597 [2024-07-11 02:19:49.812479] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:59.597 BaseBdev1 00:13:59.597 02:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:59.597 02:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:59.597 02:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:59.597 02:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:59.597 02:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:59.597 02:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:59.597 02:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:59.857 02:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:00.118 [ 
00:14:00.118 { 00:14:00.118 "name": "BaseBdev1", 00:14:00.118 "aliases": [ 00:14:00.118 "0b576c72-a1c4-4268-ac7d-bc1571d1f569" 00:14:00.118 ], 00:14:00.118 "product_name": "Malloc disk", 00:14:00.118 "block_size": 512, 00:14:00.118 "num_blocks": 65536, 00:14:00.118 "uuid": "0b576c72-a1c4-4268-ac7d-bc1571d1f569", 00:14:00.118 "assigned_rate_limits": { 00:14:00.118 "rw_ios_per_sec": 0, 00:14:00.118 "rw_mbytes_per_sec": 0, 00:14:00.118 "r_mbytes_per_sec": 0, 00:14:00.118 "w_mbytes_per_sec": 0 00:14:00.118 }, 00:14:00.118 "claimed": true, 00:14:00.118 "claim_type": "exclusive_write", 00:14:00.118 "zoned": false, 00:14:00.118 "supported_io_types": { 00:14:00.118 "read": true, 00:14:00.118 "write": true, 00:14:00.118 "unmap": true, 00:14:00.118 "flush": true, 00:14:00.118 "reset": true, 00:14:00.118 "nvme_admin": false, 00:14:00.118 "nvme_io": false, 00:14:00.118 "nvme_io_md": false, 00:14:00.118 "write_zeroes": true, 00:14:00.118 "zcopy": true, 00:14:00.118 "get_zone_info": false, 00:14:00.118 "zone_management": false, 00:14:00.118 "zone_append": false, 00:14:00.118 "compare": false, 00:14:00.118 "compare_and_write": false, 00:14:00.118 "abort": true, 00:14:00.118 "seek_hole": false, 00:14:00.118 "seek_data": false, 00:14:00.118 "copy": true, 00:14:00.118 "nvme_iov_md": false 00:14:00.118 }, 00:14:00.118 "memory_domains": [ 00:14:00.118 { 00:14:00.118 "dma_device_id": "system", 00:14:00.118 "dma_device_type": 1 00:14:00.118 }, 00:14:00.118 { 00:14:00.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.118 "dma_device_type": 2 00:14:00.118 } 00:14:00.118 ], 00:14:00.118 "driver_specific": {} 00:14:00.118 } 00:14:00.118 ] 00:14:00.118 02:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:00.118 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:00.118 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:00.118 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:00.118 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:00.118 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:00.118 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:00.118 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:00.118 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:00.118 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:00.118 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:00.118 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.118 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.379 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:00.379 "name": "Existed_Raid", 00:14:00.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.379 "strip_size_kb": 64, 00:14:00.379 "state": "configuring", 00:14:00.379 "raid_level": "raid0", 
00:14:00.379 "superblock": false, 00:14:00.379 "num_base_bdevs": 2, 00:14:00.379 "num_base_bdevs_discovered": 1, 00:14:00.379 "num_base_bdevs_operational": 2, 00:14:00.379 "base_bdevs_list": [ 00:14:00.379 { 00:14:00.379 "name": "BaseBdev1", 00:14:00.379 "uuid": "0b576c72-a1c4-4268-ac7d-bc1571d1f569", 00:14:00.379 "is_configured": true, 00:14:00.379 "data_offset": 0, 00:14:00.379 "data_size": 65536 00:14:00.379 }, 00:14:00.379 { 00:14:00.379 "name": "BaseBdev2", 00:14:00.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.379 "is_configured": false, 00:14:00.379 "data_offset": 0, 00:14:00.379 "data_size": 0 00:14:00.379 } 00:14:00.379 ] 00:14:00.379 }' 00:14:00.379 02:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:00.379 02:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.949 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:01.209 [2024-07-11 02:19:51.428765] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:01.209 [2024-07-11 02:19:51.428810] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23fd060 name Existed_Raid, state configuring 00:14:01.209 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:01.470 [2024-07-11 02:19:51.681452] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:01.470 [2024-07-11 02:19:51.683075] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:01.470 [2024-07-11 02:19:51.683108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:01.470 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:01.470 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:01.470 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:01.470 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:01.470 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:01.470 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:01.470 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:01.470 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:01.470 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:01.470 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:01.470 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:01.470 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:01.470 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.470 02:19:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.730 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:01.730 "name": "Existed_Raid", 00:14:01.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.730 "strip_size_kb": 64, 00:14:01.730 "state": "configuring", 00:14:01.730 "raid_level": "raid0", 00:14:01.730 "superblock": false, 00:14:01.730 "num_base_bdevs": 2, 00:14:01.730 "num_base_bdevs_discovered": 1, 00:14:01.730 "num_base_bdevs_operational": 2, 00:14:01.730 "base_bdevs_list": [ 00:14:01.730 { 00:14:01.730 "name": "BaseBdev1", 00:14:01.730 "uuid": "0b576c72-a1c4-4268-ac7d-bc1571d1f569", 00:14:01.730 "is_configured": true, 00:14:01.730 "data_offset": 0, 00:14:01.730 "data_size": 65536 00:14:01.730 }, 00:14:01.730 { 00:14:01.730 "name": "BaseBdev2", 00:14:01.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.730 "is_configured": false, 00:14:01.730 "data_offset": 0, 00:14:01.730 "data_size": 0 00:14:01.730 } 00:14:01.730 ] 00:14:01.730 }' 00:14:01.730 02:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:01.730 02:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.299 02:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:02.559 [2024-07-11 02:19:52.825038] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:02.559 [2024-07-11 02:19:52.825074] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x25afb70 00:14:02.559 [2024-07-11 02:19:52.825083] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:02.559 [2024-07-11 02:19:52.825323] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23fcc80 00:14:02.559 [2024-07-11 02:19:52.825440] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25afb70 00:14:02.559 [2024-07-11 02:19:52.825450] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x25afb70 00:14:02.559 [2024-07-11 02:19:52.825611] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.559 BaseBdev2 00:14:02.559 02:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:02.559 02:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:02.559 02:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:02.559 02:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:02.559 02:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:02.559 02:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:02.559 02:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:02.818 02:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 
00:14:03.076 [ 00:14:03.076 { 00:14:03.076 "name": "BaseBdev2", 00:14:03.076 "aliases": [ 00:14:03.076 "909d2282-123e-40d0-974a-4200888876c9" 00:14:03.076 ], 00:14:03.076 "product_name": "Malloc disk", 00:14:03.076 "block_size": 512, 00:14:03.076 "num_blocks": 65536, 00:14:03.076 "uuid": "909d2282-123e-40d0-974a-4200888876c9", 00:14:03.076 "assigned_rate_limits": { 00:14:03.076 "rw_ios_per_sec": 0, 00:14:03.076 "rw_mbytes_per_sec": 0, 00:14:03.076 "r_mbytes_per_sec": 0, 00:14:03.076 "w_mbytes_per_sec": 0 00:14:03.076 }, 00:14:03.076 "claimed": true, 00:14:03.076 "claim_type": "exclusive_write", 00:14:03.076 "zoned": false, 00:14:03.076 "supported_io_types": { 00:14:03.076 "read": true, 00:14:03.076 "write": true, 00:14:03.076 "unmap": true, 00:14:03.076 "flush": true, 00:14:03.076 "reset": true, 00:14:03.076 "nvme_admin": false, 00:14:03.076 "nvme_io": false, 00:14:03.076 "nvme_io_md": false, 00:14:03.076 "write_zeroes": true, 00:14:03.076 "zcopy": true, 00:14:03.076 "get_zone_info": false, 00:14:03.076 "zone_management": false, 00:14:03.076 "zone_append": false, 00:14:03.076 "compare": false, 00:14:03.076 "compare_and_write": false, 00:14:03.076 "abort": true, 00:14:03.076 "seek_hole": false, 00:14:03.076 "seek_data": false, 00:14:03.076 "copy": true, 00:14:03.076 "nvme_iov_md": false 00:14:03.076 }, 00:14:03.076 "memory_domains": [ 00:14:03.076 { 00:14:03.076 "dma_device_id": "system", 00:14:03.076 "dma_device_type": 1 00:14:03.076 }, 00:14:03.076 { 00:14:03.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.076 "dma_device_type": 2 00:14:03.076 } 00:14:03.076 ], 00:14:03.076 "driver_specific": {} 00:14:03.076 } 00:14:03.076 ] 00:14:03.076 02:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:03.076 02:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:03.077 02:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:03.077 02:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:03.077 02:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:03.077 02:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:03.077 02:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:03.077 02:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:03.077 02:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:03.077 02:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:03.077 02:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:03.077 02:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:03.077 02:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:03.077 02:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:03.077 02:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.335 02:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:14:03.335 "name": "Existed_Raid", 00:14:03.335 "uuid": "41eb3e70-15eb-442b-b475-9fcb5ed3e2ae", 00:14:03.335 "strip_size_kb": 64, 00:14:03.335 "state": "online", 00:14:03.335 "raid_level": "raid0", 00:14:03.335 "superblock": false, 00:14:03.335 "num_base_bdevs": 2, 00:14:03.335 "num_base_bdevs_discovered": 2, 00:14:03.335 "num_base_bdevs_operational": 2, 00:14:03.335 "base_bdevs_list": [ 00:14:03.335 { 00:14:03.335 "name": "BaseBdev1", 00:14:03.335 "uuid": "0b576c72-a1c4-4268-ac7d-bc1571d1f569", 00:14:03.335 "is_configured": true, 00:14:03.335 "data_offset": 0, 00:14:03.335 "data_size": 65536 00:14:03.335 }, 00:14:03.335 { 00:14:03.335 "name": "BaseBdev2", 00:14:03.335 "uuid": "909d2282-123e-40d0-974a-4200888876c9", 00:14:03.335 "is_configured": true, 00:14:03.335 "data_offset": 0, 00:14:03.335 "data_size": 65536 00:14:03.335 } 00:14:03.335 ] 00:14:03.335 }' 00:14:03.335 02:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:03.335 02:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.275 02:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:04.275 02:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:04.275 02:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:04.275 02:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:04.275 02:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:04.275 02:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:04.275 02:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:04.275 02:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:04.536 [2024-07-11 02:19:54.742434] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:04.536 02:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:04.536 "name": "Existed_Raid", 00:14:04.536 "aliases": [ 00:14:04.536 "41eb3e70-15eb-442b-b475-9fcb5ed3e2ae" 00:14:04.536 ], 00:14:04.536 "product_name": "Raid Volume", 00:14:04.536 "block_size": 512, 00:14:04.536 "num_blocks": 131072, 00:14:04.536 "uuid": "41eb3e70-15eb-442b-b475-9fcb5ed3e2ae", 00:14:04.536 "assigned_rate_limits": { 00:14:04.536 "rw_ios_per_sec": 0, 00:14:04.536 "rw_mbytes_per_sec": 0, 00:14:04.536 "r_mbytes_per_sec": 0, 00:14:04.536 "w_mbytes_per_sec": 0 00:14:04.536 }, 00:14:04.536 "claimed": false, 00:14:04.536 "zoned": false, 00:14:04.536 "supported_io_types": { 00:14:04.536 "read": true, 00:14:04.536 "write": true, 00:14:04.536 "unmap": true, 00:14:04.536 "flush": true, 00:14:04.536 "reset": true, 00:14:04.536 "nvme_admin": false, 00:14:04.536 "nvme_io": false, 00:14:04.536 "nvme_io_md": false, 00:14:04.536 "write_zeroes": true, 00:14:04.536 "zcopy": false, 00:14:04.536 "get_zone_info": false, 00:14:04.536 "zone_management": false, 00:14:04.537 "zone_append": false, 00:14:04.537 "compare": false, 00:14:04.537 "compare_and_write": false, 00:14:04.537 "abort": false, 00:14:04.537 "seek_hole": false, 00:14:04.537 "seek_data": false, 00:14:04.537 "copy": false, 00:14:04.537 "nvme_iov_md": false 00:14:04.537 }, 
00:14:04.537 "memory_domains": [ 00:14:04.537 { 00:14:04.537 "dma_device_id": "system", 00:14:04.537 "dma_device_type": 1 00:14:04.537 }, 00:14:04.537 { 00:14:04.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.537 "dma_device_type": 2 00:14:04.537 }, 00:14:04.537 { 00:14:04.537 "dma_device_id": "system", 00:14:04.537 "dma_device_type": 1 00:14:04.537 }, 00:14:04.537 { 00:14:04.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.537 "dma_device_type": 2 00:14:04.537 } 00:14:04.537 ], 00:14:04.537 "driver_specific": { 00:14:04.537 "raid": { 00:14:04.537 "uuid": "41eb3e70-15eb-442b-b475-9fcb5ed3e2ae", 00:14:04.537 "strip_size_kb": 64, 00:14:04.537 "state": "online", 00:14:04.537 "raid_level": "raid0", 00:14:04.537 "superblock": false, 00:14:04.537 "num_base_bdevs": 2, 00:14:04.537 "num_base_bdevs_discovered": 2, 00:14:04.537 "num_base_bdevs_operational": 2, 00:14:04.537 "base_bdevs_list": [ 00:14:04.537 { 00:14:04.537 "name": "BaseBdev1", 00:14:04.537 "uuid": "0b576c72-a1c4-4268-ac7d-bc1571d1f569", 00:14:04.537 "is_configured": true, 00:14:04.537 "data_offset": 0, 00:14:04.537 "data_size": 65536 00:14:04.537 }, 00:14:04.537 { 00:14:04.537 "name": "BaseBdev2", 00:14:04.537 "uuid": "909d2282-123e-40d0-974a-4200888876c9", 00:14:04.537 "is_configured": true, 00:14:04.537 "data_offset": 0, 00:14:04.537 "data_size": 65536 00:14:04.537 } 00:14:04.537 ] 00:14:04.537 } 00:14:04.537 } 00:14:04.537 }' 00:14:04.537 02:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:04.537 02:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:04.537 BaseBdev2' 00:14:04.537 02:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:04.537 02:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:04.537 02:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:04.797 02:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:04.797 "name": "BaseBdev1", 00:14:04.797 "aliases": [ 00:14:04.797 "0b576c72-a1c4-4268-ac7d-bc1571d1f569" 00:14:04.797 ], 00:14:04.797 "product_name": "Malloc disk", 00:14:04.797 "block_size": 512, 00:14:04.797 "num_blocks": 65536, 00:14:04.797 "uuid": "0b576c72-a1c4-4268-ac7d-bc1571d1f569", 00:14:04.797 "assigned_rate_limits": { 00:14:04.797 "rw_ios_per_sec": 0, 00:14:04.797 "rw_mbytes_per_sec": 0, 00:14:04.797 "r_mbytes_per_sec": 0, 00:14:04.797 "w_mbytes_per_sec": 0 00:14:04.797 }, 00:14:04.797 "claimed": true, 00:14:04.797 "claim_type": "exclusive_write", 00:14:04.797 "zoned": false, 00:14:04.797 "supported_io_types": { 00:14:04.797 "read": true, 00:14:04.797 "write": true, 00:14:04.797 "unmap": true, 00:14:04.797 "flush": true, 00:14:04.797 "reset": true, 00:14:04.797 "nvme_admin": false, 00:14:04.797 "nvme_io": false, 00:14:04.797 "nvme_io_md": false, 00:14:04.797 "write_zeroes": true, 00:14:04.797 "zcopy": true, 00:14:04.797 "get_zone_info": false, 00:14:04.797 "zone_management": false, 00:14:04.797 "zone_append": false, 00:14:04.797 "compare": false, 00:14:04.797 "compare_and_write": false, 00:14:04.797 "abort": true, 00:14:04.797 "seek_hole": false, 00:14:04.797 "seek_data": false, 00:14:04.797 "copy": true, 00:14:04.797 "nvme_iov_md": false 00:14:04.797 }, 
00:14:04.797 "memory_domains": [ 00:14:04.797 { 00:14:04.797 "dma_device_id": "system", 00:14:04.797 "dma_device_type": 1 00:14:04.797 }, 00:14:04.797 { 00:14:04.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.797 "dma_device_type": 2 00:14:04.797 } 00:14:04.797 ], 00:14:04.797 "driver_specific": {} 00:14:04.797 }' 00:14:04.797 02:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:04.797 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:04.797 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:04.797 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:04.797 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:04.797 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:04.797 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:04.797 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:05.057 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:05.057 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:05.057 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:05.057 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:05.057 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:05.057 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:05.057 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:05.316 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:05.316 "name": "BaseBdev2", 00:14:05.316 "aliases": [ 00:14:05.317 "909d2282-123e-40d0-974a-4200888876c9" 00:14:05.317 ], 00:14:05.317 "product_name": "Malloc disk", 00:14:05.317 "block_size": 512, 00:14:05.317 "num_blocks": 65536, 00:14:05.317 "uuid": "909d2282-123e-40d0-974a-4200888876c9", 00:14:05.317 "assigned_rate_limits": { 00:14:05.317 "rw_ios_per_sec": 0, 00:14:05.317 "rw_mbytes_per_sec": 0, 00:14:05.317 "r_mbytes_per_sec": 0, 00:14:05.317 "w_mbytes_per_sec": 0 00:14:05.317 }, 00:14:05.317 "claimed": true, 00:14:05.317 "claim_type": "exclusive_write", 00:14:05.317 "zoned": false, 00:14:05.317 "supported_io_types": { 00:14:05.317 "read": true, 00:14:05.317 "write": true, 00:14:05.317 "unmap": true, 00:14:05.317 "flush": true, 00:14:05.317 "reset": true, 00:14:05.317 "nvme_admin": false, 00:14:05.317 "nvme_io": false, 00:14:05.317 "nvme_io_md": false, 00:14:05.317 "write_zeroes": true, 00:14:05.317 "zcopy": true, 00:14:05.317 "get_zone_info": false, 00:14:05.317 "zone_management": false, 00:14:05.317 "zone_append": false, 00:14:05.317 "compare": false, 00:14:05.317 "compare_and_write": false, 00:14:05.317 "abort": true, 00:14:05.317 "seek_hole": false, 00:14:05.317 "seek_data": false, 00:14:05.317 "copy": true, 00:14:05.317 "nvme_iov_md": false 00:14:05.317 }, 00:14:05.317 "memory_domains": [ 00:14:05.317 { 00:14:05.317 "dma_device_id": "system", 00:14:05.317 "dma_device_type": 1 00:14:05.317 }, 00:14:05.317 { 00:14:05.317 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.317 "dma_device_type": 2 00:14:05.317 } 00:14:05.317 ], 00:14:05.317 "driver_specific": {} 00:14:05.317 }' 00:14:05.317 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:05.317 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:05.577 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:05.577 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:05.577 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:05.577 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:05.577 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:05.577 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:05.577 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:05.577 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:05.577 02:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:05.836 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:05.836 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:06.095 [2024-07-11 02:19:56.514957] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:06.095 [2024-07-11 02:19:56.514980] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:06.095 [2024-07-11 02:19:56.515023] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.354 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:06.354 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:14:06.354 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:06.354 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:06.354 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:06.354 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:06.354 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:06.354 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:06.354 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:06.354 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:06.355 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:06.355 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:06.355 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:06.355 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:06.355 02:19:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:06.355 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.355 02:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.924 02:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:06.924 "name": "Existed_Raid", 00:14:06.924 "uuid": "41eb3e70-15eb-442b-b475-9fcb5ed3e2ae", 00:14:06.924 "strip_size_kb": 64, 00:14:06.924 "state": "offline", 00:14:06.924 "raid_level": "raid0", 00:14:06.924 "superblock": false, 00:14:06.924 "num_base_bdevs": 2, 00:14:06.924 "num_base_bdevs_discovered": 1, 00:14:06.924 "num_base_bdevs_operational": 1, 00:14:06.924 "base_bdevs_list": [ 00:14:06.924 { 00:14:06.924 "name": null, 00:14:06.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.924 "is_configured": false, 00:14:06.924 "data_offset": 0, 00:14:06.924 "data_size": 65536 00:14:06.924 }, 00:14:06.924 { 00:14:06.924 "name": "BaseBdev2", 00:14:06.924 "uuid": "909d2282-123e-40d0-974a-4200888876c9", 00:14:06.924 "is_configured": true, 00:14:06.924 "data_offset": 0, 00:14:06.924 "data_size": 65536 00:14:06.924 } 00:14:06.924 ] 00:14:06.924 }' 00:14:06.924 02:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:06.924 02:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.493 02:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:07.493 02:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:07.493 02:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.493 02:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:07.493 02:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:07.493 02:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:07.493 02:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:08.062 [2024-07-11 02:19:58.399674] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:08.062 [2024-07-11 02:19:58.399726] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25afb70 name Existed_Raid, state offline 00:14:08.062 02:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:08.062 02:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:08.062 02:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.062 02:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:08.322 02:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:08.322 02:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:08.322 02:19:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:14:08.322 02:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1890844 00:14:08.322 02:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 1890844 ']' 00:14:08.322 02:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 1890844 00:14:08.322 02:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:14:08.322 02:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:08.322 02:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1890844 00:14:08.582 02:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:08.582 02:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:08.582 02:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1890844' 00:14:08.582 killing process with pid 1890844 00:14:08.582 02:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 1890844 00:14:08.582 [2024-07-11 02:19:58.762320] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:08.582 02:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 1890844 00:14:08.582 [2024-07-11 02:19:58.763838] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:14:08.842 00:14:08.842 real 0m12.121s 00:14:08.842 user 0m21.514s 00:14:08.842 sys 0m2.215s 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.842 ************************************ 00:14:08.842 END TEST raid_state_function_test 00:14:08.842 ************************************ 00:14:08.842 02:19:59 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:08.842 02:19:59 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:14:08.842 02:19:59 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:08.842 02:19:59 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:08.842 02:19:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:08.842 ************************************ 00:14:08.842 START TEST raid_state_function_test_sb 00:14:08.842 ************************************ 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 true 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= 
num_base_bdevs )) 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1892651 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1892651' 00:14:08.842 Process raid pid: 1892651 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1892651 /var/tmp/spdk-raid.sock 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1892651 ']' 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:08.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:08.842 02:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.103 [2024-07-11 02:19:59.273940] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:14:09.103 [2024-07-11 02:19:59.274008] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.103 [2024-07-11 02:19:59.413142] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.103 [2024-07-11 02:19:59.464502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.103 [2024-07-11 02:19:59.520182] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:09.103 [2024-07-11 02:19:59.520213] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:10.044 02:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:10.044 02:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:14:10.044 02:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:10.044 [2024-07-11 02:20:00.369790] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:10.044 [2024-07-11 02:20:00.369831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:10.044 [2024-07-11 02:20:00.369838] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:10.044 [2024-07-11 02:20:00.369846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:10.044 02:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:10.044 02:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:10.044 02:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:10.044 02:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:10.044 02:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:10.044 02:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:10.044 02:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:10.044 02:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:10.044 02:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:10.044 02:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:10.044 02:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.044 02:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.305 02:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:10.305 "name": "Existed_Raid", 00:14:10.305 "uuid": "6f3522fb-b470-4a08-96f8-ff1720c57542", 00:14:10.305 "strip_size_kb": 64, 00:14:10.305 "state": "configuring", 00:14:10.305 "raid_level": 
"raid0", 00:14:10.305 "superblock": true, 00:14:10.305 "num_base_bdevs": 2, 00:14:10.305 "num_base_bdevs_discovered": 0, 00:14:10.305 "num_base_bdevs_operational": 2, 00:14:10.305 "base_bdevs_list": [ 00:14:10.305 { 00:14:10.305 "name": "BaseBdev1", 00:14:10.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.305 "is_configured": false, 00:14:10.305 "data_offset": 0, 00:14:10.305 "data_size": 0 00:14:10.305 }, 00:14:10.305 { 00:14:10.305 "name": "BaseBdev2", 00:14:10.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.305 "is_configured": false, 00:14:10.305 "data_offset": 0, 00:14:10.305 "data_size": 0 00:14:10.305 } 00:14:10.305 ] 00:14:10.305 }' 00:14:10.305 02:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:10.305 02:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.874 02:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:11.135 [2024-07-11 02:20:01.408449] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:11.135 [2024-07-11 02:20:01.408471] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x232c730 name Existed_Raid, state configuring 00:14:11.135 02:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:11.395 [2024-07-11 02:20:01.665128] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:11.395 [2024-07-11 02:20:01.665149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:11.395 [2024-07-11 02:20:01.665155] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:11.395 [2024-07-11 02:20:01.665166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:11.395 02:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:11.964 [2024-07-11 02:20:02.199693] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:11.964 BaseBdev1 00:14:11.964 02:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:11.964 02:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:11.964 02:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:11.964 02:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:11.964 02:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:11.964 02:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:11.964 02:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:12.534 02:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:12.795 [ 00:14:12.795 { 00:14:12.795 "name": "BaseBdev1", 00:14:12.795 "aliases": [ 00:14:12.795 "8abd9cb6-7123-41f7-802b-eaca82b5d3c3" 00:14:12.795 ], 00:14:12.795 "product_name": "Malloc disk", 00:14:12.795 "block_size": 512, 00:14:12.795 "num_blocks": 65536, 00:14:12.795 "uuid": "8abd9cb6-7123-41f7-802b-eaca82b5d3c3", 00:14:12.795 "assigned_rate_limits": { 00:14:12.795 "rw_ios_per_sec": 0, 00:14:12.795 "rw_mbytes_per_sec": 0, 00:14:12.795 "r_mbytes_per_sec": 0, 00:14:12.795 "w_mbytes_per_sec": 0 00:14:12.795 }, 00:14:12.795 "claimed": true, 00:14:12.795 "claim_type": "exclusive_write", 00:14:12.795 "zoned": false, 00:14:12.795 "supported_io_types": { 00:14:12.795 "read": true, 00:14:12.795 "write": true, 00:14:12.795 "unmap": true, 00:14:12.795 "flush": true, 00:14:12.795 "reset": true, 00:14:12.795 "nvme_admin": false, 00:14:12.795 "nvme_io": false, 00:14:12.795 "nvme_io_md": false, 00:14:12.795 "write_zeroes": true, 00:14:12.795 "zcopy": true, 00:14:12.795 "get_zone_info": false, 00:14:12.795 "zone_management": false, 00:14:12.795 "zone_append": false, 00:14:12.795 "compare": false, 00:14:12.795 "compare_and_write": false, 00:14:12.795 "abort": true, 00:14:12.795 "seek_hole": false, 00:14:12.795 "seek_data": false, 00:14:12.795 "copy": true, 00:14:12.795 "nvme_iov_md": false 00:14:12.795 }, 00:14:12.795 "memory_domains": [ 00:14:12.795 { 00:14:12.795 "dma_device_id": "system", 00:14:12.795 "dma_device_type": 1 00:14:12.795 }, 00:14:12.795 { 00:14:12.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.795 "dma_device_type": 2 00:14:12.795 } 00:14:12.795 ], 00:14:12.795 "driver_specific": {} 00:14:12.795 } 00:14:12.795 ] 00:14:12.795 02:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:12.795 02:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:12.795 02:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:12.795 02:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:12.795 02:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:12.795 02:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:12.795 02:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:12.795 02:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:12.795 02:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:12.795 02:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:12.795 02:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:12.795 02:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.795 02:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.795 02:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:12.795 "name": 
"Existed_Raid", 00:14:12.795 "uuid": "845d35ef-1954-4727-9746-103b93bc9c18", 00:14:12.795 "strip_size_kb": 64, 00:14:12.795 "state": "configuring", 00:14:12.795 "raid_level": "raid0", 00:14:12.795 "superblock": true, 00:14:12.795 "num_base_bdevs": 2, 00:14:12.795 "num_base_bdevs_discovered": 1, 00:14:12.795 "num_base_bdevs_operational": 2, 00:14:12.796 "base_bdevs_list": [ 00:14:12.796 { 00:14:12.796 "name": "BaseBdev1", 00:14:12.796 "uuid": "8abd9cb6-7123-41f7-802b-eaca82b5d3c3", 00:14:12.796 "is_configured": true, 00:14:12.796 "data_offset": 2048, 00:14:12.796 "data_size": 63488 00:14:12.796 }, 00:14:12.796 { 00:14:12.796 "name": "BaseBdev2", 00:14:12.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.796 "is_configured": false, 00:14:12.796 "data_offset": 0, 00:14:12.796 "data_size": 0 00:14:12.796 } 00:14:12.796 ] 00:14:12.796 }' 00:14:12.796 02:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:12.796 02:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.740 02:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:13.740 [2024-07-11 02:20:04.024408] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:13.740 [2024-07-11 02:20:04.024443] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x232c060 name Existed_Raid, state configuring 00:14:13.740 02:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:14.000 [2024-07-11 02:20:04.269099] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:14.000 [2024-07-11 02:20:04.270259] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:14.000 [2024-07-11 02:20:04.270286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:14.000 02:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:14.001 02:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:14.001 02:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:14.001 02:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:14.001 02:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:14.001 02:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:14.001 02:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:14.001 02:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:14.001 02:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:14.001 02:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:14.001 02:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:14.001 02:20:04 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@124 -- # local tmp 00:14:14.001 02:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.001 02:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.261 02:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:14.261 "name": "Existed_Raid", 00:14:14.261 "uuid": "d7b733a2-5d4a-4f00-806a-2e13dff7424d", 00:14:14.261 "strip_size_kb": 64, 00:14:14.261 "state": "configuring", 00:14:14.261 "raid_level": "raid0", 00:14:14.261 "superblock": true, 00:14:14.261 "num_base_bdevs": 2, 00:14:14.261 "num_base_bdevs_discovered": 1, 00:14:14.261 "num_base_bdevs_operational": 2, 00:14:14.261 "base_bdevs_list": [ 00:14:14.261 { 00:14:14.261 "name": "BaseBdev1", 00:14:14.261 "uuid": "8abd9cb6-7123-41f7-802b-eaca82b5d3c3", 00:14:14.261 "is_configured": true, 00:14:14.261 "data_offset": 2048, 00:14:14.261 "data_size": 63488 00:14:14.261 }, 00:14:14.261 { 00:14:14.261 "name": "BaseBdev2", 00:14:14.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.261 "is_configured": false, 00:14:14.261 "data_offset": 0, 00:14:14.261 "data_size": 0 00:14:14.261 } 00:14:14.261 ] 00:14:14.261 }' 00:14:14.261 02:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:14.261 02:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.829 02:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:15.090 [2024-07-11 02:20:05.367323] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:15.090 [2024-07-11 02:20:05.367443] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x24deb70 00:14:15.090 [2024-07-11 02:20:05.367451] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:15.090 [2024-07-11 02:20:05.367565] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x232e7a0 00:14:15.090 [2024-07-11 02:20:05.367647] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x24deb70 00:14:15.090 [2024-07-11 02:20:05.367654] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x24deb70 00:14:15.090 [2024-07-11 02:20:05.367715] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.090 BaseBdev2 00:14:15.090 02:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:15.090 02:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:15.090 02:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:15.090 02:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:15.090 02:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:15.090 02:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:15.090 02:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:15.350 02:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:15.611 [ 00:14:15.611 { 00:14:15.611 "name": "BaseBdev2", 00:14:15.611 "aliases": [ 00:14:15.611 "5fc54c82-9db4-40d3-af53-85e7f16b73d0" 00:14:15.611 ], 00:14:15.611 "product_name": "Malloc disk", 00:14:15.611 "block_size": 512, 00:14:15.611 "num_blocks": 65536, 00:14:15.611 "uuid": "5fc54c82-9db4-40d3-af53-85e7f16b73d0", 00:14:15.611 "assigned_rate_limits": { 00:14:15.611 "rw_ios_per_sec": 0, 00:14:15.611 "rw_mbytes_per_sec": 0, 00:14:15.611 "r_mbytes_per_sec": 0, 00:14:15.611 "w_mbytes_per_sec": 0 00:14:15.611 }, 00:14:15.611 "claimed": true, 00:14:15.611 "claim_type": "exclusive_write", 00:14:15.611 "zoned": false, 00:14:15.611 "supported_io_types": { 00:14:15.611 "read": true, 00:14:15.611 "write": true, 00:14:15.611 "unmap": true, 00:14:15.611 "flush": true, 00:14:15.611 "reset": true, 00:14:15.611 "nvme_admin": false, 00:14:15.611 "nvme_io": false, 00:14:15.611 "nvme_io_md": false, 00:14:15.611 "write_zeroes": true, 00:14:15.611 "zcopy": true, 00:14:15.611 "get_zone_info": false, 00:14:15.611 "zone_management": false, 00:14:15.611 "zone_append": false, 00:14:15.611 "compare": false, 00:14:15.611 "compare_and_write": false, 00:14:15.611 "abort": true, 00:14:15.611 "seek_hole": false, 00:14:15.611 "seek_data": false, 00:14:15.611 "copy": true, 00:14:15.611 "nvme_iov_md": false 00:14:15.611 }, 00:14:15.611 "memory_domains": [ 00:14:15.611 { 00:14:15.611 "dma_device_id": "system", 00:14:15.611 "dma_device_type": 1 00:14:15.611 }, 00:14:15.611 { 00:14:15.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.611 "dma_device_type": 2 00:14:15.611 } 00:14:15.611 ], 00:14:15.611 "driver_specific": {} 00:14:15.611 } 00:14:15.611 ] 00:14:15.611 02:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:15.611 02:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:15.611 02:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:15.611 02:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:15.611 02:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:15.611 02:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:15.611 02:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:15.611 02:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:15.611 02:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:15.611 02:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:15.611 02:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:15.611 02:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:15.611 02:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:15.611 02:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.611 02:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.873 02:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:15.873 "name": "Existed_Raid", 00:14:15.873 "uuid": "d7b733a2-5d4a-4f00-806a-2e13dff7424d", 00:14:15.873 "strip_size_kb": 64, 00:14:15.873 "state": "online", 00:14:15.873 "raid_level": "raid0", 00:14:15.873 "superblock": true, 00:14:15.873 "num_base_bdevs": 2, 00:14:15.873 "num_base_bdevs_discovered": 2, 00:14:15.873 "num_base_bdevs_operational": 2, 00:14:15.873 "base_bdevs_list": [ 00:14:15.873 { 00:14:15.873 "name": "BaseBdev1", 00:14:15.873 "uuid": "8abd9cb6-7123-41f7-802b-eaca82b5d3c3", 00:14:15.873 "is_configured": true, 00:14:15.873 "data_offset": 2048, 00:14:15.873 "data_size": 63488 00:14:15.873 }, 00:14:15.873 { 00:14:15.873 "name": "BaseBdev2", 00:14:15.873 "uuid": "5fc54c82-9db4-40d3-af53-85e7f16b73d0", 00:14:15.873 "is_configured": true, 00:14:15.873 "data_offset": 2048, 00:14:15.873 "data_size": 63488 00:14:15.873 } 00:14:15.873 ] 00:14:15.873 }' 00:14:15.873 02:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:15.873 02:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.444 02:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:16.444 02:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:16.444 02:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:16.444 02:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:16.444 02:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:16.444 02:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:16.445 02:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:16.445 02:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:16.703 [2024-07-11 02:20:06.983666] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:16.703 02:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:16.703 "name": "Existed_Raid", 00:14:16.703 "aliases": [ 00:14:16.703 "d7b733a2-5d4a-4f00-806a-2e13dff7424d" 00:14:16.703 ], 00:14:16.703 "product_name": "Raid Volume", 00:14:16.703 "block_size": 512, 00:14:16.703 "num_blocks": 126976, 00:14:16.703 "uuid": "d7b733a2-5d4a-4f00-806a-2e13dff7424d", 00:14:16.703 "assigned_rate_limits": { 00:14:16.703 "rw_ios_per_sec": 0, 00:14:16.703 "rw_mbytes_per_sec": 0, 00:14:16.703 "r_mbytes_per_sec": 0, 00:14:16.703 "w_mbytes_per_sec": 0 00:14:16.703 }, 00:14:16.703 "claimed": false, 00:14:16.703 "zoned": false, 00:14:16.704 "supported_io_types": { 00:14:16.704 "read": true, 00:14:16.704 "write": true, 00:14:16.704 "unmap": true, 00:14:16.704 "flush": true, 00:14:16.704 "reset": true, 00:14:16.704 "nvme_admin": false, 00:14:16.704 "nvme_io": false, 00:14:16.704 "nvme_io_md": false, 00:14:16.704 "write_zeroes": true, 
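# For reference: the verify_raid_bdev_state helper seen in this trace reduces to
# one RPC round-trip plus a jq filter. A minimal manual equivalent, assuming the
# same test socket (this sketch is not part of the captured run):
rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
$rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
  | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # prints "online" once both base bdevs are discovered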
00:14:16.704 "zcopy": false, 00:14:16.704 "get_zone_info": false, 00:14:16.704 "zone_management": false, 00:14:16.704 "zone_append": false, 00:14:16.704 "compare": false, 00:14:16.704 "compare_and_write": false, 00:14:16.704 "abort": false, 00:14:16.704 "seek_hole": false, 00:14:16.704 "seek_data": false, 00:14:16.704 "copy": false, 00:14:16.704 "nvme_iov_md": false 00:14:16.704 }, 00:14:16.704 "memory_domains": [ 00:14:16.704 { 00:14:16.704 "dma_device_id": "system", 00:14:16.704 "dma_device_type": 1 00:14:16.704 }, 00:14:16.704 { 00:14:16.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.704 "dma_device_type": 2 00:14:16.704 }, 00:14:16.704 { 00:14:16.704 "dma_device_id": "system", 00:14:16.704 "dma_device_type": 1 00:14:16.704 }, 00:14:16.704 { 00:14:16.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.704 "dma_device_type": 2 00:14:16.704 } 00:14:16.704 ], 00:14:16.704 "driver_specific": { 00:14:16.704 "raid": { 00:14:16.704 "uuid": "d7b733a2-5d4a-4f00-806a-2e13dff7424d", 00:14:16.704 "strip_size_kb": 64, 00:14:16.704 "state": "online", 00:14:16.704 "raid_level": "raid0", 00:14:16.704 "superblock": true, 00:14:16.704 "num_base_bdevs": 2, 00:14:16.704 "num_base_bdevs_discovered": 2, 00:14:16.704 "num_base_bdevs_operational": 2, 00:14:16.704 "base_bdevs_list": [ 00:14:16.704 { 00:14:16.704 "name": "BaseBdev1", 00:14:16.704 "uuid": "8abd9cb6-7123-41f7-802b-eaca82b5d3c3", 00:14:16.704 "is_configured": true, 00:14:16.704 "data_offset": 2048, 00:14:16.704 "data_size": 63488 00:14:16.704 }, 00:14:16.704 { 00:14:16.704 "name": "BaseBdev2", 00:14:16.704 "uuid": "5fc54c82-9db4-40d3-af53-85e7f16b73d0", 00:14:16.704 "is_configured": true, 00:14:16.704 "data_offset": 2048, 00:14:16.704 "data_size": 63488 00:14:16.704 } 00:14:16.704 ] 00:14:16.704 } 00:14:16.704 } 00:14:16.704 }' 00:14:16.704 02:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:16.704 02:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:16.704 BaseBdev2' 00:14:16.704 02:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:16.704 02:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:16.704 02:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:16.963 02:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:16.963 "name": "BaseBdev1", 00:14:16.963 "aliases": [ 00:14:16.963 "8abd9cb6-7123-41f7-802b-eaca82b5d3c3" 00:14:16.963 ], 00:14:16.963 "product_name": "Malloc disk", 00:14:16.963 "block_size": 512, 00:14:16.963 "num_blocks": 65536, 00:14:16.963 "uuid": "8abd9cb6-7123-41f7-802b-eaca82b5d3c3", 00:14:16.963 "assigned_rate_limits": { 00:14:16.963 "rw_ios_per_sec": 0, 00:14:16.963 "rw_mbytes_per_sec": 0, 00:14:16.963 "r_mbytes_per_sec": 0, 00:14:16.963 "w_mbytes_per_sec": 0 00:14:16.963 }, 00:14:16.963 "claimed": true, 00:14:16.963 "claim_type": "exclusive_write", 00:14:16.963 "zoned": false, 00:14:16.963 "supported_io_types": { 00:14:16.963 "read": true, 00:14:16.963 "write": true, 00:14:16.963 "unmap": true, 00:14:16.963 "flush": true, 00:14:16.963 "reset": true, 00:14:16.963 "nvme_admin": false, 00:14:16.963 "nvme_io": false, 00:14:16.963 "nvme_io_md": false, 00:14:16.963 
"write_zeroes": true, 00:14:16.963 "zcopy": true, 00:14:16.963 "get_zone_info": false, 00:14:16.963 "zone_management": false, 00:14:16.963 "zone_append": false, 00:14:16.963 "compare": false, 00:14:16.963 "compare_and_write": false, 00:14:16.963 "abort": true, 00:14:16.963 "seek_hole": false, 00:14:16.963 "seek_data": false, 00:14:16.963 "copy": true, 00:14:16.964 "nvme_iov_md": false 00:14:16.964 }, 00:14:16.964 "memory_domains": [ 00:14:16.964 { 00:14:16.964 "dma_device_id": "system", 00:14:16.964 "dma_device_type": 1 00:14:16.964 }, 00:14:16.964 { 00:14:16.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.964 "dma_device_type": 2 00:14:16.964 } 00:14:16.964 ], 00:14:16.964 "driver_specific": {} 00:14:16.964 }' 00:14:16.964 02:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.964 02:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.964 02:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:17.254 02:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:17.254 02:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:17.254 02:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:17.254 02:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:17.254 02:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:17.254 02:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:17.254 02:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:17.254 02:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:17.535 02:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:17.535 02:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:17.535 02:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:17.535 02:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:17.535 02:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:17.535 "name": "BaseBdev2", 00:14:17.535 "aliases": [ 00:14:17.535 "5fc54c82-9db4-40d3-af53-85e7f16b73d0" 00:14:17.535 ], 00:14:17.535 "product_name": "Malloc disk", 00:14:17.535 "block_size": 512, 00:14:17.535 "num_blocks": 65536, 00:14:17.535 "uuid": "5fc54c82-9db4-40d3-af53-85e7f16b73d0", 00:14:17.535 "assigned_rate_limits": { 00:14:17.535 "rw_ios_per_sec": 0, 00:14:17.535 "rw_mbytes_per_sec": 0, 00:14:17.535 "r_mbytes_per_sec": 0, 00:14:17.535 "w_mbytes_per_sec": 0 00:14:17.535 }, 00:14:17.535 "claimed": true, 00:14:17.535 "claim_type": "exclusive_write", 00:14:17.535 "zoned": false, 00:14:17.535 "supported_io_types": { 00:14:17.535 "read": true, 00:14:17.535 "write": true, 00:14:17.535 "unmap": true, 00:14:17.535 "flush": true, 00:14:17.535 "reset": true, 00:14:17.535 "nvme_admin": false, 00:14:17.535 "nvme_io": false, 00:14:17.535 "nvme_io_md": false, 00:14:17.535 "write_zeroes": true, 00:14:17.535 "zcopy": true, 00:14:17.535 "get_zone_info": false, 00:14:17.535 "zone_management": false, 
00:14:17.535 "zone_append": false, 00:14:17.535 "compare": false, 00:14:17.535 "compare_and_write": false, 00:14:17.535 "abort": true, 00:14:17.535 "seek_hole": false, 00:14:17.535 "seek_data": false, 00:14:17.535 "copy": true, 00:14:17.535 "nvme_iov_md": false 00:14:17.535 }, 00:14:17.535 "memory_domains": [ 00:14:17.535 { 00:14:17.535 "dma_device_id": "system", 00:14:17.535 "dma_device_type": 1 00:14:17.535 }, 00:14:17.535 { 00:14:17.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.535 "dma_device_type": 2 00:14:17.535 } 00:14:17.535 ], 00:14:17.535 "driver_specific": {} 00:14:17.535 }' 00:14:17.535 02:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:17.796 02:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:17.796 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:17.796 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:17.796 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:17.796 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:17.796 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:17.796 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:17.796 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:17.796 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:18.056 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:18.056 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:18.056 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:18.316 [2024-07-11 02:20:08.519495] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:18.316 [2024-07-11 02:20:08.519517] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:18.316 [2024-07-11 02:20:08.519547] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:18.316 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:18.316 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:14:18.316 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:18.316 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:14:18.316 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:18.316 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:18.316 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:18.316 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:18.316 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:18.316 02:20:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:18.316 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:18.316 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:18.316 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:18.316 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:18.316 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:18.316 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.316 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.575 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:18.575 "name": "Existed_Raid", 00:14:18.575 "uuid": "d7b733a2-5d4a-4f00-806a-2e13dff7424d", 00:14:18.575 "strip_size_kb": 64, 00:14:18.575 "state": "offline", 00:14:18.575 "raid_level": "raid0", 00:14:18.575 "superblock": true, 00:14:18.575 "num_base_bdevs": 2, 00:14:18.575 "num_base_bdevs_discovered": 1, 00:14:18.575 "num_base_bdevs_operational": 1, 00:14:18.575 "base_bdevs_list": [ 00:14:18.575 { 00:14:18.575 "name": null, 00:14:18.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.575 "is_configured": false, 00:14:18.575 "data_offset": 2048, 00:14:18.575 "data_size": 63488 00:14:18.575 }, 00:14:18.575 { 00:14:18.575 "name": "BaseBdev2", 00:14:18.575 "uuid": "5fc54c82-9db4-40d3-af53-85e7f16b73d0", 00:14:18.575 "is_configured": true, 00:14:18.575 "data_offset": 2048, 00:14:18.575 "data_size": 63488 00:14:18.575 } 00:14:18.575 ] 00:14:18.575 }' 00:14:18.575 02:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:18.575 02:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.143 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:19.144 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:19.144 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.144 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:19.401 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:19.401 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:19.401 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:19.659 [2024-07-11 02:20:09.934720] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:19.659 [2024-07-11 02:20:09.934766] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x24deb70 name Existed_Raid, state offline 00:14:19.659 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:19.659 02:20:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:19.659 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.659 02:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:19.917 02:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:19.917 02:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:19.917 02:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:14:19.917 02:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1892651 00:14:19.917 02:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1892651 ']' 00:14:19.917 02:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 1892651 00:14:19.917 02:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:14:19.917 02:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:19.917 02:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1892651 00:14:19.917 02:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:19.917 02:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:19.917 02:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1892651' 00:14:19.917 killing process with pid 1892651 00:14:19.917 02:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 1892651 00:14:19.917 [2024-07-11 02:20:10.281531] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:19.917 02:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 1892651 00:14:19.917 [2024-07-11 02:20:10.283155] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:20.483 02:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:20.483 00:14:20.483 real 0m11.433s 00:14:20.483 user 0m20.218s 00:14:20.483 sys 0m2.115s 00:14:20.483 02:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:20.483 02:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.483 ************************************ 00:14:20.483 END TEST raid_state_function_test_sb 00:14:20.483 ************************************ 00:14:20.483 02:20:10 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:20.483 02:20:10 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:14:20.483 02:20:10 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:20.483 02:20:10 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:20.483 02:20:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:20.483 ************************************ 00:14:20.483 START TEST raid_superblock_test 00:14:20.483 ************************************ 00:14:20.483 02:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 2 00:14:20.483 02:20:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:14:20.483 02:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:14:20.483 02:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:14:20.483 02:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:14:20.483 02:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:14:20.483 02:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:14:20.483 02:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:14:20.483 02:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:14:20.483 02:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:14:20.483 02:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:14:20.483 02:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:14:20.483 02:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:14:20.483 02:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:14:20.483 02:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:14:20.483 02:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:14:20.483 02:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:14:20.483 02:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1894374 00:14:20.483 02:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1894374 /var/tmp/spdk-raid.sock 00:14:20.483 02:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:20.483 02:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 1894374 ']' 00:14:20.483 02:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:20.483 02:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:20.483 02:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:20.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:20.484 02:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:20.484 02:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.484 [2024-07-11 02:20:10.779975] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
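# For reference: raid_superblock_test drives a dedicated bdev_svc app over its own
# RPC socket. A minimal, untested sketch of that bring-up, using the exact paths
# from this job (the harness additionally polls the socket via waitforlisten):
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
raid_pid=$!
# once the socket accepts RPCs, create the first backing bdev exactly as the test does:
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1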
00:14:20.484 [2024-07-11 02:20:10.780028] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1894374 ] 00:14:20.484 [2024-07-11 02:20:10.900829] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.741 [2024-07-11 02:20:10.953427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.741 [2024-07-11 02:20:11.014912] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:20.741 [2024-07-11 02:20:11.014948] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:21.308 02:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:21.308 02:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:14:21.308 02:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:14:21.308 02:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:21.308 02:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:14:21.308 02:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:14:21.308 02:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:21.308 02:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:21.308 02:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:21.308 02:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:21.308 02:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:21.567 malloc1 00:14:21.567 02:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:21.826 [2024-07-11 02:20:12.120308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:21.826 [2024-07-11 02:20:12.120358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.826 [2024-07-11 02:20:12.120380] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x171dde0 00:14:21.826 [2024-07-11 02:20:12.120393] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.826 [2024-07-11 02:20:12.122049] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.826 [2024-07-11 02:20:12.122078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:21.826 pt1 00:14:21.826 02:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:21.826 02:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:21.826 02:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:14:21.826 02:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:14:21.827 02:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # 
local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:21.827 02:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:21.827 02:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:21.827 02:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:21.827 02:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:22.086 malloc2 00:14:22.086 02:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:22.345 [2024-07-11 02:20:12.626373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:22.345 [2024-07-11 02:20:12.626418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.345 [2024-07-11 02:20:12.626434] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1715380 00:14:22.345 [2024-07-11 02:20:12.626447] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.345 [2024-07-11 02:20:12.627800] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.345 [2024-07-11 02:20:12.627828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:22.345 pt2 00:14:22.345 02:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:22.345 02:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:22.345 02:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:14:22.604 [2024-07-11 02:20:12.875047] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:22.604 [2024-07-11 02:20:12.876175] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:22.604 [2024-07-11 02:20:12.876305] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x171f9e0 00:14:22.604 [2024-07-11 02:20:12.876317] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:22.604 [2024-07-11 02:20:12.876493] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1716a70 00:14:22.604 [2024-07-11 02:20:12.876626] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x171f9e0 00:14:22.604 [2024-07-11 02:20:12.876635] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x171f9e0 00:14:22.604 [2024-07-11 02:20:12.876723] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.604 02:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:22.604 02:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:22.604 02:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:22.604 02:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:22.604 02:20:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:22.604 02:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:22.604 02:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:22.604 02:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:22.604 02:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:22.604 02:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:22.604 02:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.604 02:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.863 02:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:22.863 "name": "raid_bdev1", 00:14:22.863 "uuid": "f4c29bb7-a53b-4566-879f-721e2bb5a3d9", 00:14:22.863 "strip_size_kb": 64, 00:14:22.863 "state": "online", 00:14:22.863 "raid_level": "raid0", 00:14:22.863 "superblock": true, 00:14:22.863 "num_base_bdevs": 2, 00:14:22.863 "num_base_bdevs_discovered": 2, 00:14:22.863 "num_base_bdevs_operational": 2, 00:14:22.863 "base_bdevs_list": [ 00:14:22.863 { 00:14:22.863 "name": "pt1", 00:14:22.863 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:22.863 "is_configured": true, 00:14:22.863 "data_offset": 2048, 00:14:22.863 "data_size": 63488 00:14:22.863 }, 00:14:22.863 { 00:14:22.863 "name": "pt2", 00:14:22.863 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:22.863 "is_configured": true, 00:14:22.863 "data_offset": 2048, 00:14:22.863 "data_size": 63488 00:14:22.863 } 00:14:22.863 ] 00:14:22.863 }' 00:14:22.863 02:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:22.863 02:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.430 02:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:14:23.430 02:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:23.430 02:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:23.430 02:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:23.430 02:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:23.430 02:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:23.430 02:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:23.430 02:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:23.688 [2024-07-11 02:20:13.902117] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:23.688 02:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:23.688 "name": "raid_bdev1", 00:14:23.688 "aliases": [ 00:14:23.688 "f4c29bb7-a53b-4566-879f-721e2bb5a3d9" 00:14:23.688 ], 00:14:23.688 "product_name": "Raid Volume", 00:14:23.688 "block_size": 512, 00:14:23.688 "num_blocks": 126976, 00:14:23.688 "uuid": 
"f4c29bb7-a53b-4566-879f-721e2bb5a3d9", 00:14:23.688 "assigned_rate_limits": { 00:14:23.688 "rw_ios_per_sec": 0, 00:14:23.688 "rw_mbytes_per_sec": 0, 00:14:23.688 "r_mbytes_per_sec": 0, 00:14:23.688 "w_mbytes_per_sec": 0 00:14:23.688 }, 00:14:23.688 "claimed": false, 00:14:23.688 "zoned": false, 00:14:23.688 "supported_io_types": { 00:14:23.688 "read": true, 00:14:23.688 "write": true, 00:14:23.688 "unmap": true, 00:14:23.688 "flush": true, 00:14:23.688 "reset": true, 00:14:23.688 "nvme_admin": false, 00:14:23.688 "nvme_io": false, 00:14:23.688 "nvme_io_md": false, 00:14:23.688 "write_zeroes": true, 00:14:23.688 "zcopy": false, 00:14:23.688 "get_zone_info": false, 00:14:23.688 "zone_management": false, 00:14:23.688 "zone_append": false, 00:14:23.688 "compare": false, 00:14:23.688 "compare_and_write": false, 00:14:23.688 "abort": false, 00:14:23.688 "seek_hole": false, 00:14:23.688 "seek_data": false, 00:14:23.688 "copy": false, 00:14:23.688 "nvme_iov_md": false 00:14:23.688 }, 00:14:23.688 "memory_domains": [ 00:14:23.688 { 00:14:23.688 "dma_device_id": "system", 00:14:23.688 "dma_device_type": 1 00:14:23.688 }, 00:14:23.688 { 00:14:23.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.688 "dma_device_type": 2 00:14:23.688 }, 00:14:23.688 { 00:14:23.688 "dma_device_id": "system", 00:14:23.688 "dma_device_type": 1 00:14:23.688 }, 00:14:23.688 { 00:14:23.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.688 "dma_device_type": 2 00:14:23.688 } 00:14:23.688 ], 00:14:23.688 "driver_specific": { 00:14:23.688 "raid": { 00:14:23.688 "uuid": "f4c29bb7-a53b-4566-879f-721e2bb5a3d9", 00:14:23.688 "strip_size_kb": 64, 00:14:23.688 "state": "online", 00:14:23.688 "raid_level": "raid0", 00:14:23.688 "superblock": true, 00:14:23.688 "num_base_bdevs": 2, 00:14:23.688 "num_base_bdevs_discovered": 2, 00:14:23.688 "num_base_bdevs_operational": 2, 00:14:23.688 "base_bdevs_list": [ 00:14:23.688 { 00:14:23.688 "name": "pt1", 00:14:23.688 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:23.688 "is_configured": true, 00:14:23.688 "data_offset": 2048, 00:14:23.688 "data_size": 63488 00:14:23.688 }, 00:14:23.688 { 00:14:23.688 "name": "pt2", 00:14:23.688 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:23.688 "is_configured": true, 00:14:23.688 "data_offset": 2048, 00:14:23.688 "data_size": 63488 00:14:23.688 } 00:14:23.688 ] 00:14:23.688 } 00:14:23.688 } 00:14:23.688 }' 00:14:23.688 02:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:23.689 02:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:23.689 pt2' 00:14:23.689 02:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:23.689 02:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:23.689 02:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:23.947 02:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:23.947 "name": "pt1", 00:14:23.947 "aliases": [ 00:14:23.947 "00000000-0000-0000-0000-000000000001" 00:14:23.947 ], 00:14:23.947 "product_name": "passthru", 00:14:23.947 "block_size": 512, 00:14:23.947 "num_blocks": 65536, 00:14:23.947 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:23.947 "assigned_rate_limits": { 00:14:23.947 
"rw_ios_per_sec": 0, 00:14:23.947 "rw_mbytes_per_sec": 0, 00:14:23.947 "r_mbytes_per_sec": 0, 00:14:23.947 "w_mbytes_per_sec": 0 00:14:23.947 }, 00:14:23.947 "claimed": true, 00:14:23.947 "claim_type": "exclusive_write", 00:14:23.947 "zoned": false, 00:14:23.947 "supported_io_types": { 00:14:23.947 "read": true, 00:14:23.947 "write": true, 00:14:23.947 "unmap": true, 00:14:23.947 "flush": true, 00:14:23.947 "reset": true, 00:14:23.947 "nvme_admin": false, 00:14:23.947 "nvme_io": false, 00:14:23.947 "nvme_io_md": false, 00:14:23.947 "write_zeroes": true, 00:14:23.947 "zcopy": true, 00:14:23.947 "get_zone_info": false, 00:14:23.947 "zone_management": false, 00:14:23.947 "zone_append": false, 00:14:23.947 "compare": false, 00:14:23.947 "compare_and_write": false, 00:14:23.947 "abort": true, 00:14:23.947 "seek_hole": false, 00:14:23.947 "seek_data": false, 00:14:23.947 "copy": true, 00:14:23.947 "nvme_iov_md": false 00:14:23.947 }, 00:14:23.947 "memory_domains": [ 00:14:23.947 { 00:14:23.947 "dma_device_id": "system", 00:14:23.947 "dma_device_type": 1 00:14:23.947 }, 00:14:23.947 { 00:14:23.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.947 "dma_device_type": 2 00:14:23.947 } 00:14:23.947 ], 00:14:23.947 "driver_specific": { 00:14:23.947 "passthru": { 00:14:23.947 "name": "pt1", 00:14:23.947 "base_bdev_name": "malloc1" 00:14:23.947 } 00:14:23.947 } 00:14:23.947 }' 00:14:23.947 02:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.947 02:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.947 02:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:23.947 02:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:23.947 02:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.207 02:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:24.207 02:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.207 02:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.207 02:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:24.207 02:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.207 02:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.207 02:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:24.207 02:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:24.207 02:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:24.207 02:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:24.466 02:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:24.466 "name": "pt2", 00:14:24.466 "aliases": [ 00:14:24.466 "00000000-0000-0000-0000-000000000002" 00:14:24.466 ], 00:14:24.466 "product_name": "passthru", 00:14:24.466 "block_size": 512, 00:14:24.466 "num_blocks": 65536, 00:14:24.466 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:24.466 "assigned_rate_limits": { 00:14:24.466 "rw_ios_per_sec": 0, 00:14:24.466 "rw_mbytes_per_sec": 0, 00:14:24.466 "r_mbytes_per_sec": 0, 00:14:24.466 "w_mbytes_per_sec": 0 
00:14:24.466 }, 00:14:24.466 "claimed": true, 00:14:24.466 "claim_type": "exclusive_write", 00:14:24.466 "zoned": false, 00:14:24.466 "supported_io_types": { 00:14:24.466 "read": true, 00:14:24.466 "write": true, 00:14:24.466 "unmap": true, 00:14:24.466 "flush": true, 00:14:24.466 "reset": true, 00:14:24.466 "nvme_admin": false, 00:14:24.466 "nvme_io": false, 00:14:24.466 "nvme_io_md": false, 00:14:24.466 "write_zeroes": true, 00:14:24.466 "zcopy": true, 00:14:24.466 "get_zone_info": false, 00:14:24.466 "zone_management": false, 00:14:24.466 "zone_append": false, 00:14:24.466 "compare": false, 00:14:24.466 "compare_and_write": false, 00:14:24.466 "abort": true, 00:14:24.466 "seek_hole": false, 00:14:24.466 "seek_data": false, 00:14:24.466 "copy": true, 00:14:24.466 "nvme_iov_md": false 00:14:24.466 }, 00:14:24.466 "memory_domains": [ 00:14:24.466 { 00:14:24.466 "dma_device_id": "system", 00:14:24.466 "dma_device_type": 1 00:14:24.466 }, 00:14:24.466 { 00:14:24.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.466 "dma_device_type": 2 00:14:24.466 } 00:14:24.466 ], 00:14:24.466 "driver_specific": { 00:14:24.466 "passthru": { 00:14:24.466 "name": "pt2", 00:14:24.466 "base_bdev_name": "malloc2" 00:14:24.466 } 00:14:24.466 } 00:14:24.466 }' 00:14:24.466 02:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.466 02:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.725 02:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:24.725 02:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.725 02:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.725 02:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:24.725 02:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.725 02:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.725 02:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:24.725 02:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.725 02:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.984 02:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:24.984 02:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:24.984 02:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:14:24.984 [2024-07-11 02:20:15.382046] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:24.984 02:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=f4c29bb7-a53b-4566-879f-721e2bb5a3d9 00:14:24.984 02:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z f4c29bb7-a53b-4566-879f-721e2bb5a3d9 ']' 00:14:24.984 02:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:25.243 [2024-07-11 02:20:15.626441] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:25.243 [2024-07-11 02:20:15.626464] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:14:25.243 [2024-07-11 02:20:15.626520] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.243 [2024-07-11 02:20:15.626564] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:25.243 [2024-07-11 02:20:15.626575] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x171f9e0 name raid_bdev1, state offline 00:14:25.243 02:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.243 02:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:14:25.501 02:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:14:25.501 02:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:14:25.501 02:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:25.501 02:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:25.760 02:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:25.761 02:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:26.020 02:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:26.020 02:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:26.279 02:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:14:26.279 02:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:26.279 02:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:14:26.279 02:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:26.279 02:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:26.279 02:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.279 02:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:26.279 02:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.279 02:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:26.279 02:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.279 02:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:26.279 02:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:14:26.279 02:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:26.538 [2024-07-11 02:20:16.845627] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:26.538 [2024-07-11 02:20:16.846937] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:26.538 [2024-07-11 02:20:16.846990] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:26.538 [2024-07-11 02:20:16.847030] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:26.538 [2024-07-11 02:20:16.847049] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:26.538 [2024-07-11 02:20:16.847060] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1718450 name raid_bdev1, state configuring 00:14:26.538 request: 00:14:26.538 { 00:14:26.538 "name": "raid_bdev1", 00:14:26.538 "raid_level": "raid0", 00:14:26.538 "base_bdevs": [ 00:14:26.538 "malloc1", 00:14:26.538 "malloc2" 00:14:26.538 ], 00:14:26.538 "strip_size_kb": 64, 00:14:26.538 "superblock": false, 00:14:26.538 "method": "bdev_raid_create", 00:14:26.538 "req_id": 1 00:14:26.538 } 00:14:26.538 Got JSON-RPC error response 00:14:26.538 response: 00:14:26.538 { 00:14:26.538 "code": -17, 00:14:26.538 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:26.538 } 00:14:26.538 02:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:14:26.539 02:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:26.539 02:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:26.539 02:20:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:26.539 02:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.539 02:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:14:26.797 02:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:14:26.797 02:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:14:26.797 02:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:27.081 [2024-07-11 02:20:17.334838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:27.081 [2024-07-11 02:20:17.334883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.081 [2024-07-11 02:20:17.334905] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1716bd0 00:14:27.081 [2024-07-11 02:20:17.334917] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.081 [2024-07-11 02:20:17.336518] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.081 [2024-07-11 02:20:17.336548] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:27.081 [2024-07-11 02:20:17.336622] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:27.081 [2024-07-11 02:20:17.336651] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:27.081 pt1 00:14:27.081 02:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:14:27.082 02:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:27.082 02:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:27.082 02:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:27.082 02:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:27.082 02:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:27.082 02:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:27.082 02:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:27.082 02:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:27.082 02:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:27.082 02:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.082 02:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.340 02:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:27.340 "name": "raid_bdev1", 00:14:27.340 "uuid": "f4c29bb7-a53b-4566-879f-721e2bb5a3d9", 00:14:27.340 "strip_size_kb": 64, 00:14:27.340 "state": "configuring", 00:14:27.340 "raid_level": "raid0", 00:14:27.340 "superblock": true, 00:14:27.340 "num_base_bdevs": 2, 00:14:27.340 "num_base_bdevs_discovered": 1, 00:14:27.340 "num_base_bdevs_operational": 2, 00:14:27.340 "base_bdevs_list": [ 00:14:27.340 { 00:14:27.340 "name": "pt1", 00:14:27.340 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:27.340 "is_configured": true, 00:14:27.340 "data_offset": 2048, 00:14:27.340 "data_size": 63488 00:14:27.340 }, 00:14:27.340 { 00:14:27.340 "name": null, 00:14:27.340 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:27.340 "is_configured": false, 00:14:27.340 "data_offset": 2048, 00:14:27.340 "data_size": 63488 00:14:27.340 } 00:14:27.340 ] 00:14:27.340 }' 00:14:27.340 02:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:27.340 02:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.908 02:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:14:27.908 02:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:14:27.908 02:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:27.908 02:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:28.168 [2024-07-11 02:20:18.429810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:28.168 [2024-07-11 02:20:18.429859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.168 [2024-07-11 02:20:18.429881] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x171a270 00:14:28.168 [2024-07-11 02:20:18.429894] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.168 [2024-07-11 02:20:18.430228] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.168 [2024-07-11 02:20:18.430250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:28.168 [2024-07-11 02:20:18.430311] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:28.168 [2024-07-11 02:20:18.430332] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:28.168 [2024-07-11 02:20:18.430426] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x156a900 00:14:28.168 [2024-07-11 02:20:18.430436] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:28.168 [2024-07-11 02:20:18.430607] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1603360 00:14:28.168 [2024-07-11 02:20:18.430727] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x156a900 00:14:28.168 [2024-07-11 02:20:18.430736] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x156a900 00:14:28.168 [2024-07-11 02:20:18.430844] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.168 pt2 00:14:28.168 02:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:28.168 02:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:28.168 02:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:28.168 02:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:28.168 02:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:28.168 02:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:28.168 02:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:28.168 02:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:28.168 02:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:28.168 02:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:28.168 02:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:28.168 02:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:28.168 02:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.168 02:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.427 02:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:14:28.427 "name": "raid_bdev1", 00:14:28.427 "uuid": "f4c29bb7-a53b-4566-879f-721e2bb5a3d9", 00:14:28.427 "strip_size_kb": 64, 00:14:28.427 "state": "online", 00:14:28.427 "raid_level": "raid0", 00:14:28.427 "superblock": true, 00:14:28.427 "num_base_bdevs": 2, 00:14:28.427 "num_base_bdevs_discovered": 2, 00:14:28.427 "num_base_bdevs_operational": 2, 00:14:28.427 "base_bdevs_list": [ 00:14:28.427 { 00:14:28.427 "name": "pt1", 00:14:28.427 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:28.427 "is_configured": true, 00:14:28.427 "data_offset": 2048, 00:14:28.427 "data_size": 63488 00:14:28.427 }, 00:14:28.427 { 00:14:28.427 "name": "pt2", 00:14:28.427 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:28.427 "is_configured": true, 00:14:28.427 "data_offset": 2048, 00:14:28.427 "data_size": 63488 00:14:28.427 } 00:14:28.427 ] 00:14:28.427 }' 00:14:28.427 02:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:28.427 02:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.994 02:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:14:28.994 02:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:28.994 02:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:28.994 02:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:28.994 02:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:28.994 02:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:28.994 02:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:28.994 02:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:29.253 [2024-07-11 02:20:19.524977] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.253 02:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:29.253 "name": "raid_bdev1", 00:14:29.253 "aliases": [ 00:14:29.253 "f4c29bb7-a53b-4566-879f-721e2bb5a3d9" 00:14:29.253 ], 00:14:29.253 "product_name": "Raid Volume", 00:14:29.253 "block_size": 512, 00:14:29.253 "num_blocks": 126976, 00:14:29.253 "uuid": "f4c29bb7-a53b-4566-879f-721e2bb5a3d9", 00:14:29.253 "assigned_rate_limits": { 00:14:29.253 "rw_ios_per_sec": 0, 00:14:29.253 "rw_mbytes_per_sec": 0, 00:14:29.253 "r_mbytes_per_sec": 0, 00:14:29.253 "w_mbytes_per_sec": 0 00:14:29.253 }, 00:14:29.253 "claimed": false, 00:14:29.253 "zoned": false, 00:14:29.253 "supported_io_types": { 00:14:29.253 "read": true, 00:14:29.253 "write": true, 00:14:29.253 "unmap": true, 00:14:29.253 "flush": true, 00:14:29.253 "reset": true, 00:14:29.253 "nvme_admin": false, 00:14:29.253 "nvme_io": false, 00:14:29.253 "nvme_io_md": false, 00:14:29.253 "write_zeroes": true, 00:14:29.253 "zcopy": false, 00:14:29.253 "get_zone_info": false, 00:14:29.253 "zone_management": false, 00:14:29.253 "zone_append": false, 00:14:29.253 "compare": false, 00:14:29.253 "compare_and_write": false, 00:14:29.253 "abort": false, 00:14:29.253 "seek_hole": false, 00:14:29.253 "seek_data": false, 00:14:29.253 "copy": false, 00:14:29.253 "nvme_iov_md": false 00:14:29.253 }, 00:14:29.253 "memory_domains": [ 00:14:29.253 { 00:14:29.253 "dma_device_id": 
"system", 00:14:29.253 "dma_device_type": 1 00:14:29.253 }, 00:14:29.253 { 00:14:29.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.253 "dma_device_type": 2 00:14:29.253 }, 00:14:29.253 { 00:14:29.253 "dma_device_id": "system", 00:14:29.253 "dma_device_type": 1 00:14:29.253 }, 00:14:29.253 { 00:14:29.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.253 "dma_device_type": 2 00:14:29.253 } 00:14:29.253 ], 00:14:29.253 "driver_specific": { 00:14:29.253 "raid": { 00:14:29.253 "uuid": "f4c29bb7-a53b-4566-879f-721e2bb5a3d9", 00:14:29.253 "strip_size_kb": 64, 00:14:29.253 "state": "online", 00:14:29.253 "raid_level": "raid0", 00:14:29.253 "superblock": true, 00:14:29.253 "num_base_bdevs": 2, 00:14:29.253 "num_base_bdevs_discovered": 2, 00:14:29.253 "num_base_bdevs_operational": 2, 00:14:29.253 "base_bdevs_list": [ 00:14:29.253 { 00:14:29.253 "name": "pt1", 00:14:29.253 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:29.253 "is_configured": true, 00:14:29.253 "data_offset": 2048, 00:14:29.253 "data_size": 63488 00:14:29.253 }, 00:14:29.253 { 00:14:29.253 "name": "pt2", 00:14:29.253 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:29.253 "is_configured": true, 00:14:29.253 "data_offset": 2048, 00:14:29.253 "data_size": 63488 00:14:29.253 } 00:14:29.253 ] 00:14:29.253 } 00:14:29.253 } 00:14:29.253 }' 00:14:29.253 02:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:29.253 02:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:29.253 pt2' 00:14:29.253 02:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:29.253 02:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:29.253 02:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:29.512 02:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:29.512 "name": "pt1", 00:14:29.512 "aliases": [ 00:14:29.512 "00000000-0000-0000-0000-000000000001" 00:14:29.512 ], 00:14:29.512 "product_name": "passthru", 00:14:29.512 "block_size": 512, 00:14:29.512 "num_blocks": 65536, 00:14:29.512 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:29.512 "assigned_rate_limits": { 00:14:29.512 "rw_ios_per_sec": 0, 00:14:29.512 "rw_mbytes_per_sec": 0, 00:14:29.512 "r_mbytes_per_sec": 0, 00:14:29.512 "w_mbytes_per_sec": 0 00:14:29.512 }, 00:14:29.512 "claimed": true, 00:14:29.512 "claim_type": "exclusive_write", 00:14:29.512 "zoned": false, 00:14:29.512 "supported_io_types": { 00:14:29.512 "read": true, 00:14:29.512 "write": true, 00:14:29.512 "unmap": true, 00:14:29.513 "flush": true, 00:14:29.513 "reset": true, 00:14:29.513 "nvme_admin": false, 00:14:29.513 "nvme_io": false, 00:14:29.513 "nvme_io_md": false, 00:14:29.513 "write_zeroes": true, 00:14:29.513 "zcopy": true, 00:14:29.513 "get_zone_info": false, 00:14:29.513 "zone_management": false, 00:14:29.513 "zone_append": false, 00:14:29.513 "compare": false, 00:14:29.513 "compare_and_write": false, 00:14:29.513 "abort": true, 00:14:29.513 "seek_hole": false, 00:14:29.513 "seek_data": false, 00:14:29.513 "copy": true, 00:14:29.513 "nvme_iov_md": false 00:14:29.513 }, 00:14:29.513 "memory_domains": [ 00:14:29.513 { 00:14:29.513 "dma_device_id": "system", 00:14:29.513 "dma_device_type": 1 00:14:29.513 }, 
00:14:29.513 { 00:14:29.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.513 "dma_device_type": 2 00:14:29.513 } 00:14:29.513 ], 00:14:29.513 "driver_specific": { 00:14:29.513 "passthru": { 00:14:29.513 "name": "pt1", 00:14:29.513 "base_bdev_name": "malloc1" 00:14:29.513 } 00:14:29.513 } 00:14:29.513 }' 00:14:29.513 02:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:29.513 02:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:29.513 02:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:29.513 02:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:29.771 02:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:29.771 02:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:29.771 02:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:29.771 02:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:29.771 02:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:29.771 02:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:29.771 02:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.031 02:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:30.031 02:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:30.031 02:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:30.031 02:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:30.031 02:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:30.031 "name": "pt2", 00:14:30.031 "aliases": [ 00:14:30.031 "00000000-0000-0000-0000-000000000002" 00:14:30.031 ], 00:14:30.031 "product_name": "passthru", 00:14:30.031 "block_size": 512, 00:14:30.031 "num_blocks": 65536, 00:14:30.031 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:30.031 "assigned_rate_limits": { 00:14:30.031 "rw_ios_per_sec": 0, 00:14:30.031 "rw_mbytes_per_sec": 0, 00:14:30.031 "r_mbytes_per_sec": 0, 00:14:30.031 "w_mbytes_per_sec": 0 00:14:30.031 }, 00:14:30.031 "claimed": true, 00:14:30.031 "claim_type": "exclusive_write", 00:14:30.031 "zoned": false, 00:14:30.031 "supported_io_types": { 00:14:30.031 "read": true, 00:14:30.031 "write": true, 00:14:30.031 "unmap": true, 00:14:30.031 "flush": true, 00:14:30.031 "reset": true, 00:14:30.031 "nvme_admin": false, 00:14:30.031 "nvme_io": false, 00:14:30.031 "nvme_io_md": false, 00:14:30.031 "write_zeroes": true, 00:14:30.031 "zcopy": true, 00:14:30.031 "get_zone_info": false, 00:14:30.031 "zone_management": false, 00:14:30.031 "zone_append": false, 00:14:30.031 "compare": false, 00:14:30.031 "compare_and_write": false, 00:14:30.031 "abort": true, 00:14:30.031 "seek_hole": false, 00:14:30.031 "seek_data": false, 00:14:30.031 "copy": true, 00:14:30.031 "nvme_iov_md": false 00:14:30.031 }, 00:14:30.031 "memory_domains": [ 00:14:30.031 { 00:14:30.031 "dma_device_id": "system", 00:14:30.031 "dma_device_type": 1 00:14:30.031 }, 00:14:30.031 { 00:14:30.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.031 "dma_device_type": 2 00:14:30.031 } 00:14:30.031 ], 
00:14:30.031 "driver_specific": { 00:14:30.031 "passthru": { 00:14:30.031 "name": "pt2", 00:14:30.031 "base_bdev_name": "malloc2" 00:14:30.031 } 00:14:30.031 } 00:14:30.031 }' 00:14:30.031 02:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:30.290 02:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:30.290 02:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:30.290 02:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:30.290 02:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:30.290 02:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:30.290 02:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:30.290 02:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:30.290 02:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:30.290 02:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.549 02:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.549 02:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:30.549 02:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:30.549 02:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:14:30.808 [2024-07-11 02:20:20.996935] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:30.808 02:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' f4c29bb7-a53b-4566-879f-721e2bb5a3d9 '!=' f4c29bb7-a53b-4566-879f-721e2bb5a3d9 ']' 00:14:30.808 02:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:14:30.808 02:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:30.808 02:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:30.808 02:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1894374 00:14:30.808 02:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 1894374 ']' 00:14:30.808 02:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 1894374 00:14:30.808 02:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:14:30.808 02:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:30.808 02:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1894374 00:14:30.808 02:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:30.808 02:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:30.808 02:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1894374' 00:14:30.808 killing process with pid 1894374 00:14:30.808 02:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 1894374 00:14:30.808 [2024-07-11 02:20:21.060637] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:30.808 [2024-07-11 
02:20:21.060701] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:30.808 [2024-07-11 02:20:21.060746] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:30.808 [2024-07-11 02:20:21.060764] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x156a900 name raid_bdev1, state offline 00:14:30.808 02:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 1894374 00:14:30.808 [2024-07-11 02:20:21.079594] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:31.068 02:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:14:31.068 00:14:31.068 real 0m10.563s 00:14:31.068 user 0m18.839s 00:14:31.068 sys 0m1.973s 00:14:31.068 02:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:31.068 02:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.068 ************************************ 00:14:31.068 END TEST raid_superblock_test 00:14:31.068 ************************************ 00:14:31.068 02:20:21 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:31.068 02:20:21 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:14:31.068 02:20:21 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:31.068 02:20:21 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:31.068 02:20:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:31.068 ************************************ 00:14:31.068 START TEST raid_read_error_test 00:14:31.068 ************************************ 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 read 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:31.068 02:20:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.OASTuFpOS2 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1895909 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1895909 /var/tmp/spdk-raid.sock 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 1895909 ']' 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:31.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.068 02:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.068 [2024-07-11 02:20:21.446832] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
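
The superblock test that finishes above drives everything through rpc.py against the harness socket: the duplicate-superblock create attempt is rejected with JSON-RPC error -17 ("File exists"), the array is then rebuilt member by member via bdev_passthru_create, and each transition is checked by filtering bdev_raid_get_bdevs output through jq. A minimal standalone sketch of that RPC flow follows; it assumes an SPDK target already listening on /var/tmp/spdk-raid.sock, and the bdev_malloc_create sizes are assumptions, since this trace does not show how malloc1 and malloc2 were created:

    #!/usr/bin/env bash
    # Sketch only: replays the raid0 create/verify flow traced above.
    rpc="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Base bdevs comparable to malloc1/malloc2 (64 MiB / 512 B blocks; sizes assumed).
    $rpc bdev_malloc_create -b malloc1 64 512
    $rpc bdev_malloc_create -b malloc2 64 512

    # If the base bdevs already carry a foreign raid superblock, creation fails
    # with code -17 "File exists", exactly as in the trace above.
    $rpc bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 \
        || echo "bdev_raid_create rejected as expected"

    # Verify state the way verify_raid_bdev_state does: dump and filter with jq.
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'
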
00:14:31.068 [2024-07-11 02:20:21.446897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1895909 ] 00:14:31.327 [2024-07-11 02:20:21.584662] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.327 [2024-07-11 02:20:21.637514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.327 [2024-07-11 02:20:21.706951] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:31.327 [2024-07-11 02:20:21.706980] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:31.585 02:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:31.585 02:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:14:31.585 02:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:31.585 02:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:31.844 BaseBdev1_malloc 00:14:31.844 02:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:32.103 true 00:14:32.103 02:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:32.362 [2024-07-11 02:20:22.648734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:32.362 [2024-07-11 02:20:22.648781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.362 [2024-07-11 02:20:22.648801] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb31330 00:14:32.362 [2024-07-11 02:20:22.648814] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.362 [2024-07-11 02:20:22.650472] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.362 [2024-07-11 02:20:22.650500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:32.362 BaseBdev1 00:14:32.362 02:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:32.362 02:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:32.621 BaseBdev2_malloc 00:14:32.621 02:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:32.880 true 00:14:32.880 02:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:33.139 [2024-07-11 02:20:23.396407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:33.139 [2024-07-11 02:20:23.396449] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.139 [2024-07-11 02:20:23.396470] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb2ab40 00:14:33.139 [2024-07-11 02:20:23.396482] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.139 [2024-07-11 02:20:23.398050] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.139 [2024-07-11 02:20:23.398078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:33.139 BaseBdev2 00:14:33.139 02:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:14:33.397 [2024-07-11 02:20:23.633066] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:33.397 [2024-07-11 02:20:23.634409] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:33.397 [2024-07-11 02:20:23.634590] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xb2bd50 00:14:33.397 [2024-07-11 02:20:23.634603] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:33.397 [2024-07-11 02:20:23.634811] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb2a150 00:14:33.397 [2024-07-11 02:20:23.634955] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb2bd50 00:14:33.397 [2024-07-11 02:20:23.634965] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xb2bd50 00:14:33.397 [2024-07-11 02:20:23.635070] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.397 02:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:33.397 02:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:33.397 02:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:33.397 02:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:33.397 02:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:33.397 02:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:33.397 02:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:33.397 02:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:33.397 02:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:33.397 02:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:33.397 02:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:33.397 02:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.655 02:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:33.655 "name": "raid_bdev1", 00:14:33.655 "uuid": "b523a2f2-149a-41cc-878d-835ffb1ea350", 00:14:33.655 "strip_size_kb": 64, 00:14:33.655 "state": "online", 00:14:33.655 "raid_level": "raid0", 
00:14:33.655 "superblock": true, 00:14:33.655 "num_base_bdevs": 2, 00:14:33.655 "num_base_bdevs_discovered": 2, 00:14:33.655 "num_base_bdevs_operational": 2, 00:14:33.655 "base_bdevs_list": [ 00:14:33.655 { 00:14:33.655 "name": "BaseBdev1", 00:14:33.655 "uuid": "3fa6a4a0-3593-5aa9-8007-71b90afa60c6", 00:14:33.655 "is_configured": true, 00:14:33.655 "data_offset": 2048, 00:14:33.655 "data_size": 63488 00:14:33.655 }, 00:14:33.655 { 00:14:33.655 "name": "BaseBdev2", 00:14:33.655 "uuid": "bae2ad08-e107-5750-9724-69eaa6d93d62", 00:14:33.655 "is_configured": true, 00:14:33.655 "data_offset": 2048, 00:14:33.655 "data_size": 63488 00:14:33.655 } 00:14:33.655 ] 00:14:33.655 }' 00:14:33.655 02:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:33.655 02:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.222 02:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:34.222 02:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:34.222 [2024-07-11 02:20:24.623944] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb2b7e0 00:14:35.156 02:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:35.416 02:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:35.416 02:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:35.416 02:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:14:35.416 02:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:35.416 02:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:35.416 02:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:35.416 02:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:35.416 02:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:35.416 02:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:35.416 02:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:35.416 02:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:35.416 02:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:35.416 02:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:35.416 02:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:35.416 02:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.675 02:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:35.675 "name": "raid_bdev1", 00:14:35.675 "uuid": "b523a2f2-149a-41cc-878d-835ffb1ea350", 00:14:35.675 "strip_size_kb": 64, 00:14:35.675 "state": "online", 00:14:35.675 
"raid_level": "raid0", 00:14:35.675 "superblock": true, 00:14:35.675 "num_base_bdevs": 2, 00:14:35.675 "num_base_bdevs_discovered": 2, 00:14:35.675 "num_base_bdevs_operational": 2, 00:14:35.675 "base_bdevs_list": [ 00:14:35.675 { 00:14:35.675 "name": "BaseBdev1", 00:14:35.675 "uuid": "3fa6a4a0-3593-5aa9-8007-71b90afa60c6", 00:14:35.675 "is_configured": true, 00:14:35.675 "data_offset": 2048, 00:14:35.675 "data_size": 63488 00:14:35.675 }, 00:14:35.675 { 00:14:35.675 "name": "BaseBdev2", 00:14:35.675 "uuid": "bae2ad08-e107-5750-9724-69eaa6d93d62", 00:14:35.675 "is_configured": true, 00:14:35.675 "data_offset": 2048, 00:14:35.675 "data_size": 63488 00:14:35.675 } 00:14:35.675 ] 00:14:35.675 }' 00:14:35.675 02:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:35.675 02:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.241 02:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:36.500 [2024-07-11 02:20:26.849211] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:36.500 [2024-07-11 02:20:26.849241] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:36.500 [2024-07-11 02:20:26.852389] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:36.500 [2024-07-11 02:20:26.852420] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.500 [2024-07-11 02:20:26.852449] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:36.500 [2024-07-11 02:20:26.852461] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb2bd50 name raid_bdev1, state offline 00:14:36.500 0 00:14:36.500 02:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1895909 00:14:36.500 02:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 1895909 ']' 00:14:36.500 02:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 1895909 00:14:36.500 02:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:14:36.500 02:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:36.500 02:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1895909 00:14:36.759 02:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:36.759 02:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:36.759 02:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1895909' 00:14:36.759 killing process with pid 1895909 00:14:36.759 02:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 1895909 00:14:36.759 [2024-07-11 02:20:26.930728] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:36.759 02:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 1895909 00:14:36.759 [2024-07-11 02:20:26.941672] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:36.759 02:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.OASTuFpOS2 00:14:36.759 02:20:27 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:36.759 02:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:36.759 02:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:14:36.759 02:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:14:36.759 02:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:36.759 02:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:36.759 02:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:14:36.759 00:14:36.759 real 0m5.787s 00:14:36.759 user 0m9.393s 00:14:36.759 sys 0m1.082s 00:14:36.759 02:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:36.759 02:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.759 ************************************ 00:14:36.759 END TEST raid_read_error_test 00:14:36.759 ************************************ 00:14:37.018 02:20:27 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:37.018 02:20:27 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:14:37.018 02:20:27 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:37.018 02:20:27 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:37.018 02:20:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:37.018 ************************************ 00:14:37.018 START TEST raid_write_error_test 00:14:37.018 ************************************ 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 write 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.KZOJlKeuiB 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1896872 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1896872 /var/tmp/spdk-raid.sock 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 1896872 ']' 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:37.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:37.018 02:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.018 [2024-07-11 02:20:27.325417] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
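
Both I/O-error tests build the same stack inside bdevperf before injecting faults: each raid member is a malloc bdev wrapped by an error-injection bdev (exposed as EE_<name>) and then by a passthru bdev, so failures can be injected underneath the array without the raid layer knowing. A condensed sketch for one member, using only RPC calls that appear in this trace (socket path and bdev names match the log):

    # Sketch of the error-injection stack from raid_read_error_test / raid_write_error_test.
    rpc="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # One member: malloc -> error bdev (EE_BaseBdev1_malloc) -> passthru (BaseBdev1).
    $rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc
    $rpc bdev_error_create BaseBdev1_malloc
    $rpc bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1

    # (BaseBdev2 is built the same way.) Assemble raid0 with an on-disk superblock (-s).
    $rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s

    # Inject failures on the hidden error bdev: 'read' here, 'write' in the second test.
    $rpc bdev_error_inject_error EE_BaseBdev1_malloc read failure

Because raid0 carries no redundancy, the injected errors must surface as failed I/O in the bdevperf results, which is what the "0.45 != 0.00" assertion at the end of each test checks.
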
00:14:37.018 [2024-07-11 02:20:27.325490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1896872 ] 00:14:37.277 [2024-07-11 02:20:27.466148] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.277 [2024-07-11 02:20:27.518709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.277 [2024-07-11 02:20:27.578054] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:37.277 [2024-07-11 02:20:27.578082] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:37.843 02:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:37.843 02:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:14:37.843 02:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:37.843 02:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:38.102 BaseBdev1_malloc 00:14:38.102 02:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:38.360 true 00:14:38.360 02:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:38.618 [2024-07-11 02:20:28.920367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:38.618 [2024-07-11 02:20:28.920411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.618 [2024-07-11 02:20:28.920429] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x295c330 00:14:38.618 [2024-07-11 02:20:28.920441] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.618 [2024-07-11 02:20:28.922089] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.618 [2024-07-11 02:20:28.922115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:38.618 BaseBdev1 00:14:38.618 02:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:38.618 02:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:38.877 BaseBdev2_malloc 00:14:38.877 02:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:39.135 true 00:14:39.135 02:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:39.393 [2024-07-11 02:20:29.666683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:39.393 [2024-07-11 02:20:29.666724] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.393 [2024-07-11 02:20:29.666742] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2955b40 00:14:39.393 [2024-07-11 02:20:29.666754] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.393 [2024-07-11 02:20:29.668090] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.393 [2024-07-11 02:20:29.668116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:39.393 BaseBdev2 00:14:39.393 02:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:14:39.959 [2024-07-11 02:20:30.156018] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:39.959 [2024-07-11 02:20:30.157404] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:39.959 [2024-07-11 02:20:30.157593] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2956d50 00:14:39.959 [2024-07-11 02:20:30.157606] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:39.959 [2024-07-11 02:20:30.157825] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2955150 00:14:39.959 [2024-07-11 02:20:30.157976] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2956d50 00:14:39.959 [2024-07-11 02:20:30.157991] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2956d50 00:14:39.959 [2024-07-11 02:20:30.158099] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.959 02:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:39.959 02:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:39.959 02:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:39.959 02:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:39.960 02:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:39.960 02:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:39.960 02:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:39.960 02:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:39.960 02:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:39.960 02:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:39.960 02:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.960 02:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.221 02:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:40.221 "name": "raid_bdev1", 00:14:40.221 "uuid": "6a552592-9811-4d64-bcff-4237b352a02f", 00:14:40.221 "strip_size_kb": 64, 00:14:40.221 "state": "online", 00:14:40.221 
"raid_level": "raid0", 00:14:40.221 "superblock": true, 00:14:40.221 "num_base_bdevs": 2, 00:14:40.221 "num_base_bdevs_discovered": 2, 00:14:40.221 "num_base_bdevs_operational": 2, 00:14:40.221 "base_bdevs_list": [ 00:14:40.221 { 00:14:40.221 "name": "BaseBdev1", 00:14:40.221 "uuid": "dfb0a505-7a42-5e37-b944-e426d326c48f", 00:14:40.221 "is_configured": true, 00:14:40.221 "data_offset": 2048, 00:14:40.221 "data_size": 63488 00:14:40.221 }, 00:14:40.221 { 00:14:40.221 "name": "BaseBdev2", 00:14:40.221 "uuid": "3667b97c-5926-5074-8df2-d313cae1ad07", 00:14:40.221 "is_configured": true, 00:14:40.221 "data_offset": 2048, 00:14:40.221 "data_size": 63488 00:14:40.221 } 00:14:40.221 ] 00:14:40.221 }' 00:14:40.221 02:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:40.221 02:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.852 02:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:40.852 02:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:40.852 [2024-07-11 02:20:31.146892] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x29567e0 00:14:41.787 02:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:42.046 02:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:42.046 02:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:42.046 02:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:14:42.046 02:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:42.046 02:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:42.046 02:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:42.046 02:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:42.046 02:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:42.046 02:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:42.046 02:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:42.046 02:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:42.046 02:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:42.046 02:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:42.046 02:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.046 02:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.305 02:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:42.305 "name": "raid_bdev1", 00:14:42.305 "uuid": "6a552592-9811-4d64-bcff-4237b352a02f", 00:14:42.305 "strip_size_kb": 64, 
00:14:42.305 "state": "online", 00:14:42.305 "raid_level": "raid0", 00:14:42.305 "superblock": true, 00:14:42.305 "num_base_bdevs": 2, 00:14:42.305 "num_base_bdevs_discovered": 2, 00:14:42.305 "num_base_bdevs_operational": 2, 00:14:42.305 "base_bdevs_list": [ 00:14:42.305 { 00:14:42.305 "name": "BaseBdev1", 00:14:42.305 "uuid": "dfb0a505-7a42-5e37-b944-e426d326c48f", 00:14:42.305 "is_configured": true, 00:14:42.305 "data_offset": 2048, 00:14:42.305 "data_size": 63488 00:14:42.305 }, 00:14:42.305 { 00:14:42.305 "name": "BaseBdev2", 00:14:42.305 "uuid": "3667b97c-5926-5074-8df2-d313cae1ad07", 00:14:42.305 "is_configured": true, 00:14:42.305 "data_offset": 2048, 00:14:42.305 "data_size": 63488 00:14:42.305 } 00:14:42.305 ] 00:14:42.305 }' 00:14:42.305 02:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:42.305 02:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.874 02:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:43.133 [2024-07-11 02:20:33.363805] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:43.133 [2024-07-11 02:20:33.363847] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:43.133 [2024-07-11 02:20:33.367002] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:43.133 [2024-07-11 02:20:33.367033] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.133 [2024-07-11 02:20:33.367063] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:43.133 [2024-07-11 02:20:33.367074] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2956d50 name raid_bdev1, state offline 00:14:43.133 0 00:14:43.133 02:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1896872 00:14:43.133 02:20:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 1896872 ']' 00:14:43.133 02:20:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 1896872 00:14:43.133 02:20:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:14:43.133 02:20:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:43.133 02:20:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1896872 00:14:43.133 02:20:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:43.133 02:20:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:43.133 02:20:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1896872' 00:14:43.133 killing process with pid 1896872 00:14:43.133 02:20:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 1896872 00:14:43.133 [2024-07-11 02:20:33.448460] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:43.133 02:20:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 1896872 00:14:43.133 [2024-07-11 02:20:33.458918] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:43.392 02:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.KZOJlKeuiB 
00:14:43.392 02:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:43.392 02:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:43.392 02:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:14:43.392 02:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:14:43.392 02:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:43.392 02:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:43.392 02:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:14:43.392 00:14:43.392 real 0m6.434s 00:14:43.392 user 0m10.134s 00:14:43.392 sys 0m1.142s 00:14:43.392 02:20:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:43.392 02:20:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.392 ************************************ 00:14:43.392 END TEST raid_write_error_test 00:14:43.392 ************************************ 00:14:43.392 02:20:33 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:43.392 02:20:33 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:14:43.392 02:20:33 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:14:43.392 02:20:33 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:43.392 02:20:33 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:43.392 02:20:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:43.392 ************************************ 00:14:43.392 START TEST raid_state_function_test 00:14:43.392 ************************************ 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 false 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1897756 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1897756' 00:14:43.392 Process raid pid: 1897756 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1897756 /var/tmp/spdk-raid.sock 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 1897756 ']' 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:43.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:43.392 02:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.651 [2024-07-11 02:20:33.833577] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
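Note: each state-function test stage gets its own SPDK application instance. The bdev_svc launch shown above, restated as a minimal stand-alone sketch; the polling loop is an illustrative stand-in for the waitforlisten helper, not its actual implementation:

  # start bdev_svc on the RPC socket and with the debug flag used by this run
  /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  # wait until the RPC UNIX socket exists before issuing any rpc.py calls
  until [ -S /var/tmp/spdk-raid.sock ]; do sleep 0.1; done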
00:14:43.651 [2024-07-11 02:20:33.833639] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.651 [2024-07-11 02:20:33.953170] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.651 [2024-07-11 02:20:34.005114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.651 [2024-07-11 02:20:34.069002] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.651 [2024-07-11 02:20:34.069038] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.587 02:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:44.587 02:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:14:44.587 02:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:44.587 [2024-07-11 02:20:34.932258] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:44.587 [2024-07-11 02:20:34.932300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:44.587 [2024-07-11 02:20:34.932311] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:44.587 [2024-07-11 02:20:34.932323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:44.587 02:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:44.587 02:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:44.587 02:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:44.587 02:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:44.587 02:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:44.587 02:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:44.587 02:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:44.587 02:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:44.587 02:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:44.587 02:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:44.587 02:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.587 02:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.847 02:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:44.847 "name": "Existed_Raid", 00:14:44.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.847 "strip_size_kb": 64, 00:14:44.847 "state": "configuring", 00:14:44.847 "raid_level": "concat", 00:14:44.847 "superblock": false, 
00:14:44.847 "num_base_bdevs": 2, 00:14:44.847 "num_base_bdevs_discovered": 0, 00:14:44.847 "num_base_bdevs_operational": 2, 00:14:44.847 "base_bdevs_list": [ 00:14:44.847 { 00:14:44.847 "name": "BaseBdev1", 00:14:44.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.847 "is_configured": false, 00:14:44.847 "data_offset": 0, 00:14:44.847 "data_size": 0 00:14:44.847 }, 00:14:44.847 { 00:14:44.847 "name": "BaseBdev2", 00:14:44.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.847 "is_configured": false, 00:14:44.847 "data_offset": 0, 00:14:44.847 "data_size": 0 00:14:44.847 } 00:14:44.847 ] 00:14:44.847 }' 00:14:44.847 02:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:44.847 02:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.783 02:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:46.042 [2024-07-11 02:20:36.311782] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:46.042 [2024-07-11 02:20:36.311817] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b05730 name Existed_Raid, state configuring 00:14:46.042 02:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:46.301 [2024-07-11 02:20:36.560444] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:46.301 [2024-07-11 02:20:36.560477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:46.301 [2024-07-11 02:20:36.560487] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:46.301 [2024-07-11 02:20:36.560500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:46.301 02:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:46.560 [2024-07-11 02:20:36.819208] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:46.560 BaseBdev1 00:14:46.560 02:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:46.560 02:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:46.560 02:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:46.560 02:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:46.560 02:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:46.560 02:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:46.560 02:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:46.818 02:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 
00:14:47.078 [ 00:14:47.078 { 00:14:47.078 "name": "BaseBdev1", 00:14:47.078 "aliases": [ 00:14:47.078 "caae0136-07ef-410b-8385-ff8b10f16747" 00:14:47.078 ], 00:14:47.078 "product_name": "Malloc disk", 00:14:47.078 "block_size": 512, 00:14:47.078 "num_blocks": 65536, 00:14:47.078 "uuid": "caae0136-07ef-410b-8385-ff8b10f16747", 00:14:47.078 "assigned_rate_limits": { 00:14:47.078 "rw_ios_per_sec": 0, 00:14:47.078 "rw_mbytes_per_sec": 0, 00:14:47.078 "r_mbytes_per_sec": 0, 00:14:47.078 "w_mbytes_per_sec": 0 00:14:47.078 }, 00:14:47.078 "claimed": true, 00:14:47.078 "claim_type": "exclusive_write", 00:14:47.078 "zoned": false, 00:14:47.078 "supported_io_types": { 00:14:47.078 "read": true, 00:14:47.078 "write": true, 00:14:47.078 "unmap": true, 00:14:47.078 "flush": true, 00:14:47.078 "reset": true, 00:14:47.078 "nvme_admin": false, 00:14:47.078 "nvme_io": false, 00:14:47.078 "nvme_io_md": false, 00:14:47.078 "write_zeroes": true, 00:14:47.078 "zcopy": true, 00:14:47.078 "get_zone_info": false, 00:14:47.078 "zone_management": false, 00:14:47.078 "zone_append": false, 00:14:47.078 "compare": false, 00:14:47.078 "compare_and_write": false, 00:14:47.078 "abort": true, 00:14:47.078 "seek_hole": false, 00:14:47.078 "seek_data": false, 00:14:47.078 "copy": true, 00:14:47.078 "nvme_iov_md": false 00:14:47.078 }, 00:14:47.078 "memory_domains": [ 00:14:47.078 { 00:14:47.078 "dma_device_id": "system", 00:14:47.078 "dma_device_type": 1 00:14:47.078 }, 00:14:47.078 { 00:14:47.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.078 "dma_device_type": 2 00:14:47.078 } 00:14:47.078 ], 00:14:47.078 "driver_specific": {} 00:14:47.078 } 00:14:47.078 ] 00:14:47.078 02:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:47.078 02:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:47.078 02:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:47.078 02:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:47.078 02:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:47.078 02:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:47.078 02:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:47.078 02:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:47.078 02:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:47.078 02:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:47.078 02:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:47.078 02:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.078 02:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.337 02:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:47.337 "name": "Existed_Raid", 00:14:47.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.337 "strip_size_kb": 64, 00:14:47.337 "state": "configuring", 00:14:47.337 
"raid_level": "concat", 00:14:47.337 "superblock": false, 00:14:47.337 "num_base_bdevs": 2, 00:14:47.337 "num_base_bdevs_discovered": 1, 00:14:47.337 "num_base_bdevs_operational": 2, 00:14:47.337 "base_bdevs_list": [ 00:14:47.337 { 00:14:47.337 "name": "BaseBdev1", 00:14:47.337 "uuid": "caae0136-07ef-410b-8385-ff8b10f16747", 00:14:47.337 "is_configured": true, 00:14:47.337 "data_offset": 0, 00:14:47.337 "data_size": 65536 00:14:47.337 }, 00:14:47.337 { 00:14:47.337 "name": "BaseBdev2", 00:14:47.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.337 "is_configured": false, 00:14:47.337 "data_offset": 0, 00:14:47.337 "data_size": 0 00:14:47.337 } 00:14:47.337 ] 00:14:47.337 }' 00:14:47.337 02:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:47.337 02:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.905 02:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:48.164 [2024-07-11 02:20:38.427468] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:48.164 [2024-07-11 02:20:38.427509] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b05060 name Existed_Raid, state configuring 00:14:48.164 02:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:48.424 [2024-07-11 02:20:38.676154] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:48.424 [2024-07-11 02:20:38.677548] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:48.424 [2024-07-11 02:20:38.677579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:48.424 02:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:48.424 02:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:48.424 02:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:48.424 02:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:48.424 02:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:48.424 02:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:48.424 02:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:48.424 02:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:48.424 02:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:48.424 02:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:48.424 02:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:48.424 02:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:48.424 02:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.424 02:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.683 02:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:48.683 "name": "Existed_Raid", 00:14:48.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.683 "strip_size_kb": 64, 00:14:48.683 "state": "configuring", 00:14:48.683 "raid_level": "concat", 00:14:48.683 "superblock": false, 00:14:48.683 "num_base_bdevs": 2, 00:14:48.683 "num_base_bdevs_discovered": 1, 00:14:48.683 "num_base_bdevs_operational": 2, 00:14:48.683 "base_bdevs_list": [ 00:14:48.683 { 00:14:48.683 "name": "BaseBdev1", 00:14:48.683 "uuid": "caae0136-07ef-410b-8385-ff8b10f16747", 00:14:48.683 "is_configured": true, 00:14:48.683 "data_offset": 0, 00:14:48.683 "data_size": 65536 00:14:48.683 }, 00:14:48.683 { 00:14:48.683 "name": "BaseBdev2", 00:14:48.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.683 "is_configured": false, 00:14:48.683 "data_offset": 0, 00:14:48.683 "data_size": 0 00:14:48.683 } 00:14:48.683 ] 00:14:48.683 }' 00:14:48.683 02:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:48.683 02:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.251 02:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:49.510 [2024-07-11 02:20:39.826506] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:49.510 [2024-07-11 02:20:39.826544] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1cb7b70 00:14:49.510 [2024-07-11 02:20:39.826553] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:49.510 [2024-07-11 02:20:39.826804] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b04c80 00:14:49.511 [2024-07-11 02:20:39.826914] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1cb7b70 00:14:49.511 [2024-07-11 02:20:39.826925] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1cb7b70 00:14:49.511 [2024-07-11 02:20:39.827086] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.511 BaseBdev2 00:14:49.511 02:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:49.511 02:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:49.511 02:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:49.511 02:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:49.511 02:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:49.511 02:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:49.511 02:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:49.769 02:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:50.028 [ 00:14:50.028 { 00:14:50.028 "name": "BaseBdev2", 00:14:50.028 "aliases": [ 00:14:50.028 "d440cc0a-484a-43ba-b3e8-af9be55320ca" 00:14:50.028 ], 00:14:50.028 "product_name": "Malloc disk", 00:14:50.028 "block_size": 512, 00:14:50.028 "num_blocks": 65536, 00:14:50.028 "uuid": "d440cc0a-484a-43ba-b3e8-af9be55320ca", 00:14:50.028 "assigned_rate_limits": { 00:14:50.028 "rw_ios_per_sec": 0, 00:14:50.028 "rw_mbytes_per_sec": 0, 00:14:50.028 "r_mbytes_per_sec": 0, 00:14:50.028 "w_mbytes_per_sec": 0 00:14:50.028 }, 00:14:50.028 "claimed": true, 00:14:50.028 "claim_type": "exclusive_write", 00:14:50.028 "zoned": false, 00:14:50.028 "supported_io_types": { 00:14:50.028 "read": true, 00:14:50.028 "write": true, 00:14:50.028 "unmap": true, 00:14:50.028 "flush": true, 00:14:50.028 "reset": true, 00:14:50.028 "nvme_admin": false, 00:14:50.028 "nvme_io": false, 00:14:50.028 "nvme_io_md": false, 00:14:50.028 "write_zeroes": true, 00:14:50.028 "zcopy": true, 00:14:50.028 "get_zone_info": false, 00:14:50.028 "zone_management": false, 00:14:50.028 "zone_append": false, 00:14:50.028 "compare": false, 00:14:50.028 "compare_and_write": false, 00:14:50.028 "abort": true, 00:14:50.028 "seek_hole": false, 00:14:50.028 "seek_data": false, 00:14:50.028 "copy": true, 00:14:50.028 "nvme_iov_md": false 00:14:50.028 }, 00:14:50.028 "memory_domains": [ 00:14:50.028 { 00:14:50.028 "dma_device_id": "system", 00:14:50.028 "dma_device_type": 1 00:14:50.028 }, 00:14:50.028 { 00:14:50.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.028 "dma_device_type": 2 00:14:50.028 } 00:14:50.028 ], 00:14:50.028 "driver_specific": {} 00:14:50.028 } 00:14:50.028 ] 00:14:50.028 02:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:50.028 02:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:50.028 02:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:50.028 02:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:50.028 02:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:50.028 02:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:50.028 02:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:50.028 02:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:50.028 02:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:50.028 02:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:50.028 02:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:50.028 02:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:50.028 02:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:50.028 02:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.028 02:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.287 02:20:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:50.287 "name": "Existed_Raid", 00:14:50.287 "uuid": "61598d3c-caa9-4af6-bf49-d61fc1a52a96", 00:14:50.287 "strip_size_kb": 64, 00:14:50.287 "state": "online", 00:14:50.287 "raid_level": "concat", 00:14:50.287 "superblock": false, 00:14:50.287 "num_base_bdevs": 2, 00:14:50.287 "num_base_bdevs_discovered": 2, 00:14:50.287 "num_base_bdevs_operational": 2, 00:14:50.287 "base_bdevs_list": [ 00:14:50.287 { 00:14:50.287 "name": "BaseBdev1", 00:14:50.287 "uuid": "caae0136-07ef-410b-8385-ff8b10f16747", 00:14:50.287 "is_configured": true, 00:14:50.287 "data_offset": 0, 00:14:50.287 "data_size": 65536 00:14:50.287 }, 00:14:50.287 { 00:14:50.287 "name": "BaseBdev2", 00:14:50.287 "uuid": "d440cc0a-484a-43ba-b3e8-af9be55320ca", 00:14:50.287 "is_configured": true, 00:14:50.287 "data_offset": 0, 00:14:50.287 "data_size": 65536 00:14:50.287 } 00:14:50.287 ] 00:14:50.287 }' 00:14:50.287 02:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:50.287 02:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.221 02:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:51.221 02:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:51.221 02:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:51.221 02:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:51.221 02:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:51.221 02:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:51.221 02:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:51.221 02:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:51.479 [2024-07-11 02:20:41.703756] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:51.479 02:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:51.479 "name": "Existed_Raid", 00:14:51.479 "aliases": [ 00:14:51.479 "61598d3c-caa9-4af6-bf49-d61fc1a52a96" 00:14:51.479 ], 00:14:51.479 "product_name": "Raid Volume", 00:14:51.479 "block_size": 512, 00:14:51.479 "num_blocks": 131072, 00:14:51.479 "uuid": "61598d3c-caa9-4af6-bf49-d61fc1a52a96", 00:14:51.479 "assigned_rate_limits": { 00:14:51.479 "rw_ios_per_sec": 0, 00:14:51.479 "rw_mbytes_per_sec": 0, 00:14:51.479 "r_mbytes_per_sec": 0, 00:14:51.479 "w_mbytes_per_sec": 0 00:14:51.479 }, 00:14:51.479 "claimed": false, 00:14:51.479 "zoned": false, 00:14:51.479 "supported_io_types": { 00:14:51.479 "read": true, 00:14:51.479 "write": true, 00:14:51.479 "unmap": true, 00:14:51.479 "flush": true, 00:14:51.479 "reset": true, 00:14:51.479 "nvme_admin": false, 00:14:51.479 "nvme_io": false, 00:14:51.479 "nvme_io_md": false, 00:14:51.479 "write_zeroes": true, 00:14:51.479 "zcopy": false, 00:14:51.479 "get_zone_info": false, 00:14:51.479 "zone_management": false, 00:14:51.479 "zone_append": false, 00:14:51.479 "compare": false, 00:14:51.479 "compare_and_write": false, 00:14:51.479 "abort": false, 00:14:51.479 "seek_hole": false, 00:14:51.479 "seek_data": false, 00:14:51.479 "copy": false, 00:14:51.479 
"nvme_iov_md": false 00:14:51.479 }, 00:14:51.479 "memory_domains": [ 00:14:51.480 { 00:14:51.480 "dma_device_id": "system", 00:14:51.480 "dma_device_type": 1 00:14:51.480 }, 00:14:51.480 { 00:14:51.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.480 "dma_device_type": 2 00:14:51.480 }, 00:14:51.480 { 00:14:51.480 "dma_device_id": "system", 00:14:51.480 "dma_device_type": 1 00:14:51.480 }, 00:14:51.480 { 00:14:51.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.480 "dma_device_type": 2 00:14:51.480 } 00:14:51.480 ], 00:14:51.480 "driver_specific": { 00:14:51.480 "raid": { 00:14:51.480 "uuid": "61598d3c-caa9-4af6-bf49-d61fc1a52a96", 00:14:51.480 "strip_size_kb": 64, 00:14:51.480 "state": "online", 00:14:51.480 "raid_level": "concat", 00:14:51.480 "superblock": false, 00:14:51.480 "num_base_bdevs": 2, 00:14:51.480 "num_base_bdevs_discovered": 2, 00:14:51.480 "num_base_bdevs_operational": 2, 00:14:51.480 "base_bdevs_list": [ 00:14:51.480 { 00:14:51.480 "name": "BaseBdev1", 00:14:51.480 "uuid": "caae0136-07ef-410b-8385-ff8b10f16747", 00:14:51.480 "is_configured": true, 00:14:51.480 "data_offset": 0, 00:14:51.480 "data_size": 65536 00:14:51.480 }, 00:14:51.480 { 00:14:51.480 "name": "BaseBdev2", 00:14:51.480 "uuid": "d440cc0a-484a-43ba-b3e8-af9be55320ca", 00:14:51.480 "is_configured": true, 00:14:51.480 "data_offset": 0, 00:14:51.480 "data_size": 65536 00:14:51.480 } 00:14:51.480 ] 00:14:51.480 } 00:14:51.480 } 00:14:51.480 }' 00:14:51.480 02:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:51.480 02:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:51.480 BaseBdev2' 00:14:51.480 02:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:51.480 02:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:51.480 02:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:52.047 02:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:52.047 "name": "BaseBdev1", 00:14:52.047 "aliases": [ 00:14:52.047 "caae0136-07ef-410b-8385-ff8b10f16747" 00:14:52.047 ], 00:14:52.047 "product_name": "Malloc disk", 00:14:52.047 "block_size": 512, 00:14:52.047 "num_blocks": 65536, 00:14:52.047 "uuid": "caae0136-07ef-410b-8385-ff8b10f16747", 00:14:52.047 "assigned_rate_limits": { 00:14:52.047 "rw_ios_per_sec": 0, 00:14:52.047 "rw_mbytes_per_sec": 0, 00:14:52.047 "r_mbytes_per_sec": 0, 00:14:52.047 "w_mbytes_per_sec": 0 00:14:52.047 }, 00:14:52.047 "claimed": true, 00:14:52.047 "claim_type": "exclusive_write", 00:14:52.047 "zoned": false, 00:14:52.047 "supported_io_types": { 00:14:52.047 "read": true, 00:14:52.047 "write": true, 00:14:52.047 "unmap": true, 00:14:52.047 "flush": true, 00:14:52.047 "reset": true, 00:14:52.047 "nvme_admin": false, 00:14:52.047 "nvme_io": false, 00:14:52.047 "nvme_io_md": false, 00:14:52.047 "write_zeroes": true, 00:14:52.047 "zcopy": true, 00:14:52.047 "get_zone_info": false, 00:14:52.047 "zone_management": false, 00:14:52.047 "zone_append": false, 00:14:52.047 "compare": false, 00:14:52.047 "compare_and_write": false, 00:14:52.047 "abort": true, 00:14:52.047 "seek_hole": false, 00:14:52.047 "seek_data": false, 00:14:52.047 "copy": true, 00:14:52.047 
"nvme_iov_md": false 00:14:52.047 }, 00:14:52.047 "memory_domains": [ 00:14:52.047 { 00:14:52.047 "dma_device_id": "system", 00:14:52.047 "dma_device_type": 1 00:14:52.047 }, 00:14:52.047 { 00:14:52.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.047 "dma_device_type": 2 00:14:52.047 } 00:14:52.047 ], 00:14:52.047 "driver_specific": {} 00:14:52.047 }' 00:14:52.047 02:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:52.047 02:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:52.047 02:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:52.047 02:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:52.047 02:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:52.305 02:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:52.305 02:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:52.305 02:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:52.305 02:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:52.305 02:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:52.305 02:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:52.305 02:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:52.305 02:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:52.305 02:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:52.305 02:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:52.563 02:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:52.563 "name": "BaseBdev2", 00:14:52.563 "aliases": [ 00:14:52.563 "d440cc0a-484a-43ba-b3e8-af9be55320ca" 00:14:52.563 ], 00:14:52.563 "product_name": "Malloc disk", 00:14:52.563 "block_size": 512, 00:14:52.563 "num_blocks": 65536, 00:14:52.563 "uuid": "d440cc0a-484a-43ba-b3e8-af9be55320ca", 00:14:52.563 "assigned_rate_limits": { 00:14:52.563 "rw_ios_per_sec": 0, 00:14:52.563 "rw_mbytes_per_sec": 0, 00:14:52.563 "r_mbytes_per_sec": 0, 00:14:52.563 "w_mbytes_per_sec": 0 00:14:52.563 }, 00:14:52.563 "claimed": true, 00:14:52.563 "claim_type": "exclusive_write", 00:14:52.563 "zoned": false, 00:14:52.563 "supported_io_types": { 00:14:52.563 "read": true, 00:14:52.563 "write": true, 00:14:52.563 "unmap": true, 00:14:52.563 "flush": true, 00:14:52.563 "reset": true, 00:14:52.563 "nvme_admin": false, 00:14:52.563 "nvme_io": false, 00:14:52.563 "nvme_io_md": false, 00:14:52.563 "write_zeroes": true, 00:14:52.563 "zcopy": true, 00:14:52.563 "get_zone_info": false, 00:14:52.563 "zone_management": false, 00:14:52.563 "zone_append": false, 00:14:52.563 "compare": false, 00:14:52.563 "compare_and_write": false, 00:14:52.563 "abort": true, 00:14:52.563 "seek_hole": false, 00:14:52.563 "seek_data": false, 00:14:52.563 "copy": true, 00:14:52.563 "nvme_iov_md": false 00:14:52.563 }, 00:14:52.563 "memory_domains": [ 00:14:52.563 { 00:14:52.563 "dma_device_id": "system", 00:14:52.563 "dma_device_type": 1 00:14:52.563 }, 
00:14:52.563 { 00:14:52.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.563 "dma_device_type": 2 00:14:52.563 } 00:14:52.563 ], 00:14:52.563 "driver_specific": {} 00:14:52.563 }' 00:14:52.563 02:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:52.563 02:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:52.821 02:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:52.821 02:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:52.821 02:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:52.821 02:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:52.821 02:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:53.080 02:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:53.080 02:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:53.080 02:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:53.080 02:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:53.080 02:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:53.080 02:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:53.649 [2024-07-11 02:20:43.893529] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:53.649 [2024-07-11 02:20:43.893560] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:53.649 [2024-07-11 02:20:43.893604] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:53.649 02:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:53.649 02:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:14:53.649 02:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:53.649 02:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:53.649 02:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:53.649 02:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:53.649 02:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:53.649 02:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:53.649 02:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:53.649 02:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:53.649 02:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:53.649 02:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:53.649 02:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:53.649 02:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:14:53.649 02:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:53.649 02:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.649 02:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.218 02:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:54.218 "name": "Existed_Raid", 00:14:54.218 "uuid": "61598d3c-caa9-4af6-bf49-d61fc1a52a96", 00:14:54.218 "strip_size_kb": 64, 00:14:54.218 "state": "offline", 00:14:54.218 "raid_level": "concat", 00:14:54.218 "superblock": false, 00:14:54.218 "num_base_bdevs": 2, 00:14:54.218 "num_base_bdevs_discovered": 1, 00:14:54.218 "num_base_bdevs_operational": 1, 00:14:54.218 "base_bdevs_list": [ 00:14:54.218 { 00:14:54.218 "name": null, 00:14:54.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.218 "is_configured": false, 00:14:54.218 "data_offset": 0, 00:14:54.218 "data_size": 65536 00:14:54.218 }, 00:14:54.218 { 00:14:54.218 "name": "BaseBdev2", 00:14:54.218 "uuid": "d440cc0a-484a-43ba-b3e8-af9be55320ca", 00:14:54.218 "is_configured": true, 00:14:54.218 "data_offset": 0, 00:14:54.218 "data_size": 65536 00:14:54.218 } 00:14:54.218 ] 00:14:54.218 }' 00:14:54.218 02:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:54.218 02:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.155 02:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:55.155 02:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:55.155 02:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.155 02:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:55.155 02:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:55.155 02:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:55.155 02:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:55.413 [2024-07-11 02:20:45.808474] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:55.413 [2024-07-11 02:20:45.808525] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1cb7b70 name Existed_Raid, state offline 00:14:55.671 02:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:55.671 02:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:55.671 02:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.671 02:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:56.240 02:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:56.240 02:20:46 bdev_raid.raid_state_function_test -- 
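Note: every verify_raid_bdev_state call in this stage reduces to the same query pattern, dump all raid bdevs over RPC and filter with jq. The two filters used above, restated stand-alone:

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # full info blob for one raid bdev, fed into the per-field state checks
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
  # name of the first remaining raid bdev; prints nothing once all are deleted
  $rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)'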
bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:56.240 02:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:14:56.240 02:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1897756 00:14:56.240 02:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 1897756 ']' 00:14:56.240 02:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 1897756 00:14:56.240 02:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:14:56.240 02:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:56.240 02:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1897756 00:14:56.240 02:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:56.240 02:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:56.240 02:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1897756' 00:14:56.240 killing process with pid 1897756 00:14:56.240 02:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 1897756 00:14:56.240 [2024-07-11 02:20:46.409598] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:56.240 02:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 1897756 00:14:56.240 [2024-07-11 02:20:46.410465] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:56.240 02:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:14:56.240 00:14:56.240 real 0m12.826s 00:14:56.240 user 0m23.011s 00:14:56.240 sys 0m2.329s 00:14:56.240 02:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:56.240 02:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.240 ************************************ 00:14:56.240 END TEST raid_state_function_test 00:14:56.240 ************************************ 00:14:56.240 02:20:46 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:56.240 02:20:46 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:14:56.240 02:20:46 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:56.240 02:20:46 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:56.240 02:20:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:56.499 ************************************ 00:14:56.499 START TEST raid_state_function_test_sb 00:14:56.499 ************************************ 00:14:56.499 02:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 true 00:14:56.499 02:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:14:56.499 02:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:56.499 02:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:14:56.499 02:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:56.499 02:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:56.499 02:20:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:56.499 02:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:56.499 02:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:56.499 02:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:56.499 02:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:56.499 02:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:56.499 02:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:56.499 02:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:56.499 02:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:56.500 02:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:56.500 02:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:56.500 02:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:56.500 02:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:56.500 02:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:14:56.500 02:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:56.500 02:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:56.500 02:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:56.500 02:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:56.500 02:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1899801 00:14:56.500 02:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1899801' 00:14:56.500 Process raid pid: 1899801 00:14:56.500 02:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:56.500 02:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1899801 /var/tmp/spdk-raid.sock 00:14:56.500 02:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1899801 ']' 00:14:56.500 02:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:56.500 02:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:56.500 02:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:56.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
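Note: raid_state_function_test_sb repeats the previous stage with superblock=true, which only changes the create RPC by adding the -s flag. Both forms as they appear in the traces, for contrast:

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # superblock variant (this stage): raid metadata written to the base bdevs
  $rpc bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # non-superblock variant (previous stage)
  $rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

The visible effect shows up in the base_bdevs_list dumps: with -s the base bdevs report data_offset 2048 and data_size 63488 (the leading region apparently reserved for the superblock), versus data_offset 0 and data_size 65536 in the run above.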
00:14:56.500 02:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:56.500 02:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.500 [2024-07-11 02:20:46.750262] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:14:56.500 [2024-07-11 02:20:46.750326] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.500 [2024-07-11 02:20:46.886873] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.759 [2024-07-11 02:20:46.940117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.759 [2024-07-11 02:20:47.004503] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:56.759 [2024-07-11 02:20:47.004553] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:57.327 02:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:57.327 02:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:14:57.327 02:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:57.586 [2024-07-11 02:20:47.908873] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:57.586 [2024-07-11 02:20:47.908914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:57.586 [2024-07-11 02:20:47.908924] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:57.586 [2024-07-11 02:20:47.908936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:57.586 02:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:57.586 02:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:57.586 02:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:57.586 02:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:57.586 02:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:57.586 02:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:57.586 02:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:57.586 02:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:57.586 02:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:57.586 02:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:57.586 02:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.586 02:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:14:57.846 02:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:57.846 "name": "Existed_Raid", 00:14:57.846 "uuid": "afc0936c-8511-47b6-9978-d3d543eff230", 00:14:57.846 "strip_size_kb": 64, 00:14:57.846 "state": "configuring", 00:14:57.846 "raid_level": "concat", 00:14:57.846 "superblock": true, 00:14:57.846 "num_base_bdevs": 2, 00:14:57.846 "num_base_bdevs_discovered": 0, 00:14:57.846 "num_base_bdevs_operational": 2, 00:14:57.846 "base_bdevs_list": [ 00:14:57.846 { 00:14:57.846 "name": "BaseBdev1", 00:14:57.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.846 "is_configured": false, 00:14:57.846 "data_offset": 0, 00:14:57.846 "data_size": 0 00:14:57.846 }, 00:14:57.846 { 00:14:57.846 "name": "BaseBdev2", 00:14:57.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.846 "is_configured": false, 00:14:57.846 "data_offset": 0, 00:14:57.846 "data_size": 0 00:14:57.846 } 00:14:57.846 ] 00:14:57.846 }' 00:14:57.846 02:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:57.846 02:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.413 02:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:58.672 [2024-07-11 02:20:48.895348] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:58.672 [2024-07-11 02:20:48.895374] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23b4730 name Existed_Raid, state configuring 00:14:58.672 02:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:58.931 [2024-07-11 02:20:49.148030] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:58.931 [2024-07-11 02:20:49.148057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:58.931 [2024-07-11 02:20:49.148067] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:58.931 [2024-07-11 02:20:49.148083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:58.931 02:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:59.190 [2024-07-11 02:20:49.406445] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:59.190 BaseBdev1 00:14:59.190 02:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:59.190 02:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:59.190 02:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:59.190 02:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:59.190 02:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:59.190 02:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:59.190 
02:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:59.450 02:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:59.708 [ 00:14:59.708 { 00:14:59.708 "name": "BaseBdev1", 00:14:59.708 "aliases": [ 00:14:59.708 "c9cd9850-5704-4f61-b947-872f03c73dea" 00:14:59.708 ], 00:14:59.708 "product_name": "Malloc disk", 00:14:59.708 "block_size": 512, 00:14:59.708 "num_blocks": 65536, 00:14:59.708 "uuid": "c9cd9850-5704-4f61-b947-872f03c73dea", 00:14:59.708 "assigned_rate_limits": { 00:14:59.708 "rw_ios_per_sec": 0, 00:14:59.708 "rw_mbytes_per_sec": 0, 00:14:59.708 "r_mbytes_per_sec": 0, 00:14:59.708 "w_mbytes_per_sec": 0 00:14:59.708 }, 00:14:59.708 "claimed": true, 00:14:59.708 "claim_type": "exclusive_write", 00:14:59.708 "zoned": false, 00:14:59.708 "supported_io_types": { 00:14:59.708 "read": true, 00:14:59.708 "write": true, 00:14:59.708 "unmap": true, 00:14:59.708 "flush": true, 00:14:59.708 "reset": true, 00:14:59.708 "nvme_admin": false, 00:14:59.708 "nvme_io": false, 00:14:59.708 "nvme_io_md": false, 00:14:59.708 "write_zeroes": true, 00:14:59.708 "zcopy": true, 00:14:59.708 "get_zone_info": false, 00:14:59.708 "zone_management": false, 00:14:59.708 "zone_append": false, 00:14:59.708 "compare": false, 00:14:59.708 "compare_and_write": false, 00:14:59.708 "abort": true, 00:14:59.708 "seek_hole": false, 00:14:59.708 "seek_data": false, 00:14:59.708 "copy": true, 00:14:59.708 "nvme_iov_md": false 00:14:59.708 }, 00:14:59.708 "memory_domains": [ 00:14:59.708 { 00:14:59.708 "dma_device_id": "system", 00:14:59.708 "dma_device_type": 1 00:14:59.708 }, 00:14:59.708 { 00:14:59.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.708 "dma_device_type": 2 00:14:59.708 } 00:14:59.708 ], 00:14:59.708 "driver_specific": {} 00:14:59.708 } 00:14:59.708 ] 00:14:59.708 02:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:59.708 02:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:59.708 02:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:59.708 02:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:59.708 02:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:59.708 02:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:59.708 02:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:59.708 02:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:59.708 02:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:59.708 02:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:59.708 02:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:59.708 02:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.708 02:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.998 02:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:59.998 "name": "Existed_Raid", 00:14:59.998 "uuid": "6ffc8ec1-b8fa-4a09-b930-967e29a9f272", 00:14:59.998 "strip_size_kb": 64, 00:14:59.998 "state": "configuring", 00:14:59.998 "raid_level": "concat", 00:14:59.998 "superblock": true, 00:14:59.998 "num_base_bdevs": 2, 00:14:59.998 "num_base_bdevs_discovered": 1, 00:14:59.998 "num_base_bdevs_operational": 2, 00:14:59.998 "base_bdevs_list": [ 00:14:59.998 { 00:14:59.998 "name": "BaseBdev1", 00:14:59.998 "uuid": "c9cd9850-5704-4f61-b947-872f03c73dea", 00:14:59.998 "is_configured": true, 00:14:59.998 "data_offset": 2048, 00:14:59.998 "data_size": 63488 00:14:59.998 }, 00:14:59.998 { 00:14:59.998 "name": "BaseBdev2", 00:14:59.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.998 "is_configured": false, 00:14:59.998 "data_offset": 0, 00:14:59.998 "data_size": 0 00:14:59.998 } 00:14:59.998 ] 00:14:59.998 }' 00:14:59.998 02:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:59.998 02:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.567 02:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:00.567 [2024-07-11 02:20:50.978607] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:00.567 [2024-07-11 02:20:50.978643] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23b4060 name Existed_Raid, state configuring 00:15:00.826 02:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:00.826 [2024-07-11 02:20:51.227306] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:00.826 [2024-07-11 02:20:51.228708] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:00.826 [2024-07-11 02:20:51.228741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:01.085 02:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:01.085 02:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:01.085 02:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:01.085 02:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:01.085 02:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:01.085 02:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:01.085 02:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:01.085 02:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:01.085 02:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- 
# local raid_bdev_info 00:15:01.085 02:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:01.085 02:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:01.085 02:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:01.085 02:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.085 02:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.345 02:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:01.345 "name": "Existed_Raid", 00:15:01.345 "uuid": "822b1fc3-d730-4bab-adb8-adcc3e1e84bf", 00:15:01.345 "strip_size_kb": 64, 00:15:01.345 "state": "configuring", 00:15:01.345 "raid_level": "concat", 00:15:01.345 "superblock": true, 00:15:01.345 "num_base_bdevs": 2, 00:15:01.345 "num_base_bdevs_discovered": 1, 00:15:01.345 "num_base_bdevs_operational": 2, 00:15:01.345 "base_bdevs_list": [ 00:15:01.345 { 00:15:01.345 "name": "BaseBdev1", 00:15:01.345 "uuid": "c9cd9850-5704-4f61-b947-872f03c73dea", 00:15:01.345 "is_configured": true, 00:15:01.345 "data_offset": 2048, 00:15:01.345 "data_size": 63488 00:15:01.345 }, 00:15:01.345 { 00:15:01.345 "name": "BaseBdev2", 00:15:01.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.345 "is_configured": false, 00:15:01.345 "data_offset": 0, 00:15:01.345 "data_size": 0 00:15:01.345 } 00:15:01.345 ] 00:15:01.345 }' 00:15:01.345 02:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:01.345 02:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.912 02:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:02.171 [2024-07-11 02:20:52.353580] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:02.171 [2024-07-11 02:20:52.353722] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2566b70 00:15:02.171 [2024-07-11 02:20:52.353736] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:02.171 [2024-07-11 02:20:52.353918] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23b67a0 00:15:02.171 [2024-07-11 02:20:52.354033] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2566b70 00:15:02.171 [2024-07-11 02:20:52.354043] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2566b70 00:15:02.171 [2024-07-11 02:20:52.354134] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.171 BaseBdev2 00:15:02.171 02:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:02.172 02:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:02.172 02:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:02.172 02:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:02.172 02:20:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:02.172 02:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:02.172 02:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:02.430 02:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:02.688 [ 00:15:02.688 { 00:15:02.688 "name": "BaseBdev2", 00:15:02.688 "aliases": [ 00:15:02.688 "bb926919-b323-4abb-973b-a70c5b1f2c9b" 00:15:02.688 ], 00:15:02.688 "product_name": "Malloc disk", 00:15:02.688 "block_size": 512, 00:15:02.688 "num_blocks": 65536, 00:15:02.688 "uuid": "bb926919-b323-4abb-973b-a70c5b1f2c9b", 00:15:02.688 "assigned_rate_limits": { 00:15:02.688 "rw_ios_per_sec": 0, 00:15:02.688 "rw_mbytes_per_sec": 0, 00:15:02.688 "r_mbytes_per_sec": 0, 00:15:02.688 "w_mbytes_per_sec": 0 00:15:02.688 }, 00:15:02.688 "claimed": true, 00:15:02.688 "claim_type": "exclusive_write", 00:15:02.688 "zoned": false, 00:15:02.688 "supported_io_types": { 00:15:02.688 "read": true, 00:15:02.688 "write": true, 00:15:02.688 "unmap": true, 00:15:02.688 "flush": true, 00:15:02.688 "reset": true, 00:15:02.688 "nvme_admin": false, 00:15:02.688 "nvme_io": false, 00:15:02.688 "nvme_io_md": false, 00:15:02.688 "write_zeroes": true, 00:15:02.688 "zcopy": true, 00:15:02.688 "get_zone_info": false, 00:15:02.688 "zone_management": false, 00:15:02.688 "zone_append": false, 00:15:02.688 "compare": false, 00:15:02.688 "compare_and_write": false, 00:15:02.688 "abort": true, 00:15:02.688 "seek_hole": false, 00:15:02.688 "seek_data": false, 00:15:02.688 "copy": true, 00:15:02.688 "nvme_iov_md": false 00:15:02.688 }, 00:15:02.688 "memory_domains": [ 00:15:02.688 { 00:15:02.688 "dma_device_id": "system", 00:15:02.688 "dma_device_type": 1 00:15:02.688 }, 00:15:02.688 { 00:15:02.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.688 "dma_device_type": 2 00:15:02.688 } 00:15:02.688 ], 00:15:02.688 "driver_specific": {} 00:15:02.688 } 00:15:02.688 ] 00:15:02.688 02:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:02.688 02:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:02.688 02:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:02.688 02:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:02.688 02:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:02.689 02:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:02.689 02:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:02.689 02:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:02.689 02:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:02.689 02:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:02.689 02:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:02.689 
02:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:02.689 02:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:02.689 02:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.689 02:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.947 02:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:02.947 "name": "Existed_Raid", 00:15:02.947 "uuid": "822b1fc3-d730-4bab-adb8-adcc3e1e84bf", 00:15:02.947 "strip_size_kb": 64, 00:15:02.947 "state": "online", 00:15:02.947 "raid_level": "concat", 00:15:02.947 "superblock": true, 00:15:02.947 "num_base_bdevs": 2, 00:15:02.947 "num_base_bdevs_discovered": 2, 00:15:02.947 "num_base_bdevs_operational": 2, 00:15:02.947 "base_bdevs_list": [ 00:15:02.947 { 00:15:02.947 "name": "BaseBdev1", 00:15:02.947 "uuid": "c9cd9850-5704-4f61-b947-872f03c73dea", 00:15:02.947 "is_configured": true, 00:15:02.947 "data_offset": 2048, 00:15:02.947 "data_size": 63488 00:15:02.947 }, 00:15:02.947 { 00:15:02.947 "name": "BaseBdev2", 00:15:02.947 "uuid": "bb926919-b323-4abb-973b-a70c5b1f2c9b", 00:15:02.947 "is_configured": true, 00:15:02.947 "data_offset": 2048, 00:15:02.947 "data_size": 63488 00:15:02.947 } 00:15:02.947 ] 00:15:02.947 }' 00:15:02.947 02:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:02.947 02:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.515 02:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:03.515 02:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:03.515 02:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:03.515 02:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:03.515 02:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:03.515 02:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:03.516 02:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:03.516 02:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:03.775 [2024-07-11 02:20:53.998210] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:03.775 02:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:03.775 "name": "Existed_Raid", 00:15:03.775 "aliases": [ 00:15:03.775 "822b1fc3-d730-4bab-adb8-adcc3e1e84bf" 00:15:03.775 ], 00:15:03.775 "product_name": "Raid Volume", 00:15:03.775 "block_size": 512, 00:15:03.775 "num_blocks": 126976, 00:15:03.775 "uuid": "822b1fc3-d730-4bab-adb8-adcc3e1e84bf", 00:15:03.775 "assigned_rate_limits": { 00:15:03.775 "rw_ios_per_sec": 0, 00:15:03.775 "rw_mbytes_per_sec": 0, 00:15:03.775 "r_mbytes_per_sec": 0, 00:15:03.775 "w_mbytes_per_sec": 0 00:15:03.775 }, 00:15:03.775 "claimed": false, 00:15:03.775 "zoned": false, 00:15:03.775 
"supported_io_types": { 00:15:03.775 "read": true, 00:15:03.775 "write": true, 00:15:03.775 "unmap": true, 00:15:03.775 "flush": true, 00:15:03.775 "reset": true, 00:15:03.775 "nvme_admin": false, 00:15:03.775 "nvme_io": false, 00:15:03.775 "nvme_io_md": false, 00:15:03.775 "write_zeroes": true, 00:15:03.775 "zcopy": false, 00:15:03.775 "get_zone_info": false, 00:15:03.775 "zone_management": false, 00:15:03.775 "zone_append": false, 00:15:03.775 "compare": false, 00:15:03.775 "compare_and_write": false, 00:15:03.775 "abort": false, 00:15:03.775 "seek_hole": false, 00:15:03.775 "seek_data": false, 00:15:03.775 "copy": false, 00:15:03.775 "nvme_iov_md": false 00:15:03.775 }, 00:15:03.775 "memory_domains": [ 00:15:03.775 { 00:15:03.775 "dma_device_id": "system", 00:15:03.775 "dma_device_type": 1 00:15:03.775 }, 00:15:03.775 { 00:15:03.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.775 "dma_device_type": 2 00:15:03.775 }, 00:15:03.775 { 00:15:03.775 "dma_device_id": "system", 00:15:03.775 "dma_device_type": 1 00:15:03.775 }, 00:15:03.775 { 00:15:03.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.775 "dma_device_type": 2 00:15:03.775 } 00:15:03.775 ], 00:15:03.775 "driver_specific": { 00:15:03.775 "raid": { 00:15:03.775 "uuid": "822b1fc3-d730-4bab-adb8-adcc3e1e84bf", 00:15:03.775 "strip_size_kb": 64, 00:15:03.775 "state": "online", 00:15:03.775 "raid_level": "concat", 00:15:03.775 "superblock": true, 00:15:03.775 "num_base_bdevs": 2, 00:15:03.775 "num_base_bdevs_discovered": 2, 00:15:03.775 "num_base_bdevs_operational": 2, 00:15:03.775 "base_bdevs_list": [ 00:15:03.775 { 00:15:03.775 "name": "BaseBdev1", 00:15:03.775 "uuid": "c9cd9850-5704-4f61-b947-872f03c73dea", 00:15:03.775 "is_configured": true, 00:15:03.775 "data_offset": 2048, 00:15:03.775 "data_size": 63488 00:15:03.775 }, 00:15:03.775 { 00:15:03.775 "name": "BaseBdev2", 00:15:03.775 "uuid": "bb926919-b323-4abb-973b-a70c5b1f2c9b", 00:15:03.775 "is_configured": true, 00:15:03.775 "data_offset": 2048, 00:15:03.775 "data_size": 63488 00:15:03.775 } 00:15:03.775 ] 00:15:03.775 } 00:15:03.775 } 00:15:03.775 }' 00:15:03.775 02:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:03.775 02:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:03.775 BaseBdev2' 00:15:03.775 02:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:03.775 02:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:03.775 02:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:04.034 02:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:04.034 "name": "BaseBdev1", 00:15:04.034 "aliases": [ 00:15:04.034 "c9cd9850-5704-4f61-b947-872f03c73dea" 00:15:04.034 ], 00:15:04.034 "product_name": "Malloc disk", 00:15:04.034 "block_size": 512, 00:15:04.034 "num_blocks": 65536, 00:15:04.034 "uuid": "c9cd9850-5704-4f61-b947-872f03c73dea", 00:15:04.034 "assigned_rate_limits": { 00:15:04.034 "rw_ios_per_sec": 0, 00:15:04.034 "rw_mbytes_per_sec": 0, 00:15:04.034 "r_mbytes_per_sec": 0, 00:15:04.034 "w_mbytes_per_sec": 0 00:15:04.034 }, 00:15:04.034 "claimed": true, 00:15:04.034 "claim_type": "exclusive_write", 00:15:04.034 "zoned": 
false, 00:15:04.034 "supported_io_types": { 00:15:04.034 "read": true, 00:15:04.034 "write": true, 00:15:04.034 "unmap": true, 00:15:04.034 "flush": true, 00:15:04.034 "reset": true, 00:15:04.034 "nvme_admin": false, 00:15:04.034 "nvme_io": false, 00:15:04.034 "nvme_io_md": false, 00:15:04.034 "write_zeroes": true, 00:15:04.034 "zcopy": true, 00:15:04.034 "get_zone_info": false, 00:15:04.034 "zone_management": false, 00:15:04.034 "zone_append": false, 00:15:04.034 "compare": false, 00:15:04.034 "compare_and_write": false, 00:15:04.034 "abort": true, 00:15:04.034 "seek_hole": false, 00:15:04.034 "seek_data": false, 00:15:04.034 "copy": true, 00:15:04.034 "nvme_iov_md": false 00:15:04.034 }, 00:15:04.034 "memory_domains": [ 00:15:04.034 { 00:15:04.034 "dma_device_id": "system", 00:15:04.034 "dma_device_type": 1 00:15:04.034 }, 00:15:04.034 { 00:15:04.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.034 "dma_device_type": 2 00:15:04.034 } 00:15:04.034 ], 00:15:04.034 "driver_specific": {} 00:15:04.034 }' 00:15:04.034 02:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:04.034 02:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:04.034 02:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:04.034 02:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:04.292 02:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:04.292 02:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:04.292 02:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:04.292 02:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:04.292 02:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:04.292 02:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:04.292 02:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:04.292 02:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:04.292 02:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:04.292 02:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:04.292 02:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:04.551 02:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:04.551 "name": "BaseBdev2", 00:15:04.551 "aliases": [ 00:15:04.551 "bb926919-b323-4abb-973b-a70c5b1f2c9b" 00:15:04.551 ], 00:15:04.551 "product_name": "Malloc disk", 00:15:04.551 "block_size": 512, 00:15:04.551 "num_blocks": 65536, 00:15:04.551 "uuid": "bb926919-b323-4abb-973b-a70c5b1f2c9b", 00:15:04.551 "assigned_rate_limits": { 00:15:04.551 "rw_ios_per_sec": 0, 00:15:04.551 "rw_mbytes_per_sec": 0, 00:15:04.551 "r_mbytes_per_sec": 0, 00:15:04.551 "w_mbytes_per_sec": 0 00:15:04.551 }, 00:15:04.551 "claimed": true, 00:15:04.551 "claim_type": "exclusive_write", 00:15:04.551 "zoned": false, 00:15:04.551 "supported_io_types": { 00:15:04.551 "read": true, 00:15:04.551 "write": true, 00:15:04.551 "unmap": true, 
00:15:04.551 "flush": true, 00:15:04.551 "reset": true, 00:15:04.551 "nvme_admin": false, 00:15:04.551 "nvme_io": false, 00:15:04.551 "nvme_io_md": false, 00:15:04.551 "write_zeroes": true, 00:15:04.551 "zcopy": true, 00:15:04.551 "get_zone_info": false, 00:15:04.551 "zone_management": false, 00:15:04.551 "zone_append": false, 00:15:04.551 "compare": false, 00:15:04.551 "compare_and_write": false, 00:15:04.551 "abort": true, 00:15:04.551 "seek_hole": false, 00:15:04.551 "seek_data": false, 00:15:04.551 "copy": true, 00:15:04.551 "nvme_iov_md": false 00:15:04.551 }, 00:15:04.551 "memory_domains": [ 00:15:04.551 { 00:15:04.551 "dma_device_id": "system", 00:15:04.551 "dma_device_type": 1 00:15:04.551 }, 00:15:04.551 { 00:15:04.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.551 "dma_device_type": 2 00:15:04.551 } 00:15:04.551 ], 00:15:04.551 "driver_specific": {} 00:15:04.551 }' 00:15:04.551 02:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:04.810 02:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:04.810 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:04.810 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:04.810 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:04.810 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:04.810 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:04.810 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:04.810 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:04.810 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:05.072 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:05.072 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:05.072 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:05.429 [2024-07-11 02:20:55.522065] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:05.429 [2024-07-11 02:20:55.522090] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:05.429 [2024-07-11 02:20:55.522127] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.429 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:05.429 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:15:05.429 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:05.429 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:15:05.429 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:05.429 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:05.429 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:05.429 
02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:05.429 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:05.429 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:05.429 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:05.429 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:05.429 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:05.429 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:05.429 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:05.429 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.429 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.429 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:05.429 "name": "Existed_Raid", 00:15:05.429 "uuid": "822b1fc3-d730-4bab-adb8-adcc3e1e84bf", 00:15:05.429 "strip_size_kb": 64, 00:15:05.429 "state": "offline", 00:15:05.429 "raid_level": "concat", 00:15:05.429 "superblock": true, 00:15:05.430 "num_base_bdevs": 2, 00:15:05.430 "num_base_bdevs_discovered": 1, 00:15:05.430 "num_base_bdevs_operational": 1, 00:15:05.430 "base_bdevs_list": [ 00:15:05.430 { 00:15:05.430 "name": null, 00:15:05.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.430 "is_configured": false, 00:15:05.430 "data_offset": 2048, 00:15:05.430 "data_size": 63488 00:15:05.430 }, 00:15:05.430 { 00:15:05.430 "name": "BaseBdev2", 00:15:05.430 "uuid": "bb926919-b323-4abb-973b-a70c5b1f2c9b", 00:15:05.430 "is_configured": true, 00:15:05.430 "data_offset": 2048, 00:15:05.430 "data_size": 63488 00:15:05.430 } 00:15:05.430 ] 00:15:05.430 }' 00:15:05.430 02:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:05.430 02:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.997 02:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:05.997 02:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:05.997 02:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:05.997 02:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.255 02:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:06.255 02:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:06.255 02:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:06.514 [2024-07-11 02:20:56.890684] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:06.514 [2024-07-11 
02:20:56.890729] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2566b70 name Existed_Raid, state offline 00:15:06.514 02:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:06.514 02:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:06.514 02:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.514 02:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:06.773 02:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:06.773 02:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:06.773 02:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:06.773 02:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1899801 00:15:06.773 02:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1899801 ']' 00:15:06.773 02:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 1899801 00:15:06.773 02:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:15:06.773 02:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:06.773 02:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1899801 00:15:07.032 02:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:07.032 02:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:07.032 02:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1899801' 00:15:07.032 killing process with pid 1899801 00:15:07.032 02:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 1899801 00:15:07.032 [2024-07-11 02:20:57.226353] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:07.032 02:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 1899801 00:15:07.032 [2024-07-11 02:20:57.227305] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:07.032 02:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:07.032 00:15:07.032 real 0m10.752s 00:15:07.032 user 0m19.045s 00:15:07.032 sys 0m2.079s 00:15:07.032 02:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:07.032 02:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.032 ************************************ 00:15:07.032 END TEST raid_state_function_test_sb 00:15:07.032 ************************************ 00:15:07.291 02:20:57 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:07.291 02:20:57 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:15:07.291 02:20:57 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:07.291 02:20:57 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:07.291 02:20:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
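Stripped of the assertions and xtrace noise, the raid_state_function_test_sb run above reduces to a short RPC sequence against the bdev_svc app listening on /var/tmp/spdk-raid.sock. What follows is a condensed sketch reconstructed from the trace, not the test script itself; it assumes the app is already running and omits the delete/re-create cycles and the per-step jq state checks that the test repeats after every transition.

RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Creating the array before its base bdevs exist parks it in state "configuring";
# num_base_bdevs_discovered stays 0 until the named bdevs appear.
$RPC bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

# Each base bdev is a 32 MiB malloc disk (65536 blocks of 512 bytes, matching the
# num_blocks/block_size in the dumps above). Once both exist and are claimed,
# the array transitions to "online".
$RPC bdev_malloc_create 32 512 -b BaseBdev1
$RPC bdev_malloc_create 32 512 -b BaseBdev2

# Every state assertion reads the array back and filters the JSON with jq.
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

# concat carries no redundancy (has_redundancy returns 1 above), so deleting one
# base bdev drops the array from "online" to "offline".
$RPC bdev_malloc_delete BaseBdev1

The -s (superblock) flag is also why configured base bdevs report data_offset 2048 and data_size 63488 out of 65536 blocks: the leading 1 MiB of each leg is reserved for the on-disk raid metadata.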
00:15:07.291 ************************************ 00:15:07.291 START TEST raid_superblock_test 00:15:07.291 ************************************ 00:15:07.291 02:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 2 00:15:07.291 02:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:15:07.291 02:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:15:07.291 02:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:15:07.291 02:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:15:07.291 02:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:15:07.291 02:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:15:07.291 02:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:15:07.291 02:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:15:07.291 02:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:15:07.291 02:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:15:07.291 02:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:15:07.291 02:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:15:07.291 02:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:15:07.291 02:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:15:07.291 02:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:15:07.291 02:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:15:07.291 02:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1901572 00:15:07.291 02:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1901572 /var/tmp/spdk-raid.sock 00:15:07.291 02:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:07.291 02:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 1901572 ']' 00:15:07.291 02:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:07.291 02:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:07.291 02:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:07.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:07.291 02:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:07.291 02:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.291 [2024-07-11 02:20:57.587893] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
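Once this bdev_svc instance finishes starting, raid_superblock_test exercises the on-disk superblock path. Below is a condensed sketch of the RPC flow that the trace after this point records (same rpc.py socket; the passthru names and zeroed UUIDs are the test's own fixed choices, pinned so the superblock contents are deterministic).

RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Each leg is a malloc disk wrapped in a passthru bdev with a pinned UUID.
$RPC bdev_malloc_create 32 512 -b malloc1
$RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$RPC bdev_malloc_create 32 512 -b malloc2
$RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

# -s writes a superblock onto the base bdevs while assembling the array.
$RPC bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s

# After tearing down the array and the passthru layer, assembling a new array
# straight from the malloc disks must fail: each disk still carries the
# superblock of a different raid bdev, so the RPC returns JSON-RPC error -17
# ("Failed to create RAID bdev raid_bdev1: File exists").
$RPC bdev_raid_delete raid_bdev1
$RPC bdev_passthru_delete pt1
$RPC bdev_passthru_delete pt2
$RPC bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 || echo 'rejected as expected'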
00:15:07.292 [2024-07-11 02:20:57.587965] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1901572 ] 00:15:07.550 [2024-07-11 02:20:57.726189] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.550 [2024-07-11 02:20:57.779146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.551 [2024-07-11 02:20:57.839553] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:07.551 [2024-07-11 02:20:57.839581] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.118 02:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:08.118 02:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:15:08.118 02:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:15:08.118 02:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:08.118 02:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:15:08.118 02:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:15:08.118 02:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:08.118 02:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:08.118 02:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:08.118 02:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:08.118 02:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:08.378 malloc1 00:15:08.378 02:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:08.636 [2024-07-11 02:20:58.949781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:08.636 [2024-07-11 02:20:58.949828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.636 [2024-07-11 02:20:58.949847] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15e6de0 00:15:08.636 [2024-07-11 02:20:58.949860] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.636 [2024-07-11 02:20:58.951383] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.636 [2024-07-11 02:20:58.951410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:08.636 pt1 00:15:08.636 02:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:08.636 02:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:08.637 02:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:15:08.637 02:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:15:08.637 02:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # 
local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:08.637 02:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:08.637 02:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:08.637 02:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:08.637 02:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:08.922 malloc2 00:15:08.922 02:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:09.180 [2024-07-11 02:20:59.455749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:09.180 [2024-07-11 02:20:59.455793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.180 [2024-07-11 02:20:59.455809] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15de380 00:15:09.180 [2024-07-11 02:20:59.455821] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.180 [2024-07-11 02:20:59.457129] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.180 [2024-07-11 02:20:59.457153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:09.180 pt2 00:15:09.180 02:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:09.180 02:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:09.180 02:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:15:09.437 [2024-07-11 02:20:59.704438] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:09.437 [2024-07-11 02:20:59.705661] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:09.437 [2024-07-11 02:20:59.705805] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x15e89e0 00:15:09.437 [2024-07-11 02:20:59.705817] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:09.437 [2024-07-11 02:20:59.706001] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15dfa70 00:15:09.437 [2024-07-11 02:20:59.706136] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15e89e0 00:15:09.437 [2024-07-11 02:20:59.706146] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x15e89e0 00:15:09.437 [2024-07-11 02:20:59.706236] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.437 02:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:09.437 02:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:09.437 02:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:09.437 02:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:09.437 02:20:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:09.437 02:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:09.437 02:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:09.437 02:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:09.437 02:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:09.437 02:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:09.437 02:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.437 02:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.695 02:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:09.695 "name": "raid_bdev1", 00:15:09.695 "uuid": "40d6d0f2-e68b-4add-a835-f4ac35ae34fd", 00:15:09.695 "strip_size_kb": 64, 00:15:09.695 "state": "online", 00:15:09.695 "raid_level": "concat", 00:15:09.695 "superblock": true, 00:15:09.695 "num_base_bdevs": 2, 00:15:09.695 "num_base_bdevs_discovered": 2, 00:15:09.695 "num_base_bdevs_operational": 2, 00:15:09.695 "base_bdevs_list": [ 00:15:09.695 { 00:15:09.695 "name": "pt1", 00:15:09.695 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:09.695 "is_configured": true, 00:15:09.695 "data_offset": 2048, 00:15:09.695 "data_size": 63488 00:15:09.695 }, 00:15:09.695 { 00:15:09.695 "name": "pt2", 00:15:09.695 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:09.695 "is_configured": true, 00:15:09.695 "data_offset": 2048, 00:15:09.695 "data_size": 63488 00:15:09.695 } 00:15:09.695 ] 00:15:09.695 }' 00:15:09.695 02:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:09.695 02:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.262 02:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:15:10.262 02:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:10.262 02:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:10.262 02:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:10.262 02:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:10.262 02:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:10.262 02:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:10.262 02:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:10.520 [2024-07-11 02:21:00.779522] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:10.520 02:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:10.520 "name": "raid_bdev1", 00:15:10.520 "aliases": [ 00:15:10.520 "40d6d0f2-e68b-4add-a835-f4ac35ae34fd" 00:15:10.520 ], 00:15:10.520 "product_name": "Raid Volume", 00:15:10.520 "block_size": 512, 00:15:10.520 "num_blocks": 126976, 00:15:10.520 "uuid": 
"40d6d0f2-e68b-4add-a835-f4ac35ae34fd", 00:15:10.520 "assigned_rate_limits": { 00:15:10.520 "rw_ios_per_sec": 0, 00:15:10.520 "rw_mbytes_per_sec": 0, 00:15:10.520 "r_mbytes_per_sec": 0, 00:15:10.520 "w_mbytes_per_sec": 0 00:15:10.520 }, 00:15:10.520 "claimed": false, 00:15:10.520 "zoned": false, 00:15:10.520 "supported_io_types": { 00:15:10.520 "read": true, 00:15:10.520 "write": true, 00:15:10.520 "unmap": true, 00:15:10.520 "flush": true, 00:15:10.520 "reset": true, 00:15:10.520 "nvme_admin": false, 00:15:10.520 "nvme_io": false, 00:15:10.520 "nvme_io_md": false, 00:15:10.520 "write_zeroes": true, 00:15:10.520 "zcopy": false, 00:15:10.520 "get_zone_info": false, 00:15:10.520 "zone_management": false, 00:15:10.520 "zone_append": false, 00:15:10.520 "compare": false, 00:15:10.520 "compare_and_write": false, 00:15:10.520 "abort": false, 00:15:10.520 "seek_hole": false, 00:15:10.520 "seek_data": false, 00:15:10.520 "copy": false, 00:15:10.520 "nvme_iov_md": false 00:15:10.520 }, 00:15:10.520 "memory_domains": [ 00:15:10.520 { 00:15:10.520 "dma_device_id": "system", 00:15:10.520 "dma_device_type": 1 00:15:10.520 }, 00:15:10.520 { 00:15:10.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.520 "dma_device_type": 2 00:15:10.520 }, 00:15:10.520 { 00:15:10.520 "dma_device_id": "system", 00:15:10.520 "dma_device_type": 1 00:15:10.520 }, 00:15:10.520 { 00:15:10.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.520 "dma_device_type": 2 00:15:10.520 } 00:15:10.520 ], 00:15:10.520 "driver_specific": { 00:15:10.520 "raid": { 00:15:10.520 "uuid": "40d6d0f2-e68b-4add-a835-f4ac35ae34fd", 00:15:10.520 "strip_size_kb": 64, 00:15:10.520 "state": "online", 00:15:10.520 "raid_level": "concat", 00:15:10.520 "superblock": true, 00:15:10.520 "num_base_bdevs": 2, 00:15:10.520 "num_base_bdevs_discovered": 2, 00:15:10.520 "num_base_bdevs_operational": 2, 00:15:10.520 "base_bdevs_list": [ 00:15:10.520 { 00:15:10.520 "name": "pt1", 00:15:10.520 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.520 "is_configured": true, 00:15:10.520 "data_offset": 2048, 00:15:10.520 "data_size": 63488 00:15:10.520 }, 00:15:10.520 { 00:15:10.520 "name": "pt2", 00:15:10.520 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.520 "is_configured": true, 00:15:10.520 "data_offset": 2048, 00:15:10.520 "data_size": 63488 00:15:10.520 } 00:15:10.520 ] 00:15:10.520 } 00:15:10.520 } 00:15:10.520 }' 00:15:10.520 02:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:10.521 02:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:10.521 pt2' 00:15:10.521 02:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:10.521 02:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:10.521 02:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:10.779 02:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:10.779 "name": "pt1", 00:15:10.779 "aliases": [ 00:15:10.779 "00000000-0000-0000-0000-000000000001" 00:15:10.779 ], 00:15:10.779 "product_name": "passthru", 00:15:10.779 "block_size": 512, 00:15:10.779 "num_blocks": 65536, 00:15:10.779 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.779 "assigned_rate_limits": { 00:15:10.779 
"rw_ios_per_sec": 0, 00:15:10.779 "rw_mbytes_per_sec": 0, 00:15:10.779 "r_mbytes_per_sec": 0, 00:15:10.779 "w_mbytes_per_sec": 0 00:15:10.779 }, 00:15:10.779 "claimed": true, 00:15:10.779 "claim_type": "exclusive_write", 00:15:10.779 "zoned": false, 00:15:10.779 "supported_io_types": { 00:15:10.779 "read": true, 00:15:10.779 "write": true, 00:15:10.779 "unmap": true, 00:15:10.779 "flush": true, 00:15:10.779 "reset": true, 00:15:10.779 "nvme_admin": false, 00:15:10.779 "nvme_io": false, 00:15:10.779 "nvme_io_md": false, 00:15:10.779 "write_zeroes": true, 00:15:10.779 "zcopy": true, 00:15:10.779 "get_zone_info": false, 00:15:10.779 "zone_management": false, 00:15:10.779 "zone_append": false, 00:15:10.779 "compare": false, 00:15:10.779 "compare_and_write": false, 00:15:10.779 "abort": true, 00:15:10.779 "seek_hole": false, 00:15:10.779 "seek_data": false, 00:15:10.779 "copy": true, 00:15:10.779 "nvme_iov_md": false 00:15:10.779 }, 00:15:10.779 "memory_domains": [ 00:15:10.779 { 00:15:10.779 "dma_device_id": "system", 00:15:10.779 "dma_device_type": 1 00:15:10.779 }, 00:15:10.779 { 00:15:10.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.779 "dma_device_type": 2 00:15:10.779 } 00:15:10.779 ], 00:15:10.779 "driver_specific": { 00:15:10.779 "passthru": { 00:15:10.779 "name": "pt1", 00:15:10.779 "base_bdev_name": "malloc1" 00:15:10.779 } 00:15:10.779 } 00:15:10.779 }' 00:15:10.779 02:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:10.779 02:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:10.779 02:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:10.779 02:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:11.038 02:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:11.039 02:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:11.039 02:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:11.039 02:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:11.039 02:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:11.039 02:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:11.039 02:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:11.039 02:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:11.039 02:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:11.039 02:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:11.039 02:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:11.298 02:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:11.298 "name": "pt2", 00:15:11.298 "aliases": [ 00:15:11.298 "00000000-0000-0000-0000-000000000002" 00:15:11.298 ], 00:15:11.298 "product_name": "passthru", 00:15:11.298 "block_size": 512, 00:15:11.298 "num_blocks": 65536, 00:15:11.298 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.298 "assigned_rate_limits": { 00:15:11.298 "rw_ios_per_sec": 0, 00:15:11.298 "rw_mbytes_per_sec": 0, 00:15:11.298 "r_mbytes_per_sec": 0, 00:15:11.298 "w_mbytes_per_sec": 0 
00:15:11.298 }, 00:15:11.298 "claimed": true, 00:15:11.298 "claim_type": "exclusive_write", 00:15:11.298 "zoned": false, 00:15:11.298 "supported_io_types": { 00:15:11.298 "read": true, 00:15:11.298 "write": true, 00:15:11.298 "unmap": true, 00:15:11.298 "flush": true, 00:15:11.298 "reset": true, 00:15:11.298 "nvme_admin": false, 00:15:11.298 "nvme_io": false, 00:15:11.298 "nvme_io_md": false, 00:15:11.298 "write_zeroes": true, 00:15:11.298 "zcopy": true, 00:15:11.298 "get_zone_info": false, 00:15:11.298 "zone_management": false, 00:15:11.298 "zone_append": false, 00:15:11.298 "compare": false, 00:15:11.298 "compare_and_write": false, 00:15:11.298 "abort": true, 00:15:11.298 "seek_hole": false, 00:15:11.298 "seek_data": false, 00:15:11.298 "copy": true, 00:15:11.298 "nvme_iov_md": false 00:15:11.298 }, 00:15:11.298 "memory_domains": [ 00:15:11.298 { 00:15:11.298 "dma_device_id": "system", 00:15:11.298 "dma_device_type": 1 00:15:11.298 }, 00:15:11.298 { 00:15:11.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.298 "dma_device_type": 2 00:15:11.298 } 00:15:11.298 ], 00:15:11.298 "driver_specific": { 00:15:11.298 "passthru": { 00:15:11.298 "name": "pt2", 00:15:11.298 "base_bdev_name": "malloc2" 00:15:11.298 } 00:15:11.298 } 00:15:11.298 }' 00:15:11.298 02:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:11.557 02:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:11.557 02:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:11.557 02:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:11.557 02:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:11.557 02:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:11.557 02:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:11.557 02:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:11.557 02:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:11.557 02:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:11.816 02:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:11.816 02:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:11.816 02:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:11.816 02:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:15:12.074 [2024-07-11 02:21:02.275481] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:12.074 02:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=40d6d0f2-e68b-4add-a835-f4ac35ae34fd 00:15:12.074 02:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 40d6d0f2-e68b-4add-a835-f4ac35ae34fd ']' 00:15:12.074 02:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:12.333 [2024-07-11 02:21:02.519890] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:12.333 [2024-07-11 02:21:02.519913] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:15:12.333 [2024-07-11 02:21:02.519970] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.333 [2024-07-11 02:21:02.520016] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:12.333 [2024-07-11 02:21:02.520027] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15e89e0 name raid_bdev1, state offline 00:15:12.333 02:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.333 02:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:15:12.592 02:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:15:12.592 02:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:15:12.592 02:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:12.592 02:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:12.851 02:21:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:12.851 02:21:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:12.851 02:21:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:12.851 02:21:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:13.109 02:21:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:15:13.109 02:21:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:13.109 02:21:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:15:13.109 02:21:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:13.109 02:21:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:13.109 02:21:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:13.109 02:21:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:13.109 02:21:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:13.109 02:21:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:13.109 02:21:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:13.109 02:21:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:13.109 02:21:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:15:13.109 02:21:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:13.368 [2024-07-11 02:21:03.747114] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:13.368 [2024-07-11 02:21:03.748437] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:13.368 [2024-07-11 02:21:03.748493] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:13.368 [2024-07-11 02:21:03.748532] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:13.368 [2024-07-11 02:21:03.748550] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:13.368 [2024-07-11 02:21:03.748559] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15e1450 name raid_bdev1, state configuring 00:15:13.368 request: 00:15:13.368 { 00:15:13.368 "name": "raid_bdev1", 00:15:13.368 "raid_level": "concat", 00:15:13.368 "base_bdevs": [ 00:15:13.368 "malloc1", 00:15:13.368 "malloc2" 00:15:13.368 ], 00:15:13.368 "strip_size_kb": 64, 00:15:13.368 "superblock": false, 00:15:13.368 "method": "bdev_raid_create", 00:15:13.368 "req_id": 1 00:15:13.368 } 00:15:13.368 Got JSON-RPC error response 00:15:13.368 response: 00:15:13.368 { 00:15:13.368 "code": -17, 00:15:13.368 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:13.368 } 00:15:13.368 02:21:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:15:13.368 02:21:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:13.368 02:21:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:13.368 02:21:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:13.368 02:21:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:15:13.368 02:21:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.936 02:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:15:13.936 02:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:15:13.936 02:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:14.195 [2024-07-11 02:21:04.517065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:14.195 [2024-07-11 02:21:04.517119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.195 [2024-07-11 02:21:04.517142] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15dfbd0 00:15:14.195 [2024-07-11 02:21:04.517156] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.195 [2024-07-11 02:21:04.518792] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.195 [2024-07-11 02:21:04.518831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:14.195 [2024-07-11 02:21:04.518917] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:14.195 [2024-07-11 02:21:04.518945] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:14.195 pt1 00:15:14.195 02:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:15:14.195 02:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:14.195 02:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:14.195 02:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:14.195 02:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:14.195 02:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:14.195 02:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:14.195 02:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:14.195 02:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:14.195 02:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:14.195 02:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.195 02:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.454 02:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:14.454 "name": "raid_bdev1", 00:15:14.454 "uuid": "40d6d0f2-e68b-4add-a835-f4ac35ae34fd", 00:15:14.454 "strip_size_kb": 64, 00:15:14.454 "state": "configuring", 00:15:14.454 "raid_level": "concat", 00:15:14.454 "superblock": true, 00:15:14.454 "num_base_bdevs": 2, 00:15:14.454 "num_base_bdevs_discovered": 1, 00:15:14.454 "num_base_bdevs_operational": 2, 00:15:14.454 "base_bdevs_list": [ 00:15:14.454 { 00:15:14.454 "name": "pt1", 00:15:14.454 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:14.454 "is_configured": true, 00:15:14.454 "data_offset": 2048, 00:15:14.454 "data_size": 63488 00:15:14.454 }, 00:15:14.454 { 00:15:14.454 "name": null, 00:15:14.454 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:14.454 "is_configured": false, 00:15:14.454 "data_offset": 2048, 00:15:14.454 "data_size": 63488 00:15:14.454 } 00:15:14.454 ] 00:15:14.454 }' 00:15:14.454 02:21:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:14.454 02:21:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.021 02:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:15:15.021 02:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:15:15.021 02:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:15.021 02:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:15.280 [2024-07-11 02:21:05.636032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:15.280 [2024-07-11 02:21:05.636091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.280 [2024-07-11 02:21:05.636113] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15e3270 00:15:15.280 [2024-07-11 02:21:05.636126] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.280 [2024-07-11 02:21:05.636473] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.280 [2024-07-11 02:21:05.636499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:15.280 [2024-07-11 02:21:05.636566] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:15.280 [2024-07-11 02:21:05.636585] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:15.280 [2024-07-11 02:21:05.636679] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1433900 00:15:15.280 [2024-07-11 02:21:05.636689] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:15.280 [2024-07-11 02:21:05.636864] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14cc360 00:15:15.280 [2024-07-11 02:21:05.636985] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1433900 00:15:15.280 [2024-07-11 02:21:05.636994] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1433900 00:15:15.280 [2024-07-11 02:21:05.637093] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.280 pt2 00:15:15.280 02:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:15.280 02:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:15.280 02:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:15.280 02:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:15.280 02:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:15.280 02:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:15.280 02:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:15.280 02:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:15.280 02:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:15.280 02:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:15.280 02:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:15.280 02:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:15.280 02:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.281 02:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.540 02:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:15:15.540 "name": "raid_bdev1", 00:15:15.540 "uuid": "40d6d0f2-e68b-4add-a835-f4ac35ae34fd", 00:15:15.540 "strip_size_kb": 64, 00:15:15.540 "state": "online", 00:15:15.540 "raid_level": "concat", 00:15:15.540 "superblock": true, 00:15:15.540 "num_base_bdevs": 2, 00:15:15.540 "num_base_bdevs_discovered": 2, 00:15:15.540 "num_base_bdevs_operational": 2, 00:15:15.540 "base_bdevs_list": [ 00:15:15.540 { 00:15:15.540 "name": "pt1", 00:15:15.540 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:15.540 "is_configured": true, 00:15:15.540 "data_offset": 2048, 00:15:15.540 "data_size": 63488 00:15:15.540 }, 00:15:15.540 { 00:15:15.540 "name": "pt2", 00:15:15.540 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:15.540 "is_configured": true, 00:15:15.540 "data_offset": 2048, 00:15:15.540 "data_size": 63488 00:15:15.540 } 00:15:15.540 ] 00:15:15.540 }' 00:15:15.540 02:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:15.540 02:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.107 02:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:15:16.107 02:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:16.107 02:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:16.107 02:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:16.107 02:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:16.107 02:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:16.107 02:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:16.107 02:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:16.366 [2024-07-11 02:21:06.739208] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:16.366 02:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:16.366 "name": "raid_bdev1", 00:15:16.366 "aliases": [ 00:15:16.366 "40d6d0f2-e68b-4add-a835-f4ac35ae34fd" 00:15:16.366 ], 00:15:16.366 "product_name": "Raid Volume", 00:15:16.366 "block_size": 512, 00:15:16.366 "num_blocks": 126976, 00:15:16.366 "uuid": "40d6d0f2-e68b-4add-a835-f4ac35ae34fd", 00:15:16.366 "assigned_rate_limits": { 00:15:16.366 "rw_ios_per_sec": 0, 00:15:16.366 "rw_mbytes_per_sec": 0, 00:15:16.366 "r_mbytes_per_sec": 0, 00:15:16.366 "w_mbytes_per_sec": 0 00:15:16.366 }, 00:15:16.366 "claimed": false, 00:15:16.366 "zoned": false, 00:15:16.366 "supported_io_types": { 00:15:16.366 "read": true, 00:15:16.366 "write": true, 00:15:16.366 "unmap": true, 00:15:16.366 "flush": true, 00:15:16.366 "reset": true, 00:15:16.366 "nvme_admin": false, 00:15:16.366 "nvme_io": false, 00:15:16.366 "nvme_io_md": false, 00:15:16.366 "write_zeroes": true, 00:15:16.366 "zcopy": false, 00:15:16.366 "get_zone_info": false, 00:15:16.366 "zone_management": false, 00:15:16.366 "zone_append": false, 00:15:16.366 "compare": false, 00:15:16.366 "compare_and_write": false, 00:15:16.366 "abort": false, 00:15:16.366 "seek_hole": false, 00:15:16.366 "seek_data": false, 00:15:16.366 "copy": false, 00:15:16.366 "nvme_iov_md": false 00:15:16.366 }, 00:15:16.366 "memory_domains": [ 00:15:16.366 { 00:15:16.366 "dma_device_id": 
"system", 00:15:16.366 "dma_device_type": 1 00:15:16.366 }, 00:15:16.366 { 00:15:16.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.366 "dma_device_type": 2 00:15:16.366 }, 00:15:16.366 { 00:15:16.366 "dma_device_id": "system", 00:15:16.366 "dma_device_type": 1 00:15:16.366 }, 00:15:16.366 { 00:15:16.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.366 "dma_device_type": 2 00:15:16.366 } 00:15:16.366 ], 00:15:16.366 "driver_specific": { 00:15:16.366 "raid": { 00:15:16.366 "uuid": "40d6d0f2-e68b-4add-a835-f4ac35ae34fd", 00:15:16.366 "strip_size_kb": 64, 00:15:16.366 "state": "online", 00:15:16.366 "raid_level": "concat", 00:15:16.366 "superblock": true, 00:15:16.366 "num_base_bdevs": 2, 00:15:16.366 "num_base_bdevs_discovered": 2, 00:15:16.366 "num_base_bdevs_operational": 2, 00:15:16.366 "base_bdevs_list": [ 00:15:16.366 { 00:15:16.366 "name": "pt1", 00:15:16.366 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:16.366 "is_configured": true, 00:15:16.366 "data_offset": 2048, 00:15:16.366 "data_size": 63488 00:15:16.366 }, 00:15:16.366 { 00:15:16.366 "name": "pt2", 00:15:16.366 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:16.366 "is_configured": true, 00:15:16.366 "data_offset": 2048, 00:15:16.366 "data_size": 63488 00:15:16.366 } 00:15:16.366 ] 00:15:16.366 } 00:15:16.366 } 00:15:16.366 }' 00:15:16.366 02:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:16.625 02:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:16.625 pt2' 00:15:16.625 02:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:16.625 02:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:16.625 02:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:16.625 02:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:16.625 "name": "pt1", 00:15:16.625 "aliases": [ 00:15:16.625 "00000000-0000-0000-0000-000000000001" 00:15:16.625 ], 00:15:16.625 "product_name": "passthru", 00:15:16.625 "block_size": 512, 00:15:16.625 "num_blocks": 65536, 00:15:16.625 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:16.625 "assigned_rate_limits": { 00:15:16.625 "rw_ios_per_sec": 0, 00:15:16.625 "rw_mbytes_per_sec": 0, 00:15:16.625 "r_mbytes_per_sec": 0, 00:15:16.625 "w_mbytes_per_sec": 0 00:15:16.625 }, 00:15:16.625 "claimed": true, 00:15:16.625 "claim_type": "exclusive_write", 00:15:16.625 "zoned": false, 00:15:16.625 "supported_io_types": { 00:15:16.625 "read": true, 00:15:16.625 "write": true, 00:15:16.625 "unmap": true, 00:15:16.625 "flush": true, 00:15:16.625 "reset": true, 00:15:16.625 "nvme_admin": false, 00:15:16.625 "nvme_io": false, 00:15:16.625 "nvme_io_md": false, 00:15:16.625 "write_zeroes": true, 00:15:16.625 "zcopy": true, 00:15:16.625 "get_zone_info": false, 00:15:16.625 "zone_management": false, 00:15:16.625 "zone_append": false, 00:15:16.625 "compare": false, 00:15:16.625 "compare_and_write": false, 00:15:16.625 "abort": true, 00:15:16.625 "seek_hole": false, 00:15:16.625 "seek_data": false, 00:15:16.625 "copy": true, 00:15:16.625 "nvme_iov_md": false 00:15:16.625 }, 00:15:16.625 "memory_domains": [ 00:15:16.625 { 00:15:16.625 "dma_device_id": "system", 00:15:16.625 "dma_device_type": 1 00:15:16.625 }, 
00:15:16.625 { 00:15:16.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.625 "dma_device_type": 2 00:15:16.625 } 00:15:16.625 ], 00:15:16.625 "driver_specific": { 00:15:16.625 "passthru": { 00:15:16.625 "name": "pt1", 00:15:16.625 "base_bdev_name": "malloc1" 00:15:16.625 } 00:15:16.625 } 00:15:16.625 }' 00:15:16.625 02:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:16.625 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:16.884 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:16.884 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:16.884 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:16.884 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:16.884 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:16.884 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:16.884 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:16.884 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:16.884 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:17.143 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:17.143 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:17.143 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:17.143 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:17.143 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:17.143 "name": "pt2", 00:15:17.143 "aliases": [ 00:15:17.143 "00000000-0000-0000-0000-000000000002" 00:15:17.143 ], 00:15:17.143 "product_name": "passthru", 00:15:17.143 "block_size": 512, 00:15:17.143 "num_blocks": 65536, 00:15:17.143 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:17.143 "assigned_rate_limits": { 00:15:17.143 "rw_ios_per_sec": 0, 00:15:17.143 "rw_mbytes_per_sec": 0, 00:15:17.143 "r_mbytes_per_sec": 0, 00:15:17.143 "w_mbytes_per_sec": 0 00:15:17.143 }, 00:15:17.143 "claimed": true, 00:15:17.143 "claim_type": "exclusive_write", 00:15:17.143 "zoned": false, 00:15:17.143 "supported_io_types": { 00:15:17.143 "read": true, 00:15:17.143 "write": true, 00:15:17.143 "unmap": true, 00:15:17.143 "flush": true, 00:15:17.143 "reset": true, 00:15:17.143 "nvme_admin": false, 00:15:17.143 "nvme_io": false, 00:15:17.143 "nvme_io_md": false, 00:15:17.143 "write_zeroes": true, 00:15:17.143 "zcopy": true, 00:15:17.143 "get_zone_info": false, 00:15:17.143 "zone_management": false, 00:15:17.143 "zone_append": false, 00:15:17.143 "compare": false, 00:15:17.143 "compare_and_write": false, 00:15:17.143 "abort": true, 00:15:17.143 "seek_hole": false, 00:15:17.143 "seek_data": false, 00:15:17.143 "copy": true, 00:15:17.143 "nvme_iov_md": false 00:15:17.143 }, 00:15:17.143 "memory_domains": [ 00:15:17.143 { 00:15:17.143 "dma_device_id": "system", 00:15:17.143 "dma_device_type": 1 00:15:17.143 }, 00:15:17.143 { 00:15:17.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.143 "dma_device_type": 2 00:15:17.143 } 00:15:17.143 ], 
00:15:17.143 "driver_specific": { 00:15:17.143 "passthru": { 00:15:17.143 "name": "pt2", 00:15:17.143 "base_bdev_name": "malloc2" 00:15:17.143 } 00:15:17.143 } 00:15:17.143 }' 00:15:17.143 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:17.143 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:17.403 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:17.403 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:17.403 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:17.403 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:17.403 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:17.403 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:17.403 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:17.403 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:17.403 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:17.662 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:17.662 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:17.662 02:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:15:17.662 [2024-07-11 02:21:08.074749] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:17.922 02:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 40d6d0f2-e68b-4add-a835-f4ac35ae34fd '!=' 40d6d0f2-e68b-4add-a835-f4ac35ae34fd ']' 00:15:17.922 02:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:15:17.922 02:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:17.922 02:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:17.922 02:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1901572 00:15:17.922 02:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 1901572 ']' 00:15:17.922 02:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 1901572 00:15:17.922 02:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:15:17.922 02:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:17.922 02:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1901572 00:15:17.922 02:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:17.922 02:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:17.922 02:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1901572' 00:15:17.922 killing process with pid 1901572 00:15:17.922 02:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 1901572 00:15:17.922 [2024-07-11 02:21:08.151730] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:17.922 [2024-07-11 
02:21:08.151790] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:17.922 [2024-07-11 02:21:08.151833] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:17.922 [2024-07-11 02:21:08.151845] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1433900 name raid_bdev1, state offline 00:15:17.922 02:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 1901572 00:15:17.922 [2024-07-11 02:21:08.169578] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:18.181 02:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:15:18.181 00:15:18.181 real 0m10.845s 00:15:18.181 user 0m19.361s 00:15:18.181 sys 0m2.028s 00:15:18.181 02:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:18.181 02:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.181 ************************************ 00:15:18.181 END TEST raid_superblock_test 00:15:18.181 ************************************ 00:15:18.181 02:21:08 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:18.181 02:21:08 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:15:18.181 02:21:08 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:18.181 02:21:08 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:18.181 02:21:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:18.181 ************************************ 00:15:18.181 START TEST raid_read_error_test 00:15:18.181 ************************************ 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 read 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:18.181 02:21:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.QmPbkM6Idn 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1903712 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1903712 /var/tmp/spdk-raid.sock 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 1903712 ']' 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:18.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:18.181 02:21:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.181 [2024-07-11 02:21:08.539706] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
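Once this bdevperf instance is up, the trace below assembles the bdev stack it exercises: each malloc base bdev is wrapped in an error-injection bdev and then a passthru bdev before the concat RAID is created over the passthru pair. A condensed sketch of that RPC sequence, with the commands, flags, and bdev names taken from the trace itself (the workspace paths are shortened here; the log uses the absolute /var/jenkins/workspace/crypto-phy-autotest/spdk paths):

  rpc="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # 32 MiB malloc bdev with 512-byte blocks, then an error bdev on top
  # (which appears as EE_BaseBdev1_malloc in this trace), then a passthru bdev
  $rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc
  $rpc bdev_error_create BaseBdev1_malloc
  $rpc bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
  $rpc bdev_malloc_create 32 512 -b BaseBdev2_malloc
  $rpc bdev_error_create BaseBdev2_malloc
  $rpc bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
  # concat RAID over the two passthru bdevs, 64 KiB strip, with superblock (-s)
  $rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s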
00:15:18.181 [2024-07-11 02:21:08.539789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1903712 ] 00:15:18.441 [2024-07-11 02:21:08.680382] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.441 [2024-07-11 02:21:08.733302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.441 [2024-07-11 02:21:08.792653] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.441 [2024-07-11 02:21:08.792686] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:19.007 02:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:19.007 02:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:19.007 02:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:19.007 02:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:19.266 BaseBdev1_malloc 00:15:19.266 02:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:19.524 true 00:15:19.524 02:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:19.783 [2024-07-11 02:21:10.137439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:19.783 [2024-07-11 02:21:10.137490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.783 [2024-07-11 02:21:10.137510] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28cd330 00:15:19.783 [2024-07-11 02:21:10.137522] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.783 [2024-07-11 02:21:10.139283] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.783 [2024-07-11 02:21:10.139313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:19.783 BaseBdev1 00:15:19.783 02:21:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:19.783 02:21:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:20.041 BaseBdev2_malloc 00:15:20.041 02:21:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:20.300 true 00:15:20.300 02:21:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:20.559 [2024-07-11 02:21:10.892069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:20.559 [2024-07-11 02:21:10.892114] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.559 [2024-07-11 02:21:10.892135] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28c6b40 00:15:20.559 [2024-07-11 02:21:10.892147] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.559 [2024-07-11 02:21:10.893553] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.559 [2024-07-11 02:21:10.893580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:20.559 BaseBdev2 00:15:20.559 02:21:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:20.818 [2024-07-11 02:21:11.140748] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:20.818 [2024-07-11 02:21:11.141920] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:20.818 [2024-07-11 02:21:11.142095] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x28c7d50 00:15:20.818 [2024-07-11 02:21:11.142108] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:20.818 [2024-07-11 02:21:11.142289] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x28c6150 00:15:20.818 [2024-07-11 02:21:11.142428] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x28c7d50 00:15:20.818 [2024-07-11 02:21:11.142438] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x28c7d50 00:15:20.818 [2024-07-11 02:21:11.142533] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.818 02:21:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:20.818 02:21:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:20.818 02:21:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:20.818 02:21:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:20.818 02:21:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:20.818 02:21:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:20.818 02:21:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:20.818 02:21:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:20.818 02:21:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:20.818 02:21:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:20.818 02:21:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.818 02:21:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.076 02:21:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:21.076 "name": "raid_bdev1", 00:15:21.076 "uuid": "dac3883a-0a6e-408a-80b4-28f0f3e1896f", 00:15:21.076 "strip_size_kb": 64, 00:15:21.076 "state": "online", 00:15:21.076 "raid_level": 
"concat", 00:15:21.076 "superblock": true, 00:15:21.076 "num_base_bdevs": 2, 00:15:21.076 "num_base_bdevs_discovered": 2, 00:15:21.076 "num_base_bdevs_operational": 2, 00:15:21.076 "base_bdevs_list": [ 00:15:21.076 { 00:15:21.076 "name": "BaseBdev1", 00:15:21.076 "uuid": "a8fcf810-f30a-5f68-8b0e-83be78592f7b", 00:15:21.076 "is_configured": true, 00:15:21.076 "data_offset": 2048, 00:15:21.076 "data_size": 63488 00:15:21.076 }, 00:15:21.076 { 00:15:21.076 "name": "BaseBdev2", 00:15:21.076 "uuid": "04560fee-921f-5d84-aa3f-6759eeb48f74", 00:15:21.076 "is_configured": true, 00:15:21.076 "data_offset": 2048, 00:15:21.076 "data_size": 63488 00:15:21.076 } 00:15:21.076 ] 00:15:21.076 }' 00:15:21.076 02:21:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:21.076 02:21:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.643 02:21:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:21.643 02:21:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:21.902 [2024-07-11 02:21:12.067477] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2719450 00:15:22.840 02:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:22.840 02:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:22.840 02:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:15:22.840 02:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:15:22.840 02:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:22.840 02:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:22.840 02:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:22.840 02:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:22.840 02:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:22.840 02:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:22.840 02:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:22.840 02:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:22.840 02:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:22.840 02:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:22.840 02:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.840 02:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.100 02:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:23.100 "name": "raid_bdev1", 00:15:23.100 "uuid": "dac3883a-0a6e-408a-80b4-28f0f3e1896f", 00:15:23.100 "strip_size_kb": 64, 00:15:23.100 "state": "online", 
00:15:23.100 "raid_level": "concat", 00:15:23.100 "superblock": true, 00:15:23.100 "num_base_bdevs": 2, 00:15:23.100 "num_base_bdevs_discovered": 2, 00:15:23.100 "num_base_bdevs_operational": 2, 00:15:23.100 "base_bdevs_list": [ 00:15:23.100 { 00:15:23.100 "name": "BaseBdev1", 00:15:23.100 "uuid": "a8fcf810-f30a-5f68-8b0e-83be78592f7b", 00:15:23.100 "is_configured": true, 00:15:23.100 "data_offset": 2048, 00:15:23.100 "data_size": 63488 00:15:23.100 }, 00:15:23.100 { 00:15:23.100 "name": "BaseBdev2", 00:15:23.100 "uuid": "04560fee-921f-5d84-aa3f-6759eeb48f74", 00:15:23.100 "is_configured": true, 00:15:23.100 "data_offset": 2048, 00:15:23.100 "data_size": 63488 00:15:23.100 } 00:15:23.100 ] 00:15:23.100 }' 00:15:23.100 02:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:23.100 02:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.666 02:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:23.926 [2024-07-11 02:21:14.260477] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:23.926 [2024-07-11 02:21:14.260520] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:23.926 [2024-07-11 02:21:14.263688] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:23.926 [2024-07-11 02:21:14.263721] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.926 [2024-07-11 02:21:14.263750] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:23.926 [2024-07-11 02:21:14.263767] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x28c7d50 name raid_bdev1, state offline 00:15:23.926 0 00:15:23.926 02:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1903712 00:15:23.926 02:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 1903712 ']' 00:15:23.926 02:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 1903712 00:15:23.926 02:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:15:23.926 02:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:23.926 02:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1903712 00:15:23.926 02:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:23.926 02:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:23.926 02:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1903712' 00:15:23.926 killing process with pid 1903712 00:15:23.926 02:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 1903712 00:15:23.926 [2024-07-11 02:21:14.335298] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:23.926 02:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 1903712 00:15:23.926 [2024-07-11 02:21:14.346135] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:24.184 02:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.QmPbkM6Idn 00:15:24.184 02:21:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:24.184 02:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:24.184 02:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.46 00:15:24.184 02:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:15:24.184 02:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:24.184 02:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:24.184 02:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.46 != \0\.\0\0 ]] 00:15:24.184 00:15:24.184 real 0m6.108s 00:15:24.184 user 0m9.500s 00:15:24.184 sys 0m1.111s 00:15:24.184 02:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:24.184 02:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.184 ************************************ 00:15:24.184 END TEST raid_read_error_test 00:15:24.184 ************************************ 00:15:24.443 02:21:14 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:24.443 02:21:14 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:15:24.443 02:21:14 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:24.443 02:21:14 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:24.443 02:21:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:24.443 ************************************ 00:15:24.443 START TEST raid_write_error_test 00:15:24.443 ************************************ 00:15:24.443 02:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 write 00:15:24.443 02:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:15:24.443 02:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:15:24.443 02:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:15:24.443 02:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:24.443 02:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:24.443 02:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:24.443 02:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:24.443 02:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:24.443 02:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:24.443 02:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:24.443 02:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:24.443 02:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:24.443 02:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:24.443 02:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:24.444 02:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:24.444 02:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:24.444 02:21:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:24.444 02:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:24.444 02:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:15:24.444 02:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:24.444 02:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:24.444 02:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:24.444 02:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.tTQhZT1je4 00:15:24.444 02:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1904648 00:15:24.444 02:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1904648 /var/tmp/spdk-raid.sock 00:15:24.444 02:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:24.444 02:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 1904648 ']' 00:15:24.444 02:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:24.444 02:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:24.444 02:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:24.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:24.444 02:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:24.444 02:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.444 [2024-07-11 02:21:14.731332] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
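The write-error variant starting here differs from the read test above only in the injected I/O type. After the same stack is assembled, the test arms the error bdev beneath BaseBdev1 and triggers bdevperf's I/O phase; bdev_raid.sh then greps the bdevperf log for raid_bdev1's fail-per-second figure and requires it to be nonzero (0.46 in the read run above). A sketch of that step, with the commands taken from the trace (paths shortened as before):

  rpc="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # make every write through EE_BaseBdev1_malloc fail
  # (the read-error test injects "read failure" here instead)
  $rpc bdev_error_inject_error EE_BaseBdev1_malloc write failure
  # kick off the I/O phase of the already-running bdevperf instance
  # (launched with -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f)
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests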
00:15:24.444 [2024-07-11 02:21:14.731399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1904648 ] 00:15:24.702 [2024-07-11 02:21:14.868453] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.702 [2024-07-11 02:21:14.921838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.702 [2024-07-11 02:21:14.984007] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:24.702 [2024-07-11 02:21:14.984036] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.268 02:21:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:25.268 02:21:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:25.268 02:21:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:25.268 02:21:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:25.526 BaseBdev1_malloc 00:15:25.526 02:21:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:25.784 true 00:15:25.784 02:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:26.043 [2024-07-11 02:21:16.350499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:26.043 [2024-07-11 02:21:16.350544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.043 [2024-07-11 02:21:16.350566] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1160330 00:15:26.043 [2024-07-11 02:21:16.350579] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.043 [2024-07-11 02:21:16.352422] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.043 [2024-07-11 02:21:16.352451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:26.043 BaseBdev1 00:15:26.043 02:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:26.043 02:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:26.300 BaseBdev2_malloc 00:15:26.300 02:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:26.559 true 00:15:26.559 02:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:26.818 [2024-07-11 02:21:17.094127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:26.818 [2024-07-11 02:21:17.094169] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.818 [2024-07-11 02:21:17.094190] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1159b40 00:15:26.818 [2024-07-11 02:21:17.094203] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.818 [2024-07-11 02:21:17.095712] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.818 [2024-07-11 02:21:17.095739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:26.818 BaseBdev2 00:15:26.818 02:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:27.077 [2024-07-11 02:21:17.334798] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:27.077 [2024-07-11 02:21:17.336094] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:27.077 [2024-07-11 02:21:17.336272] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x115ad50 00:15:27.077 [2024-07-11 02:21:17.336286] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:27.077 [2024-07-11 02:21:17.336475] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1159150 00:15:27.077 [2024-07-11 02:21:17.336624] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x115ad50 00:15:27.077 [2024-07-11 02:21:17.336634] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x115ad50 00:15:27.077 [2024-07-11 02:21:17.336735] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.077 02:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:27.077 02:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:27.077 02:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:27.077 02:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:27.077 02:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:27.077 02:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:27.077 02:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:27.077 02:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:27.077 02:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:27.077 02:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:27.077 02:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.078 02:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.337 02:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:27.337 "name": "raid_bdev1", 00:15:27.337 "uuid": "b839ff50-f234-4489-a77a-c41b1ae8ce11", 00:15:27.337 "strip_size_kb": 64, 00:15:27.337 "state": "online", 00:15:27.337 
"raid_level": "concat", 00:15:27.337 "superblock": true, 00:15:27.337 "num_base_bdevs": 2, 00:15:27.337 "num_base_bdevs_discovered": 2, 00:15:27.337 "num_base_bdevs_operational": 2, 00:15:27.337 "base_bdevs_list": [ 00:15:27.337 { 00:15:27.337 "name": "BaseBdev1", 00:15:27.337 "uuid": "5883ee85-2b4b-5a65-acfb-6948e7c81995", 00:15:27.337 "is_configured": true, 00:15:27.337 "data_offset": 2048, 00:15:27.337 "data_size": 63488 00:15:27.337 }, 00:15:27.337 { 00:15:27.337 "name": "BaseBdev2", 00:15:27.337 "uuid": "8a8a2cd1-60b6-57b0-86dd-ee305fbddc83", 00:15:27.337 "is_configured": true, 00:15:27.337 "data_offset": 2048, 00:15:27.337 "data_size": 63488 00:15:27.337 } 00:15:27.337 ] 00:15:27.337 }' 00:15:27.337 02:21:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:27.337 02:21:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.904 02:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:27.904 02:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:27.904 [2024-07-11 02:21:18.293575] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfac450 00:15:28.844 02:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:29.123 02:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:29.123 02:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:15:29.123 02:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:15:29.123 02:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:29.123 02:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:29.123 02:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:29.123 02:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:29.123 02:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:29.123 02:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:29.123 02:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:29.123 02:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:29.123 02:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:29.123 02:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:29.123 02:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.123 02:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.382 02:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:29.382 "name": "raid_bdev1", 00:15:29.382 "uuid": "b839ff50-f234-4489-a77a-c41b1ae8ce11", 00:15:29.382 "strip_size_kb": 
64, 00:15:29.382 "state": "online", 00:15:29.382 "raid_level": "concat", 00:15:29.382 "superblock": true, 00:15:29.382 "num_base_bdevs": 2, 00:15:29.382 "num_base_bdevs_discovered": 2, 00:15:29.382 "num_base_bdevs_operational": 2, 00:15:29.382 "base_bdevs_list": [ 00:15:29.382 { 00:15:29.382 "name": "BaseBdev1", 00:15:29.382 "uuid": "5883ee85-2b4b-5a65-acfb-6948e7c81995", 00:15:29.382 "is_configured": true, 00:15:29.382 "data_offset": 2048, 00:15:29.382 "data_size": 63488 00:15:29.382 }, 00:15:29.382 { 00:15:29.382 "name": "BaseBdev2", 00:15:29.382 "uuid": "8a8a2cd1-60b6-57b0-86dd-ee305fbddc83", 00:15:29.383 "is_configured": true, 00:15:29.383 "data_offset": 2048, 00:15:29.383 "data_size": 63488 00:15:29.383 } 00:15:29.383 ] 00:15:29.383 }' 00:15:29.383 02:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:29.383 02:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.950 02:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:30.210 [2024-07-11 02:21:20.530284] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:30.210 [2024-07-11 02:21:20.530320] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:30.210 [2024-07-11 02:21:20.533469] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:30.210 [2024-07-11 02:21:20.533499] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.210 [2024-07-11 02:21:20.533528] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:30.210 [2024-07-11 02:21:20.533539] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x115ad50 name raid_bdev1, state offline 00:15:30.210 0 00:15:30.210 02:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1904648 00:15:30.210 02:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 1904648 ']' 00:15:30.210 02:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 1904648 00:15:30.210 02:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:15:30.210 02:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:30.210 02:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1904648 00:15:30.210 02:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:30.210 02:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:30.210 02:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1904648' 00:15:30.210 killing process with pid 1904648 00:15:30.210 02:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 1904648 00:15:30.210 [2024-07-11 02:21:20.598415] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:30.210 02:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 1904648 00:15:30.210 [2024-07-11 02:21:20.608749] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:30.469 02:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job 
/raidtest/tmp.tTQhZT1je4 00:15:30.469 02:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:30.469 02:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:30.469 02:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:15:30.469 02:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:15:30.469 02:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:30.469 02:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:30.469 02:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:15:30.469 00:15:30.469 real 0m6.158s 00:15:30.469 user 0m9.606s 00:15:30.469 sys 0m1.126s 00:15:30.469 02:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:30.469 02:21:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.469 ************************************ 00:15:30.469 END TEST raid_write_error_test 00:15:30.469 ************************************ 00:15:30.469 02:21:20 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:30.469 02:21:20 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:15:30.469 02:21:20 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:15:30.469 02:21:20 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:30.470 02:21:20 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:30.470 02:21:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:30.729 ************************************ 00:15:30.729 START TEST raid_state_function_test 00:15:30.729 ************************************ 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 false 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:30.729 02:21:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1905491 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1905491' 00:15:30.729 Process raid pid: 1905491 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1905491 /var/tmp/spdk-raid.sock 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 1905491 ']' 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:30.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:30.729 02:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.729 [2024-07-11 02:21:20.968814] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:15:30.729 [2024-07-11 02:21:20.968878] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.729 [2024-07-11 02:21:21.097079] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.729 [2024-07-11 02:21:21.150049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.989 [2024-07-11 02:21:21.217582] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:30.989 [2024-07-11 02:21:21.217616] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.557 02:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:31.557 02:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:15:31.557 02:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:31.816 [2024-07-11 02:21:22.124966] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:31.816 [2024-07-11 02:21:22.125005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:31.816 [2024-07-11 02:21:22.125017] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:31.816 [2024-07-11 02:21:22.125028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:31.816 02:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:31.816 02:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:31.816 02:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:31.816 02:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:31.816 02:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:31.816 02:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:31.816 02:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:31.816 02:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:31.816 02:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:31.816 02:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:31.816 02:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.816 02:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.076 02:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:32.076 "name": "Existed_Raid", 00:15:32.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.076 "strip_size_kb": 0, 00:15:32.076 "state": "configuring", 00:15:32.076 "raid_level": "raid1", 00:15:32.076 "superblock": false, 00:15:32.076 
"num_base_bdevs": 2, 00:15:32.076 "num_base_bdevs_discovered": 0, 00:15:32.076 "num_base_bdevs_operational": 2, 00:15:32.076 "base_bdevs_list": [ 00:15:32.076 { 00:15:32.076 "name": "BaseBdev1", 00:15:32.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.076 "is_configured": false, 00:15:32.076 "data_offset": 0, 00:15:32.076 "data_size": 0 00:15:32.076 }, 00:15:32.076 { 00:15:32.076 "name": "BaseBdev2", 00:15:32.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.076 "is_configured": false, 00:15:32.076 "data_offset": 0, 00:15:32.076 "data_size": 0 00:15:32.076 } 00:15:32.076 ] 00:15:32.076 }' 00:15:32.076 02:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:32.076 02:21:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.644 02:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:32.903 [2024-07-11 02:21:23.147549] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:32.903 [2024-07-11 02:21:23.147578] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1600710 name Existed_Raid, state configuring 00:15:32.903 02:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:33.163 [2024-07-11 02:21:23.396220] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:33.163 [2024-07-11 02:21:23.396255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:33.163 [2024-07-11 02:21:23.396265] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:33.163 [2024-07-11 02:21:23.396277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:33.163 02:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:33.421 [2024-07-11 02:21:23.658518] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:33.421 BaseBdev1 00:15:33.422 02:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:33.422 02:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:33.422 02:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:33.422 02:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:33.422 02:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:33.422 02:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:33.422 02:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:33.681 02:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:33.940 [ 
00:15:33.940 { 00:15:33.940 "name": "BaseBdev1", 00:15:33.940 "aliases": [ 00:15:33.940 "05d00a7a-c95b-45b0-9eae-8c2dc475650b" 00:15:33.940 ], 00:15:33.940 "product_name": "Malloc disk", 00:15:33.940 "block_size": 512, 00:15:33.940 "num_blocks": 65536, 00:15:33.940 "uuid": "05d00a7a-c95b-45b0-9eae-8c2dc475650b", 00:15:33.940 "assigned_rate_limits": { 00:15:33.940 "rw_ios_per_sec": 0, 00:15:33.940 "rw_mbytes_per_sec": 0, 00:15:33.940 "r_mbytes_per_sec": 0, 00:15:33.940 "w_mbytes_per_sec": 0 00:15:33.940 }, 00:15:33.940 "claimed": true, 00:15:33.940 "claim_type": "exclusive_write", 00:15:33.940 "zoned": false, 00:15:33.940 "supported_io_types": { 00:15:33.940 "read": true, 00:15:33.940 "write": true, 00:15:33.940 "unmap": true, 00:15:33.940 "flush": true, 00:15:33.940 "reset": true, 00:15:33.940 "nvme_admin": false, 00:15:33.940 "nvme_io": false, 00:15:33.940 "nvme_io_md": false, 00:15:33.940 "write_zeroes": true, 00:15:33.940 "zcopy": true, 00:15:33.940 "get_zone_info": false, 00:15:33.940 "zone_management": false, 00:15:33.940 "zone_append": false, 00:15:33.940 "compare": false, 00:15:33.940 "compare_and_write": false, 00:15:33.940 "abort": true, 00:15:33.940 "seek_hole": false, 00:15:33.940 "seek_data": false, 00:15:33.940 "copy": true, 00:15:33.940 "nvme_iov_md": false 00:15:33.940 }, 00:15:33.940 "memory_domains": [ 00:15:33.940 { 00:15:33.940 "dma_device_id": "system", 00:15:33.940 "dma_device_type": 1 00:15:33.940 }, 00:15:33.940 { 00:15:33.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.940 "dma_device_type": 2 00:15:33.940 } 00:15:33.940 ], 00:15:33.940 "driver_specific": {} 00:15:33.940 } 00:15:33.940 ] 00:15:33.940 02:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:33.940 02:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:33.940 02:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:33.940 02:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:33.940 02:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:33.940 02:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:33.940 02:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:33.940 02:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:33.940 02:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:33.940 02:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:33.940 02:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:33.940 02:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.940 02:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.231 02:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:34.231 "name": "Existed_Raid", 00:15:34.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.231 "strip_size_kb": 0, 00:15:34.231 "state": "configuring", 00:15:34.231 "raid_level": "raid1", 
00:15:34.231 "superblock": false, 00:15:34.231 "num_base_bdevs": 2, 00:15:34.231 "num_base_bdevs_discovered": 1, 00:15:34.231 "num_base_bdevs_operational": 2, 00:15:34.231 "base_bdevs_list": [ 00:15:34.231 { 00:15:34.231 "name": "BaseBdev1", 00:15:34.231 "uuid": "05d00a7a-c95b-45b0-9eae-8c2dc475650b", 00:15:34.231 "is_configured": true, 00:15:34.231 "data_offset": 0, 00:15:34.231 "data_size": 65536 00:15:34.231 }, 00:15:34.231 { 00:15:34.231 "name": "BaseBdev2", 00:15:34.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.231 "is_configured": false, 00:15:34.231 "data_offset": 0, 00:15:34.231 "data_size": 0 00:15:34.231 } 00:15:34.231 ] 00:15:34.231 }' 00:15:34.231 02:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:34.231 02:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.800 02:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:34.800 [2024-07-11 02:21:25.170667] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:34.800 [2024-07-11 02:21:25.170704] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1600040 name Existed_Raid, state configuring 00:15:34.800 02:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:35.059 [2024-07-11 02:21:25.419352] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:35.060 [2024-07-11 02:21:25.420747] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:35.060 [2024-07-11 02:21:25.420789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:35.060 02:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:35.060 02:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:35.060 02:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:35.060 02:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:35.060 02:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:35.060 02:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:35.060 02:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:35.060 02:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:35.060 02:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:35.060 02:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:35.060 02:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:35.060 02:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:35.060 02:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:15:35.060 02:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.319 02:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:35.319 "name": "Existed_Raid", 00:15:35.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.319 "strip_size_kb": 0, 00:15:35.319 "state": "configuring", 00:15:35.319 "raid_level": "raid1", 00:15:35.319 "superblock": false, 00:15:35.319 "num_base_bdevs": 2, 00:15:35.319 "num_base_bdevs_discovered": 1, 00:15:35.319 "num_base_bdevs_operational": 2, 00:15:35.319 "base_bdevs_list": [ 00:15:35.319 { 00:15:35.319 "name": "BaseBdev1", 00:15:35.319 "uuid": "05d00a7a-c95b-45b0-9eae-8c2dc475650b", 00:15:35.319 "is_configured": true, 00:15:35.319 "data_offset": 0, 00:15:35.319 "data_size": 65536 00:15:35.319 }, 00:15:35.319 { 00:15:35.319 "name": "BaseBdev2", 00:15:35.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.319 "is_configured": false, 00:15:35.319 "data_offset": 0, 00:15:35.319 "data_size": 0 00:15:35.319 } 00:15:35.319 ] 00:15:35.319 }' 00:15:35.319 02:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:35.319 02:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.888 02:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:36.147 [2024-07-11 02:21:26.513594] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:36.147 [2024-07-11 02:21:26.513629] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x17b2bd0 00:15:36.147 [2024-07-11 02:21:26.513638] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:36.147 [2024-07-11 02:21:26.513889] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17b4cc0 00:15:36.147 [2024-07-11 02:21:26.514006] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x17b2bd0 00:15:36.147 [2024-07-11 02:21:26.514016] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x17b2bd0 00:15:36.147 [2024-07-11 02:21:26.514177] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.147 BaseBdev2 00:15:36.147 02:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:36.147 02:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:36.147 02:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:36.147 02:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:36.147 02:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:36.147 02:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:36.147 02:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:36.406 02:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:36.665 [ 
00:15:36.665 { 00:15:36.665 "name": "BaseBdev2", 00:15:36.665 "aliases": [ 00:15:36.665 "910ef25c-47da-4f5a-aeac-59e091376f02" 00:15:36.665 ], 00:15:36.665 "product_name": "Malloc disk", 00:15:36.665 "block_size": 512, 00:15:36.665 "num_blocks": 65536, 00:15:36.665 "uuid": "910ef25c-47da-4f5a-aeac-59e091376f02", 00:15:36.665 "assigned_rate_limits": { 00:15:36.665 "rw_ios_per_sec": 0, 00:15:36.665 "rw_mbytes_per_sec": 0, 00:15:36.665 "r_mbytes_per_sec": 0, 00:15:36.665 "w_mbytes_per_sec": 0 00:15:36.665 }, 00:15:36.665 "claimed": true, 00:15:36.665 "claim_type": "exclusive_write", 00:15:36.665 "zoned": false, 00:15:36.665 "supported_io_types": { 00:15:36.665 "read": true, 00:15:36.665 "write": true, 00:15:36.665 "unmap": true, 00:15:36.665 "flush": true, 00:15:36.665 "reset": true, 00:15:36.665 "nvme_admin": false, 00:15:36.665 "nvme_io": false, 00:15:36.665 "nvme_io_md": false, 00:15:36.665 "write_zeroes": true, 00:15:36.665 "zcopy": true, 00:15:36.665 "get_zone_info": false, 00:15:36.665 "zone_management": false, 00:15:36.665 "zone_append": false, 00:15:36.665 "compare": false, 00:15:36.665 "compare_and_write": false, 00:15:36.665 "abort": true, 00:15:36.665 "seek_hole": false, 00:15:36.665 "seek_data": false, 00:15:36.665 "copy": true, 00:15:36.665 "nvme_iov_md": false 00:15:36.665 }, 00:15:36.665 "memory_domains": [ 00:15:36.665 { 00:15:36.665 "dma_device_id": "system", 00:15:36.665 "dma_device_type": 1 00:15:36.665 }, 00:15:36.665 { 00:15:36.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.665 "dma_device_type": 2 00:15:36.665 } 00:15:36.665 ], 00:15:36.665 "driver_specific": {} 00:15:36.665 } 00:15:36.665 ] 00:15:36.665 02:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:36.666 02:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:36.666 02:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:36.666 02:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:36.666 02:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:36.666 02:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:36.666 02:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:36.666 02:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:36.666 02:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:36.666 02:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:36.666 02:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:36.666 02:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:36.666 02:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:36.666 02:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.666 02:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.925 02:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:15:36.925 "name": "Existed_Raid", 00:15:36.925 "uuid": "98d73460-528f-4c9a-9995-900fa660d7e2", 00:15:36.925 "strip_size_kb": 0, 00:15:36.925 "state": "online", 00:15:36.925 "raid_level": "raid1", 00:15:36.925 "superblock": false, 00:15:36.925 "num_base_bdevs": 2, 00:15:36.925 "num_base_bdevs_discovered": 2, 00:15:36.925 "num_base_bdevs_operational": 2, 00:15:36.925 "base_bdevs_list": [ 00:15:36.925 { 00:15:36.925 "name": "BaseBdev1", 00:15:36.925 "uuid": "05d00a7a-c95b-45b0-9eae-8c2dc475650b", 00:15:36.925 "is_configured": true, 00:15:36.925 "data_offset": 0, 00:15:36.925 "data_size": 65536 00:15:36.925 }, 00:15:36.925 { 00:15:36.925 "name": "BaseBdev2", 00:15:36.925 "uuid": "910ef25c-47da-4f5a-aeac-59e091376f02", 00:15:36.925 "is_configured": true, 00:15:36.925 "data_offset": 0, 00:15:36.925 "data_size": 65536 00:15:36.925 } 00:15:36.925 ] 00:15:36.925 }' 00:15:36.925 02:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:36.925 02:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.492 02:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:37.492 02:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:37.492 02:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:37.492 02:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:37.492 02:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:37.492 02:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:37.492 02:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:37.492 02:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:37.751 [2024-07-11 02:21:28.025904] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.751 02:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:37.751 "name": "Existed_Raid", 00:15:37.751 "aliases": [ 00:15:37.751 "98d73460-528f-4c9a-9995-900fa660d7e2" 00:15:37.751 ], 00:15:37.751 "product_name": "Raid Volume", 00:15:37.751 "block_size": 512, 00:15:37.751 "num_blocks": 65536, 00:15:37.751 "uuid": "98d73460-528f-4c9a-9995-900fa660d7e2", 00:15:37.751 "assigned_rate_limits": { 00:15:37.751 "rw_ios_per_sec": 0, 00:15:37.751 "rw_mbytes_per_sec": 0, 00:15:37.751 "r_mbytes_per_sec": 0, 00:15:37.751 "w_mbytes_per_sec": 0 00:15:37.751 }, 00:15:37.751 "claimed": false, 00:15:37.751 "zoned": false, 00:15:37.751 "supported_io_types": { 00:15:37.751 "read": true, 00:15:37.751 "write": true, 00:15:37.751 "unmap": false, 00:15:37.751 "flush": false, 00:15:37.751 "reset": true, 00:15:37.751 "nvme_admin": false, 00:15:37.751 "nvme_io": false, 00:15:37.751 "nvme_io_md": false, 00:15:37.751 "write_zeroes": true, 00:15:37.751 "zcopy": false, 00:15:37.751 "get_zone_info": false, 00:15:37.751 "zone_management": false, 00:15:37.751 "zone_append": false, 00:15:37.751 "compare": false, 00:15:37.751 "compare_and_write": false, 00:15:37.751 "abort": false, 00:15:37.751 "seek_hole": false, 00:15:37.751 "seek_data": false, 00:15:37.751 "copy": false, 00:15:37.751 "nvme_iov_md": false 00:15:37.751 }, 00:15:37.751 
"memory_domains": [ 00:15:37.751 { 00:15:37.751 "dma_device_id": "system", 00:15:37.751 "dma_device_type": 1 00:15:37.751 }, 00:15:37.751 { 00:15:37.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.751 "dma_device_type": 2 00:15:37.751 }, 00:15:37.751 { 00:15:37.751 "dma_device_id": "system", 00:15:37.751 "dma_device_type": 1 00:15:37.751 }, 00:15:37.751 { 00:15:37.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.751 "dma_device_type": 2 00:15:37.751 } 00:15:37.751 ], 00:15:37.751 "driver_specific": { 00:15:37.752 "raid": { 00:15:37.752 "uuid": "98d73460-528f-4c9a-9995-900fa660d7e2", 00:15:37.752 "strip_size_kb": 0, 00:15:37.752 "state": "online", 00:15:37.752 "raid_level": "raid1", 00:15:37.752 "superblock": false, 00:15:37.752 "num_base_bdevs": 2, 00:15:37.752 "num_base_bdevs_discovered": 2, 00:15:37.752 "num_base_bdevs_operational": 2, 00:15:37.752 "base_bdevs_list": [ 00:15:37.752 { 00:15:37.752 "name": "BaseBdev1", 00:15:37.752 "uuid": "05d00a7a-c95b-45b0-9eae-8c2dc475650b", 00:15:37.752 "is_configured": true, 00:15:37.752 "data_offset": 0, 00:15:37.752 "data_size": 65536 00:15:37.752 }, 00:15:37.752 { 00:15:37.752 "name": "BaseBdev2", 00:15:37.752 "uuid": "910ef25c-47da-4f5a-aeac-59e091376f02", 00:15:37.752 "is_configured": true, 00:15:37.752 "data_offset": 0, 00:15:37.752 "data_size": 65536 00:15:37.752 } 00:15:37.752 ] 00:15:37.752 } 00:15:37.752 } 00:15:37.752 }' 00:15:37.752 02:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:37.752 02:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:37.752 BaseBdev2' 00:15:37.752 02:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:37.752 02:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:37.752 02:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:38.011 02:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:38.011 "name": "BaseBdev1", 00:15:38.011 "aliases": [ 00:15:38.011 "05d00a7a-c95b-45b0-9eae-8c2dc475650b" 00:15:38.011 ], 00:15:38.011 "product_name": "Malloc disk", 00:15:38.011 "block_size": 512, 00:15:38.011 "num_blocks": 65536, 00:15:38.011 "uuid": "05d00a7a-c95b-45b0-9eae-8c2dc475650b", 00:15:38.011 "assigned_rate_limits": { 00:15:38.011 "rw_ios_per_sec": 0, 00:15:38.011 "rw_mbytes_per_sec": 0, 00:15:38.011 "r_mbytes_per_sec": 0, 00:15:38.011 "w_mbytes_per_sec": 0 00:15:38.011 }, 00:15:38.011 "claimed": true, 00:15:38.011 "claim_type": "exclusive_write", 00:15:38.011 "zoned": false, 00:15:38.011 "supported_io_types": { 00:15:38.011 "read": true, 00:15:38.011 "write": true, 00:15:38.011 "unmap": true, 00:15:38.011 "flush": true, 00:15:38.011 "reset": true, 00:15:38.011 "nvme_admin": false, 00:15:38.011 "nvme_io": false, 00:15:38.011 "nvme_io_md": false, 00:15:38.011 "write_zeroes": true, 00:15:38.011 "zcopy": true, 00:15:38.011 "get_zone_info": false, 00:15:38.011 "zone_management": false, 00:15:38.011 "zone_append": false, 00:15:38.011 "compare": false, 00:15:38.011 "compare_and_write": false, 00:15:38.011 "abort": true, 00:15:38.011 "seek_hole": false, 00:15:38.011 "seek_data": false, 00:15:38.011 "copy": true, 00:15:38.011 "nvme_iov_md": false 00:15:38.011 }, 00:15:38.011 
"memory_domains": [ 00:15:38.011 { 00:15:38.011 "dma_device_id": "system", 00:15:38.011 "dma_device_type": 1 00:15:38.011 }, 00:15:38.011 { 00:15:38.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.011 "dma_device_type": 2 00:15:38.011 } 00:15:38.011 ], 00:15:38.011 "driver_specific": {} 00:15:38.011 }' 00:15:38.011 02:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:38.011 02:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:38.270 02:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:38.270 02:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:38.270 02:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:38.270 02:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:38.270 02:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:38.270 02:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:38.270 02:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:38.270 02:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:38.270 02:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:38.529 02:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:38.529 02:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:38.529 02:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:38.529 02:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:38.788 02:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:38.788 "name": "BaseBdev2", 00:15:38.788 "aliases": [ 00:15:38.788 "910ef25c-47da-4f5a-aeac-59e091376f02" 00:15:38.788 ], 00:15:38.788 "product_name": "Malloc disk", 00:15:38.788 "block_size": 512, 00:15:38.788 "num_blocks": 65536, 00:15:38.788 "uuid": "910ef25c-47da-4f5a-aeac-59e091376f02", 00:15:38.788 "assigned_rate_limits": { 00:15:38.788 "rw_ios_per_sec": 0, 00:15:38.788 "rw_mbytes_per_sec": 0, 00:15:38.788 "r_mbytes_per_sec": 0, 00:15:38.788 "w_mbytes_per_sec": 0 00:15:38.788 }, 00:15:38.788 "claimed": true, 00:15:38.788 "claim_type": "exclusive_write", 00:15:38.788 "zoned": false, 00:15:38.788 "supported_io_types": { 00:15:38.788 "read": true, 00:15:38.788 "write": true, 00:15:38.788 "unmap": true, 00:15:38.788 "flush": true, 00:15:38.788 "reset": true, 00:15:38.788 "nvme_admin": false, 00:15:38.788 "nvme_io": false, 00:15:38.788 "nvme_io_md": false, 00:15:38.788 "write_zeroes": true, 00:15:38.788 "zcopy": true, 00:15:38.788 "get_zone_info": false, 00:15:38.788 "zone_management": false, 00:15:38.788 "zone_append": false, 00:15:38.788 "compare": false, 00:15:38.788 "compare_and_write": false, 00:15:38.788 "abort": true, 00:15:38.788 "seek_hole": false, 00:15:38.788 "seek_data": false, 00:15:38.788 "copy": true, 00:15:38.788 "nvme_iov_md": false 00:15:38.788 }, 00:15:38.788 "memory_domains": [ 00:15:38.788 { 00:15:38.788 "dma_device_id": "system", 00:15:38.788 "dma_device_type": 1 00:15:38.788 }, 00:15:38.788 { 00:15:38.788 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:38.788 "dma_device_type": 2 00:15:38.788 } 00:15:38.788 ], 00:15:38.788 "driver_specific": {} 00:15:38.788 }' 00:15:38.788 02:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:38.788 02:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:38.788 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:38.788 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:38.788 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:38.788 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:38.788 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:38.788 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:38.788 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:38.788 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:39.049 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:39.049 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:39.049 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:39.308 [2024-07-11 02:21:29.501593] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:39.308 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:39.308 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:15:39.308 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:39.308 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:15:39.308 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:15:39.308 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:39.308 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:39.308 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:39.308 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:39.308 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:39.308 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:39.308 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:39.308 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:39.308 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:39.308 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:39.308 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:15:39.308 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.568 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:39.568 "name": "Existed_Raid", 00:15:39.568 "uuid": "98d73460-528f-4c9a-9995-900fa660d7e2", 00:15:39.568 "strip_size_kb": 0, 00:15:39.568 "state": "online", 00:15:39.568 "raid_level": "raid1", 00:15:39.568 "superblock": false, 00:15:39.568 "num_base_bdevs": 2, 00:15:39.568 "num_base_bdevs_discovered": 1, 00:15:39.568 "num_base_bdevs_operational": 1, 00:15:39.568 "base_bdevs_list": [ 00:15:39.568 { 00:15:39.568 "name": null, 00:15:39.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.568 "is_configured": false, 00:15:39.568 "data_offset": 0, 00:15:39.568 "data_size": 65536 00:15:39.568 }, 00:15:39.568 { 00:15:39.568 "name": "BaseBdev2", 00:15:39.568 "uuid": "910ef25c-47da-4f5a-aeac-59e091376f02", 00:15:39.568 "is_configured": true, 00:15:39.568 "data_offset": 0, 00:15:39.568 "data_size": 65536 00:15:39.568 } 00:15:39.568 ] 00:15:39.568 }' 00:15:39.568 02:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:39.568 02:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.135 02:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:40.135 02:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:40.135 02:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.135 02:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:40.394 02:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:40.394 02:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:40.394 02:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:40.652 [2024-07-11 02:21:30.886311] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:40.652 [2024-07-11 02:21:30.886389] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:40.652 [2024-07-11 02:21:30.897437] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:40.652 [2024-07-11 02:21:30.897472] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:40.652 [2024-07-11 02:21:30.897484] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17b2bd0 name Existed_Raid, state offline 00:15:40.652 02:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:40.652 02:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:40.652 02:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.652 02:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:40.911 02:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # 
raid_bdev= 00:15:40.911 02:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:40.911 02:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:40.911 02:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1905491 00:15:40.911 02:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 1905491 ']' 00:15:40.911 02:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 1905491 00:15:40.911 02:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:15:40.911 02:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:40.911 02:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1905491 00:15:40.911 02:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:40.911 02:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:40.911 02:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1905491' 00:15:40.911 killing process with pid 1905491 00:15:40.911 02:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 1905491 00:15:40.911 [2024-07-11 02:21:31.218764] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:40.911 02:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 1905491 00:15:40.911 [2024-07-11 02:21:31.219622] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:41.170 02:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:41.170 00:15:41.170 real 0m10.515s 00:15:41.170 user 0m18.638s 00:15:41.170 sys 0m2.002s 00:15:41.170 02:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:41.170 02:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.170 ************************************ 00:15:41.170 END TEST raid_state_function_test 00:15:41.170 ************************************ 00:15:41.170 02:21:31 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:41.170 02:21:31 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:15:41.171 02:21:31 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:41.171 02:21:31 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:41.171 02:21:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:41.171 ************************************ 00:15:41.171 START TEST raid_state_function_test_sb 00:15:41.171 ************************************ 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1907115 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1907115' 00:15:41.171 Process raid pid: 1907115 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1907115 /var/tmp/spdk-raid.sock 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1907115 ']' 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:41.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
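The trace above shows the harness launching a dedicated bdev_svc app on a private RPC socket and waiting for it to listen before any bdev_raid RPCs are issued. A minimal sketch of that startup, with the binary path, socket, and flags copied from the trace; the polling loop is a simplified stand-in for the harness's waitforlisten helper, and rpc_get_methods is just a convenient RPC to probe with:

    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
    SOCK=/var/tmp/spdk-raid.sock

    # Launch the bdev service with raid debug logging (-L bdev_raid) on its
    # own RPC socket so the test does not disturb the default SPDK socket.
    "$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
    raid_pid=$!

    # Simplified stand-in for waitforlisten: poll until the app answers a
    # basic RPC, then proceed with the bdev_raid_* calls.
    until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    echo "Process raid pid: $raid_pid"
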
00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:41.171 02:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.430 [2024-07-11 02:21:31.616426] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:15:41.430 [2024-07-11 02:21:31.616557] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.430 [2024-07-11 02:21:31.829792] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.688 [2024-07-11 02:21:31.881345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.688 [2024-07-11 02:21:31.942116] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.688 [2024-07-11 02:21:31.942145] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.624 02:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:42.624 02:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:15:42.624 02:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:42.624 [2024-07-11 02:21:32.998198] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:42.624 [2024-07-11 02:21:32.998240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:42.624 [2024-07-11 02:21:32.998251] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:42.624 [2024-07-11 02:21:32.998263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:42.624 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:42.624 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:42.624 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:42.624 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:42.624 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:42.624 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:42.624 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:42.624 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:42.624 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:42.624 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:42.624 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.624 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:15:42.883 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:42.883 "name": "Existed_Raid", 00:15:42.883 "uuid": "04bcdc5b-c870-4e7e-9796-280df1a4dd84", 00:15:42.883 "strip_size_kb": 0, 00:15:42.883 "state": "configuring", 00:15:42.883 "raid_level": "raid1", 00:15:42.883 "superblock": true, 00:15:42.883 "num_base_bdevs": 2, 00:15:42.883 "num_base_bdevs_discovered": 0, 00:15:42.883 "num_base_bdevs_operational": 2, 00:15:42.883 "base_bdevs_list": [ 00:15:42.883 { 00:15:42.883 "name": "BaseBdev1", 00:15:42.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.883 "is_configured": false, 00:15:42.883 "data_offset": 0, 00:15:42.883 "data_size": 0 00:15:42.883 }, 00:15:42.883 { 00:15:42.883 "name": "BaseBdev2", 00:15:42.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.883 "is_configured": false, 00:15:42.883 "data_offset": 0, 00:15:42.883 "data_size": 0 00:15:42.883 } 00:15:42.883 ] 00:15:42.883 }' 00:15:42.883 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:42.883 02:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.449 02:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:43.707 [2024-07-11 02:21:34.092965] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:43.707 [2024-07-11 02:21:34.092996] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1608710 name Existed_Raid, state configuring 00:15:43.707 02:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:43.966 [2024-07-11 02:21:34.341636] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:43.966 [2024-07-11 02:21:34.341667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:43.966 [2024-07-11 02:21:34.341677] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:43.966 [2024-07-11 02:21:34.341689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:43.966 02:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:44.224 [2024-07-11 02:21:34.599999] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:44.224 BaseBdev1 00:15:44.224 02:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:44.224 02:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:44.224 02:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:44.224 02:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:44.224 02:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:44.224 02:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:44.224 02:21:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:44.483 02:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:44.742 [ 00:15:44.742 { 00:15:44.742 "name": "BaseBdev1", 00:15:44.742 "aliases": [ 00:15:44.742 "14292690-98f8-48cc-bdc1-14dc046a6bff" 00:15:44.742 ], 00:15:44.742 "product_name": "Malloc disk", 00:15:44.742 "block_size": 512, 00:15:44.742 "num_blocks": 65536, 00:15:44.742 "uuid": "14292690-98f8-48cc-bdc1-14dc046a6bff", 00:15:44.742 "assigned_rate_limits": { 00:15:44.742 "rw_ios_per_sec": 0, 00:15:44.742 "rw_mbytes_per_sec": 0, 00:15:44.742 "r_mbytes_per_sec": 0, 00:15:44.742 "w_mbytes_per_sec": 0 00:15:44.742 }, 00:15:44.742 "claimed": true, 00:15:44.742 "claim_type": "exclusive_write", 00:15:44.742 "zoned": false, 00:15:44.742 "supported_io_types": { 00:15:44.742 "read": true, 00:15:44.742 "write": true, 00:15:44.742 "unmap": true, 00:15:44.742 "flush": true, 00:15:44.742 "reset": true, 00:15:44.742 "nvme_admin": false, 00:15:44.742 "nvme_io": false, 00:15:44.742 "nvme_io_md": false, 00:15:44.742 "write_zeroes": true, 00:15:44.742 "zcopy": true, 00:15:44.742 "get_zone_info": false, 00:15:44.742 "zone_management": false, 00:15:44.742 "zone_append": false, 00:15:44.742 "compare": false, 00:15:44.742 "compare_and_write": false, 00:15:44.742 "abort": true, 00:15:44.742 "seek_hole": false, 00:15:44.742 "seek_data": false, 00:15:44.742 "copy": true, 00:15:44.742 "nvme_iov_md": false 00:15:44.742 }, 00:15:44.742 "memory_domains": [ 00:15:44.742 { 00:15:44.742 "dma_device_id": "system", 00:15:44.742 "dma_device_type": 1 00:15:44.742 }, 00:15:44.742 { 00:15:44.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.742 "dma_device_type": 2 00:15:44.742 } 00:15:44.742 ], 00:15:44.742 "driver_specific": {} 00:15:44.742 } 00:15:44.742 ] 00:15:44.742 02:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:44.742 02:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:44.742 02:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:44.742 02:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:44.742 02:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:44.742 02:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:44.742 02:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:44.742 02:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:44.742 02:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:44.742 02:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:44.742 02:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:44.742 02:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:15:44.742 02:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.001 02:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:45.001 "name": "Existed_Raid", 00:15:45.001 "uuid": "f17c0b77-5a5c-4699-9cbb-3abcd200a151", 00:15:45.001 "strip_size_kb": 0, 00:15:45.001 "state": "configuring", 00:15:45.001 "raid_level": "raid1", 00:15:45.001 "superblock": true, 00:15:45.001 "num_base_bdevs": 2, 00:15:45.001 "num_base_bdevs_discovered": 1, 00:15:45.001 "num_base_bdevs_operational": 2, 00:15:45.001 "base_bdevs_list": [ 00:15:45.001 { 00:15:45.001 "name": "BaseBdev1", 00:15:45.001 "uuid": "14292690-98f8-48cc-bdc1-14dc046a6bff", 00:15:45.001 "is_configured": true, 00:15:45.001 "data_offset": 2048, 00:15:45.001 "data_size": 63488 00:15:45.001 }, 00:15:45.001 { 00:15:45.001 "name": "BaseBdev2", 00:15:45.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.001 "is_configured": false, 00:15:45.001 "data_offset": 0, 00:15:45.001 "data_size": 0 00:15:45.001 } 00:15:45.001 ] 00:15:45.001 }' 00:15:45.001 02:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:45.001 02:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.568 02:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:45.827 [2024-07-11 02:21:36.120044] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:45.827 [2024-07-11 02:21:36.120082] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1608040 name Existed_Raid, state configuring 00:15:45.827 02:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:46.086 [2024-07-11 02:21:36.368737] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:46.086 [2024-07-11 02:21:36.370151] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:46.086 [2024-07-11 02:21:36.370183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:46.086 02:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:46.086 02:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:46.086 02:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:46.086 02:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:46.086 02:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:46.086 02:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:46.086 02:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:46.086 02:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:46.086 02:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:46.086 
02:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:46.086 02:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:46.086 02:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:46.086 02:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.086 02:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.345 02:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:46.345 "name": "Existed_Raid", 00:15:46.345 "uuid": "53c536b3-89a9-445a-bac6-cab7580b3155", 00:15:46.345 "strip_size_kb": 0, 00:15:46.345 "state": "configuring", 00:15:46.345 "raid_level": "raid1", 00:15:46.345 "superblock": true, 00:15:46.345 "num_base_bdevs": 2, 00:15:46.345 "num_base_bdevs_discovered": 1, 00:15:46.345 "num_base_bdevs_operational": 2, 00:15:46.345 "base_bdevs_list": [ 00:15:46.345 { 00:15:46.345 "name": "BaseBdev1", 00:15:46.345 "uuid": "14292690-98f8-48cc-bdc1-14dc046a6bff", 00:15:46.345 "is_configured": true, 00:15:46.345 "data_offset": 2048, 00:15:46.345 "data_size": 63488 00:15:46.345 }, 00:15:46.345 { 00:15:46.345 "name": "BaseBdev2", 00:15:46.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.345 "is_configured": false, 00:15:46.345 "data_offset": 0, 00:15:46.345 "data_size": 0 00:15:46.345 } 00:15:46.345 ] 00:15:46.345 }' 00:15:46.345 02:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:46.345 02:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.912 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:47.171 [2024-07-11 02:21:37.479618] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:47.171 [2024-07-11 02:21:37.479768] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x17babd0 00:15:47.171 [2024-07-11 02:21:37.479787] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:47.171 [2024-07-11 02:21:37.479969] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x160a7c0 00:15:47.171 [2024-07-11 02:21:37.480090] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x17babd0 00:15:47.171 [2024-07-11 02:21:37.480100] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x17babd0 00:15:47.171 [2024-07-11 02:21:37.480192] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.171 BaseBdev2 00:15:47.171 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:47.171 02:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:47.171 02:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:47.171 02:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:47.171 02:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 
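For reference, the RPC sequence this superblock variant drives condenses to roughly the sketch below; names, sizes, and flags are copied from the trace, though the real test also exercises the create-before-base-bdevs-exist path seen above:

    RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Two 32 MiB malloc base bdevs with 512-byte blocks (65536 blocks each).
    $RPC bdev_malloc_create 32 512 -b BaseBdev1
    $RPC bdev_malloc_create 32 512 -b BaseBdev2

    # RAID1 over both bases; -s enables the on-disk superblock, which is why
    # the JSON dumps here report data_offset 2048 and data_size 63488 instead
    # of the 0/65536 seen in the non-superblock test.
    $RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

    # Query the assembled array and filter it the same way the test does.
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
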
00:15:47.171 02:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:47.171 02:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:47.430 02:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:47.688 [ 00:15:47.688 { 00:15:47.689 "name": "BaseBdev2", 00:15:47.689 "aliases": [ 00:15:47.689 "d7d8346d-2e60-4684-84c1-b660b95f21b0" 00:15:47.689 ], 00:15:47.689 "product_name": "Malloc disk", 00:15:47.689 "block_size": 512, 00:15:47.689 "num_blocks": 65536, 00:15:47.689 "uuid": "d7d8346d-2e60-4684-84c1-b660b95f21b0", 00:15:47.689 "assigned_rate_limits": { 00:15:47.689 "rw_ios_per_sec": 0, 00:15:47.689 "rw_mbytes_per_sec": 0, 00:15:47.689 "r_mbytes_per_sec": 0, 00:15:47.689 "w_mbytes_per_sec": 0 00:15:47.689 }, 00:15:47.689 "claimed": true, 00:15:47.689 "claim_type": "exclusive_write", 00:15:47.689 "zoned": false, 00:15:47.689 "supported_io_types": { 00:15:47.689 "read": true, 00:15:47.689 "write": true, 00:15:47.689 "unmap": true, 00:15:47.689 "flush": true, 00:15:47.689 "reset": true, 00:15:47.689 "nvme_admin": false, 00:15:47.689 "nvme_io": false, 00:15:47.689 "nvme_io_md": false, 00:15:47.689 "write_zeroes": true, 00:15:47.689 "zcopy": true, 00:15:47.689 "get_zone_info": false, 00:15:47.689 "zone_management": false, 00:15:47.689 "zone_append": false, 00:15:47.689 "compare": false, 00:15:47.689 "compare_and_write": false, 00:15:47.689 "abort": true, 00:15:47.689 "seek_hole": false, 00:15:47.689 "seek_data": false, 00:15:47.689 "copy": true, 00:15:47.689 "nvme_iov_md": false 00:15:47.689 }, 00:15:47.689 "memory_domains": [ 00:15:47.689 { 00:15:47.689 "dma_device_id": "system", 00:15:47.689 "dma_device_type": 1 00:15:47.689 }, 00:15:47.689 { 00:15:47.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.689 "dma_device_type": 2 00:15:47.689 } 00:15:47.689 ], 00:15:47.689 "driver_specific": {} 00:15:47.689 } 00:15:47.689 ] 00:15:47.689 02:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:47.689 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:47.689 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:47.689 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:47.689 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:47.689 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:47.689 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:47.689 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:47.689 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:47.689 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:47.689 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:47.689 02:21:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:47.689 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:47.689 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.689 02:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.948 02:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:47.948 "name": "Existed_Raid", 00:15:47.948 "uuid": "53c536b3-89a9-445a-bac6-cab7580b3155", 00:15:47.948 "strip_size_kb": 0, 00:15:47.948 "state": "online", 00:15:47.948 "raid_level": "raid1", 00:15:47.948 "superblock": true, 00:15:47.948 "num_base_bdevs": 2, 00:15:47.948 "num_base_bdevs_discovered": 2, 00:15:47.948 "num_base_bdevs_operational": 2, 00:15:47.948 "base_bdevs_list": [ 00:15:47.948 { 00:15:47.948 "name": "BaseBdev1", 00:15:47.948 "uuid": "14292690-98f8-48cc-bdc1-14dc046a6bff", 00:15:47.948 "is_configured": true, 00:15:47.948 "data_offset": 2048, 00:15:47.948 "data_size": 63488 00:15:47.948 }, 00:15:47.948 { 00:15:47.948 "name": "BaseBdev2", 00:15:47.948 "uuid": "d7d8346d-2e60-4684-84c1-b660b95f21b0", 00:15:47.948 "is_configured": true, 00:15:47.948 "data_offset": 2048, 00:15:47.948 "data_size": 63488 00:15:47.948 } 00:15:47.948 ] 00:15:47.948 }' 00:15:47.948 02:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:47.948 02:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.515 02:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:48.515 02:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:48.515 02:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:48.515 02:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:48.515 02:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:48.515 02:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:48.515 02:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:48.515 02:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:48.774 [2024-07-11 02:21:39.060272] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:48.774 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:48.774 "name": "Existed_Raid", 00:15:48.774 "aliases": [ 00:15:48.774 "53c536b3-89a9-445a-bac6-cab7580b3155" 00:15:48.774 ], 00:15:48.774 "product_name": "Raid Volume", 00:15:48.774 "block_size": 512, 00:15:48.774 "num_blocks": 63488, 00:15:48.774 "uuid": "53c536b3-89a9-445a-bac6-cab7580b3155", 00:15:48.774 "assigned_rate_limits": { 00:15:48.774 "rw_ios_per_sec": 0, 00:15:48.774 "rw_mbytes_per_sec": 0, 00:15:48.774 "r_mbytes_per_sec": 0, 00:15:48.774 "w_mbytes_per_sec": 0 00:15:48.774 }, 00:15:48.774 "claimed": false, 00:15:48.774 "zoned": false, 00:15:48.774 "supported_io_types": { 00:15:48.774 "read": true, 
00:15:48.774 "write": true, 00:15:48.775 "unmap": false, 00:15:48.775 "flush": false, 00:15:48.775 "reset": true, 00:15:48.775 "nvme_admin": false, 00:15:48.775 "nvme_io": false, 00:15:48.775 "nvme_io_md": false, 00:15:48.775 "write_zeroes": true, 00:15:48.775 "zcopy": false, 00:15:48.775 "get_zone_info": false, 00:15:48.775 "zone_management": false, 00:15:48.775 "zone_append": false, 00:15:48.775 "compare": false, 00:15:48.775 "compare_and_write": false, 00:15:48.775 "abort": false, 00:15:48.775 "seek_hole": false, 00:15:48.775 "seek_data": false, 00:15:48.775 "copy": false, 00:15:48.775 "nvme_iov_md": false 00:15:48.775 }, 00:15:48.775 "memory_domains": [ 00:15:48.775 { 00:15:48.775 "dma_device_id": "system", 00:15:48.775 "dma_device_type": 1 00:15:48.775 }, 00:15:48.775 { 00:15:48.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.775 "dma_device_type": 2 00:15:48.775 }, 00:15:48.775 { 00:15:48.775 "dma_device_id": "system", 00:15:48.775 "dma_device_type": 1 00:15:48.775 }, 00:15:48.775 { 00:15:48.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.775 "dma_device_type": 2 00:15:48.775 } 00:15:48.775 ], 00:15:48.775 "driver_specific": { 00:15:48.775 "raid": { 00:15:48.775 "uuid": "53c536b3-89a9-445a-bac6-cab7580b3155", 00:15:48.775 "strip_size_kb": 0, 00:15:48.775 "state": "online", 00:15:48.775 "raid_level": "raid1", 00:15:48.775 "superblock": true, 00:15:48.775 "num_base_bdevs": 2, 00:15:48.775 "num_base_bdevs_discovered": 2, 00:15:48.775 "num_base_bdevs_operational": 2, 00:15:48.775 "base_bdevs_list": [ 00:15:48.775 { 00:15:48.775 "name": "BaseBdev1", 00:15:48.775 "uuid": "14292690-98f8-48cc-bdc1-14dc046a6bff", 00:15:48.775 "is_configured": true, 00:15:48.775 "data_offset": 2048, 00:15:48.775 "data_size": 63488 00:15:48.775 }, 00:15:48.775 { 00:15:48.775 "name": "BaseBdev2", 00:15:48.775 "uuid": "d7d8346d-2e60-4684-84c1-b660b95f21b0", 00:15:48.775 "is_configured": true, 00:15:48.775 "data_offset": 2048, 00:15:48.775 "data_size": 63488 00:15:48.775 } 00:15:48.775 ] 00:15:48.775 } 00:15:48.775 } 00:15:48.775 }' 00:15:48.775 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:48.775 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:48.775 BaseBdev2' 00:15:48.775 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:48.775 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:48.775 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:49.385 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:49.385 "name": "BaseBdev1", 00:15:49.385 "aliases": [ 00:15:49.385 "14292690-98f8-48cc-bdc1-14dc046a6bff" 00:15:49.385 ], 00:15:49.385 "product_name": "Malloc disk", 00:15:49.385 "block_size": 512, 00:15:49.385 "num_blocks": 65536, 00:15:49.385 "uuid": "14292690-98f8-48cc-bdc1-14dc046a6bff", 00:15:49.385 "assigned_rate_limits": { 00:15:49.385 "rw_ios_per_sec": 0, 00:15:49.385 "rw_mbytes_per_sec": 0, 00:15:49.385 "r_mbytes_per_sec": 0, 00:15:49.385 "w_mbytes_per_sec": 0 00:15:49.385 }, 00:15:49.385 "claimed": true, 00:15:49.385 "claim_type": "exclusive_write", 00:15:49.385 "zoned": false, 00:15:49.385 "supported_io_types": { 
00:15:49.385 "read": true, 00:15:49.385 "write": true, 00:15:49.385 "unmap": true, 00:15:49.385 "flush": true, 00:15:49.385 "reset": true, 00:15:49.385 "nvme_admin": false, 00:15:49.385 "nvme_io": false, 00:15:49.385 "nvme_io_md": false, 00:15:49.385 "write_zeroes": true, 00:15:49.385 "zcopy": true, 00:15:49.385 "get_zone_info": false, 00:15:49.385 "zone_management": false, 00:15:49.385 "zone_append": false, 00:15:49.385 "compare": false, 00:15:49.385 "compare_and_write": false, 00:15:49.385 "abort": true, 00:15:49.385 "seek_hole": false, 00:15:49.385 "seek_data": false, 00:15:49.385 "copy": true, 00:15:49.385 "nvme_iov_md": false 00:15:49.385 }, 00:15:49.385 "memory_domains": [ 00:15:49.385 { 00:15:49.385 "dma_device_id": "system", 00:15:49.385 "dma_device_type": 1 00:15:49.385 }, 00:15:49.385 { 00:15:49.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.385 "dma_device_type": 2 00:15:49.385 } 00:15:49.385 ], 00:15:49.385 "driver_specific": {} 00:15:49.385 }' 00:15:49.385 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:49.385 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:49.385 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:49.385 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:49.695 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:49.695 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:49.695 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:49.695 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:49.695 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:49.695 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:49.695 02:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:49.695 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:49.695 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:49.695 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:49.695 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:49.954 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:49.954 "name": "BaseBdev2", 00:15:49.954 "aliases": [ 00:15:49.954 "d7d8346d-2e60-4684-84c1-b660b95f21b0" 00:15:49.954 ], 00:15:49.954 "product_name": "Malloc disk", 00:15:49.954 "block_size": 512, 00:15:49.954 "num_blocks": 65536, 00:15:49.954 "uuid": "d7d8346d-2e60-4684-84c1-b660b95f21b0", 00:15:49.954 "assigned_rate_limits": { 00:15:49.954 "rw_ios_per_sec": 0, 00:15:49.954 "rw_mbytes_per_sec": 0, 00:15:49.954 "r_mbytes_per_sec": 0, 00:15:49.954 "w_mbytes_per_sec": 0 00:15:49.954 }, 00:15:49.954 "claimed": true, 00:15:49.954 "claim_type": "exclusive_write", 00:15:49.954 "zoned": false, 00:15:49.954 "supported_io_types": { 00:15:49.954 "read": true, 00:15:49.954 "write": true, 00:15:49.954 "unmap": true, 00:15:49.954 "flush": true, 00:15:49.954 "reset": 
true, 00:15:49.954 "nvme_admin": false, 00:15:49.954 "nvme_io": false, 00:15:49.954 "nvme_io_md": false, 00:15:49.954 "write_zeroes": true, 00:15:49.954 "zcopy": true, 00:15:49.954 "get_zone_info": false, 00:15:49.954 "zone_management": false, 00:15:49.954 "zone_append": false, 00:15:49.954 "compare": false, 00:15:49.954 "compare_and_write": false, 00:15:49.954 "abort": true, 00:15:49.954 "seek_hole": false, 00:15:49.954 "seek_data": false, 00:15:49.954 "copy": true, 00:15:49.954 "nvme_iov_md": false 00:15:49.954 }, 00:15:49.954 "memory_domains": [ 00:15:49.954 { 00:15:49.954 "dma_device_id": "system", 00:15:49.954 "dma_device_type": 1 00:15:49.954 }, 00:15:49.954 { 00:15:49.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.954 "dma_device_type": 2 00:15:49.954 } 00:15:49.954 ], 00:15:49.954 "driver_specific": {} 00:15:49.954 }' 00:15:49.954 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:49.954 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:50.213 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:50.213 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:50.213 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:50.213 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:50.213 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:50.213 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:50.213 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:50.213 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:50.213 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:50.213 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:50.213 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:50.472 [2024-07-11 02:21:40.852812] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:50.472 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:50.472 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:15:50.472 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:50.472 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:15:50.472 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:15:50.472 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:50.472 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:50.472 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:50.472 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:50.472 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # 
local strip_size=0 00:15:50.472 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:50.472 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:50.472 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:50.472 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:50.472 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:50.472 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.472 02:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.731 02:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:50.731 "name": "Existed_Raid", 00:15:50.731 "uuid": "53c536b3-89a9-445a-bac6-cab7580b3155", 00:15:50.731 "strip_size_kb": 0, 00:15:50.731 "state": "online", 00:15:50.731 "raid_level": "raid1", 00:15:50.731 "superblock": true, 00:15:50.731 "num_base_bdevs": 2, 00:15:50.731 "num_base_bdevs_discovered": 1, 00:15:50.731 "num_base_bdevs_operational": 1, 00:15:50.731 "base_bdevs_list": [ 00:15:50.731 { 00:15:50.731 "name": null, 00:15:50.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.731 "is_configured": false, 00:15:50.731 "data_offset": 2048, 00:15:50.731 "data_size": 63488 00:15:50.731 }, 00:15:50.731 { 00:15:50.731 "name": "BaseBdev2", 00:15:50.731 "uuid": "d7d8346d-2e60-4684-84c1-b660b95f21b0", 00:15:50.731 "is_configured": true, 00:15:50.731 "data_offset": 2048, 00:15:50.731 "data_size": 63488 00:15:50.731 } 00:15:50.731 ] 00:15:50.731 }' 00:15:50.731 02:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:50.731 02:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.298 02:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:51.298 02:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:51.557 02:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.557 02:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:51.558 02:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:51.558 02:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:51.558 02:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:51.817 [2024-07-11 02:21:42.201797] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:51.817 [2024-07-11 02:21:42.201881] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:51.817 [2024-07-11 02:21:42.212792] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.817 [2024-07-11 02:21:42.212828] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.817 [2024-07-11 02:21:42.212840] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17babd0 name Existed_Raid, state offline 00:15:51.817 02:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:51.817 02:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:51.817 02:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:52.076 02:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.076 02:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:52.076 02:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:52.076 02:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:52.076 02:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1907115 00:15:52.077 02:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1907115 ']' 00:15:52.077 02:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 1907115 00:15:52.077 02:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:15:52.077 02:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:52.077 02:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1907115 00:15:52.336 02:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:52.336 02:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:52.336 02:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1907115' 00:15:52.336 killing process with pid 1907115 00:15:52.336 02:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 1907115 00:15:52.336 [2024-07-11 02:21:42.540189] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.336 02:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 1907115 00:15:52.336 [2024-07-11 02:21:42.541059] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:52.336 02:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:52.336 00:15:52.336 real 0m11.233s 00:15:52.336 user 0m19.849s 00:15:52.336 sys 0m2.279s 00:15:52.336 02:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:52.336 02:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.336 ************************************ 00:15:52.336 END TEST raid_state_function_test_sb 00:15:52.336 ************************************ 00:15:52.596 02:21:42 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:52.596 02:21:42 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:15:52.596 02:21:42 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:52.596 02:21:42 bdev_raid -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:15:52.596 02:21:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:52.596 ************************************ 00:15:52.596 START TEST raid_superblock_test 00:15:52.596 ************************************ 00:15:52.596 02:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:15:52.596 02:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:15:52.596 02:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:15:52.596 02:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:15:52.596 02:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:15:52.596 02:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:15:52.596 02:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:15:52.596 02:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:15:52.596 02:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:15:52.596 02:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:15:52.596 02:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:15:52.596 02:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:15:52.596 02:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:15:52.596 02:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:15:52.596 02:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:15:52.596 02:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:15:52.596 02:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1908756 00:15:52.596 02:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1908756 /var/tmp/spdk-raid.sock 00:15:52.596 02:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:52.596 02:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 1908756 ']' 00:15:52.596 02:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:52.596 02:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:52.596 02:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:52.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:52.596 02:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:52.596 02:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.596 [2024-07-11 02:21:42.895092] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
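The raid_superblock_test that starts here builds its array from passthru bdevs rather than raw mallocs, so each base carries a fixed, predictable UUID that can later be matched against the superblock metadata. The setup traced below condenses to roughly this sequence (all names, UUIDs, and flags appear verbatim in the trace):

    RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Wrap each malloc bdev in a passthru bdev with a pinned UUID.
    $RPC bdev_malloc_create 32 512 -b malloc1
    $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $RPC bdev_malloc_create 32 512 -b malloc2
    $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

    # Assemble raid_bdev1 (RAID1, superblock enabled) over the passthru pair.
    $RPC bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
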
00:15:52.596 [2024-07-11 02:21:42.895172] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1908756 ] 00:15:52.856 [2024-07-11 02:21:43.036581] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.856 [2024-07-11 02:21:43.090379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.856 [2024-07-11 02:21:43.154491] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:52.856 [2024-07-11 02:21:43.154529] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.424 02:21:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:53.424 02:21:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:15:53.424 02:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:15:53.424 02:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:53.424 02:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:15:53.424 02:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:15:53.424 02:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:53.424 02:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:53.424 02:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:53.424 02:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:53.424 02:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:53.681 malloc1 00:15:53.681 02:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:53.940 [2024-07-11 02:21:44.241712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:53.940 [2024-07-11 02:21:44.241766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.940 [2024-07-11 02:21:44.241786] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23b8de0 00:15:53.940 [2024-07-11 02:21:44.241800] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.940 [2024-07-11 02:21:44.243365] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.940 [2024-07-11 02:21:44.243396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:53.940 pt1 00:15:53.940 02:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:53.940 02:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:53.940 02:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:15:53.940 02:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:15:53.940 02:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # 
local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:53.940 02:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:53.941 02:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:53.941 02:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:53.941 02:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:54.509 malloc2 00:15:54.509 02:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:54.768 [2024-07-11 02:21:45.081907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:54.768 [2024-07-11 02:21:45.081962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.768 [2024-07-11 02:21:45.081981] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23b0380 00:15:54.768 [2024-07-11 02:21:45.081994] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.768 [2024-07-11 02:21:45.083560] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.768 [2024-07-11 02:21:45.083592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:54.768 pt2 00:15:54.768 02:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:54.768 02:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:54.768 02:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:55.335 [2024-07-11 02:21:45.591254] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:55.335 [2024-07-11 02:21:45.592600] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:55.335 [2024-07-11 02:21:45.592748] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x23ba9e0 00:15:55.335 [2024-07-11 02:21:45.592773] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:55.335 [2024-07-11 02:21:45.592982] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23b1a70 00:15:55.335 [2024-07-11 02:21:45.593137] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x23ba9e0 00:15:55.335 [2024-07-11 02:21:45.593147] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x23ba9e0 00:15:55.335 [2024-07-11 02:21:45.593252] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.335 02:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:55.335 02:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:55.335 02:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:55.336 02:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:55.336 02:21:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:55.336 02:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:55.336 02:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:55.336 02:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:55.336 02:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:55.336 02:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:55.336 02:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.336 02:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.595 02:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:55.595 "name": "raid_bdev1", 00:15:55.595 "uuid": "8b9e86af-47af-4241-a850-455955f01489", 00:15:55.595 "strip_size_kb": 0, 00:15:55.595 "state": "online", 00:15:55.595 "raid_level": "raid1", 00:15:55.595 "superblock": true, 00:15:55.595 "num_base_bdevs": 2, 00:15:55.596 "num_base_bdevs_discovered": 2, 00:15:55.596 "num_base_bdevs_operational": 2, 00:15:55.596 "base_bdevs_list": [ 00:15:55.596 { 00:15:55.596 "name": "pt1", 00:15:55.596 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:55.596 "is_configured": true, 00:15:55.596 "data_offset": 2048, 00:15:55.596 "data_size": 63488 00:15:55.596 }, 00:15:55.596 { 00:15:55.596 "name": "pt2", 00:15:55.596 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.596 "is_configured": true, 00:15:55.596 "data_offset": 2048, 00:15:55.596 "data_size": 63488 00:15:55.596 } 00:15:55.596 ] 00:15:55.596 }' 00:15:55.596 02:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:55.596 02:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.533 02:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:15:56.533 02:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:56.533 02:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:56.533 02:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:56.533 02:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:56.533 02:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:56.533 02:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:56.533 02:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:56.533 [2024-07-11 02:21:46.914991] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.533 02:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:56.533 "name": "raid_bdev1", 00:15:56.533 "aliases": [ 00:15:56.533 "8b9e86af-47af-4241-a850-455955f01489" 00:15:56.533 ], 00:15:56.533 "product_name": "Raid Volume", 00:15:56.533 "block_size": 512, 00:15:56.533 "num_blocks": 63488, 00:15:56.533 "uuid": 
"8b9e86af-47af-4241-a850-455955f01489", 00:15:56.533 "assigned_rate_limits": { 00:15:56.533 "rw_ios_per_sec": 0, 00:15:56.533 "rw_mbytes_per_sec": 0, 00:15:56.533 "r_mbytes_per_sec": 0, 00:15:56.533 "w_mbytes_per_sec": 0 00:15:56.533 }, 00:15:56.533 "claimed": false, 00:15:56.533 "zoned": false, 00:15:56.533 "supported_io_types": { 00:15:56.533 "read": true, 00:15:56.533 "write": true, 00:15:56.533 "unmap": false, 00:15:56.533 "flush": false, 00:15:56.533 "reset": true, 00:15:56.533 "nvme_admin": false, 00:15:56.533 "nvme_io": false, 00:15:56.533 "nvme_io_md": false, 00:15:56.533 "write_zeroes": true, 00:15:56.533 "zcopy": false, 00:15:56.533 "get_zone_info": false, 00:15:56.533 "zone_management": false, 00:15:56.533 "zone_append": false, 00:15:56.533 "compare": false, 00:15:56.533 "compare_and_write": false, 00:15:56.533 "abort": false, 00:15:56.533 "seek_hole": false, 00:15:56.533 "seek_data": false, 00:15:56.533 "copy": false, 00:15:56.533 "nvme_iov_md": false 00:15:56.533 }, 00:15:56.533 "memory_domains": [ 00:15:56.533 { 00:15:56.533 "dma_device_id": "system", 00:15:56.533 "dma_device_type": 1 00:15:56.533 }, 00:15:56.533 { 00:15:56.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.533 "dma_device_type": 2 00:15:56.533 }, 00:15:56.533 { 00:15:56.533 "dma_device_id": "system", 00:15:56.533 "dma_device_type": 1 00:15:56.533 }, 00:15:56.533 { 00:15:56.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.533 "dma_device_type": 2 00:15:56.534 } 00:15:56.534 ], 00:15:56.534 "driver_specific": { 00:15:56.534 "raid": { 00:15:56.534 "uuid": "8b9e86af-47af-4241-a850-455955f01489", 00:15:56.534 "strip_size_kb": 0, 00:15:56.534 "state": "online", 00:15:56.534 "raid_level": "raid1", 00:15:56.534 "superblock": true, 00:15:56.534 "num_base_bdevs": 2, 00:15:56.534 "num_base_bdevs_discovered": 2, 00:15:56.534 "num_base_bdevs_operational": 2, 00:15:56.534 "base_bdevs_list": [ 00:15:56.534 { 00:15:56.534 "name": "pt1", 00:15:56.534 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:56.534 "is_configured": true, 00:15:56.534 "data_offset": 2048, 00:15:56.534 "data_size": 63488 00:15:56.534 }, 00:15:56.534 { 00:15:56.534 "name": "pt2", 00:15:56.534 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:56.534 "is_configured": true, 00:15:56.534 "data_offset": 2048, 00:15:56.534 "data_size": 63488 00:15:56.534 } 00:15:56.534 ] 00:15:56.534 } 00:15:56.534 } 00:15:56.534 }' 00:15:56.534 02:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:56.793 02:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:56.793 pt2' 00:15:56.793 02:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:56.793 02:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:56.793 02:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:57.051 02:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:57.051 "name": "pt1", 00:15:57.051 "aliases": [ 00:15:57.051 "00000000-0000-0000-0000-000000000001" 00:15:57.051 ], 00:15:57.051 "product_name": "passthru", 00:15:57.051 "block_size": 512, 00:15:57.051 "num_blocks": 65536, 00:15:57.051 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:57.051 "assigned_rate_limits": { 00:15:57.051 
"rw_ios_per_sec": 0, 00:15:57.051 "rw_mbytes_per_sec": 0, 00:15:57.051 "r_mbytes_per_sec": 0, 00:15:57.051 "w_mbytes_per_sec": 0 00:15:57.051 }, 00:15:57.051 "claimed": true, 00:15:57.051 "claim_type": "exclusive_write", 00:15:57.051 "zoned": false, 00:15:57.051 "supported_io_types": { 00:15:57.051 "read": true, 00:15:57.051 "write": true, 00:15:57.051 "unmap": true, 00:15:57.051 "flush": true, 00:15:57.051 "reset": true, 00:15:57.051 "nvme_admin": false, 00:15:57.051 "nvme_io": false, 00:15:57.051 "nvme_io_md": false, 00:15:57.051 "write_zeroes": true, 00:15:57.051 "zcopy": true, 00:15:57.051 "get_zone_info": false, 00:15:57.051 "zone_management": false, 00:15:57.051 "zone_append": false, 00:15:57.051 "compare": false, 00:15:57.051 "compare_and_write": false, 00:15:57.051 "abort": true, 00:15:57.051 "seek_hole": false, 00:15:57.051 "seek_data": false, 00:15:57.051 "copy": true, 00:15:57.051 "nvme_iov_md": false 00:15:57.051 }, 00:15:57.051 "memory_domains": [ 00:15:57.051 { 00:15:57.051 "dma_device_id": "system", 00:15:57.051 "dma_device_type": 1 00:15:57.051 }, 00:15:57.051 { 00:15:57.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.051 "dma_device_type": 2 00:15:57.051 } 00:15:57.051 ], 00:15:57.051 "driver_specific": { 00:15:57.051 "passthru": { 00:15:57.051 "name": "pt1", 00:15:57.051 "base_bdev_name": "malloc1" 00:15:57.051 } 00:15:57.051 } 00:15:57.051 }' 00:15:57.051 02:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.051 02:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.051 02:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:57.051 02:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.051 02:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.051 02:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:57.051 02:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.051 02:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.309 02:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:57.309 02:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.309 02:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.309 02:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:57.309 02:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:57.309 02:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:57.309 02:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:57.568 02:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:57.568 "name": "pt2", 00:15:57.568 "aliases": [ 00:15:57.568 "00000000-0000-0000-0000-000000000002" 00:15:57.568 ], 00:15:57.568 "product_name": "passthru", 00:15:57.568 "block_size": 512, 00:15:57.568 "num_blocks": 65536, 00:15:57.568 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:57.568 "assigned_rate_limits": { 00:15:57.568 "rw_ios_per_sec": 0, 00:15:57.568 "rw_mbytes_per_sec": 0, 00:15:57.568 "r_mbytes_per_sec": 0, 00:15:57.568 "w_mbytes_per_sec": 0 
00:15:57.568 }, 00:15:57.568 "claimed": true, 00:15:57.568 "claim_type": "exclusive_write", 00:15:57.568 "zoned": false, 00:15:57.568 "supported_io_types": { 00:15:57.568 "read": true, 00:15:57.568 "write": true, 00:15:57.568 "unmap": true, 00:15:57.568 "flush": true, 00:15:57.568 "reset": true, 00:15:57.568 "nvme_admin": false, 00:15:57.568 "nvme_io": false, 00:15:57.568 "nvme_io_md": false, 00:15:57.568 "write_zeroes": true, 00:15:57.568 "zcopy": true, 00:15:57.568 "get_zone_info": false, 00:15:57.568 "zone_management": false, 00:15:57.568 "zone_append": false, 00:15:57.568 "compare": false, 00:15:57.568 "compare_and_write": false, 00:15:57.568 "abort": true, 00:15:57.568 "seek_hole": false, 00:15:57.568 "seek_data": false, 00:15:57.568 "copy": true, 00:15:57.568 "nvme_iov_md": false 00:15:57.568 }, 00:15:57.568 "memory_domains": [ 00:15:57.568 { 00:15:57.568 "dma_device_id": "system", 00:15:57.568 "dma_device_type": 1 00:15:57.568 }, 00:15:57.568 { 00:15:57.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.568 "dma_device_type": 2 00:15:57.568 } 00:15:57.568 ], 00:15:57.568 "driver_specific": { 00:15:57.568 "passthru": { 00:15:57.568 "name": "pt2", 00:15:57.568 "base_bdev_name": "malloc2" 00:15:57.568 } 00:15:57.568 } 00:15:57.568 }' 00:15:57.568 02:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.568 02:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.568 02:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:57.568 02:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.568 02:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.568 02:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:57.568 02:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.568 02:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.826 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:57.826 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.826 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.826 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:57.826 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:57.826 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:15:58.084 [2024-07-11 02:21:48.250533] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:58.084 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=8b9e86af-47af-4241-a850-455955f01489 00:15:58.084 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 8b9e86af-47af-4241-a850-455955f01489 ']' 00:15:58.084 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:58.084 [2024-07-11 02:21:48.502944] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:58.084 [2024-07-11 02:21:48.502970] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:15:58.084 [2024-07-11 02:21:48.503029] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.084 [2024-07-11 02:21:48.503082] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.084 [2024-07-11 02:21:48.503094] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23ba9e0 name raid_bdev1, state offline 00:15:58.342 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.342 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:15:58.601 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:15:58.601 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:15:58.601 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:58.601 02:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:58.859 02:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:58.859 02:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:58.859 02:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:58.859 02:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:59.117 02:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:15:59.117 02:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:59.117 02:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:15:59.117 02:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:59.117 02:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:59.117 02:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:59.117 02:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:59.117 02:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:59.117 02:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:59.117 02:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:59.117 02:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:59.117 02:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:15:59.117 02:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:59.376 [2024-07-11 02:21:49.742155] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:59.376 [2024-07-11 02:21:49.743460] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:59.376 [2024-07-11 02:21:49.743518] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:59.376 [2024-07-11 02:21:49.743558] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:59.376 [2024-07-11 02:21:49.743577] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:59.376 [2024-07-11 02:21:49.743587] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23b1bd0 name raid_bdev1, state configuring 00:15:59.376 request: 00:15:59.376 { 00:15:59.376 "name": "raid_bdev1", 00:15:59.376 "raid_level": "raid1", 00:15:59.376 "base_bdevs": [ 00:15:59.376 "malloc1", 00:15:59.376 "malloc2" 00:15:59.376 ], 00:15:59.376 "superblock": false, 00:15:59.376 "method": "bdev_raid_create", 00:15:59.376 "req_id": 1 00:15:59.376 } 00:15:59.376 Got JSON-RPC error response 00:15:59.376 response: 00:15:59.376 { 00:15:59.376 "code": -17, 00:15:59.376 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:59.376 } 00:15:59.376 02:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:15:59.376 02:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:59.376 02:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:59.376 02:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:59.376 02:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.376 02:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:15:59.634 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:15:59.634 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:15:59.634 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:59.893 [2024-07-11 02:21:50.239424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:59.893 [2024-07-11 02:21:50.239482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.893 [2024-07-11 02:21:50.239510] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23b9f10 00:15:59.893 [2024-07-11 02:21:50.239523] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.893 [2024-07-11 02:21:50.241095] vbdev_passthru.c: 708:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:15:59.893 [2024-07-11 02:21:50.241126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:59.893 [2024-07-11 02:21:50.241195] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:59.893 [2024-07-11 02:21:50.241221] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:59.893 pt1 00:15:59.893 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:59.893 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:59.893 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:59.893 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:59.893 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:59.893 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:59.893 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:59.893 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:59.893 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:59.893 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:59.893 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.893 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.151 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:00.151 "name": "raid_bdev1", 00:16:00.151 "uuid": "8b9e86af-47af-4241-a850-455955f01489", 00:16:00.151 "strip_size_kb": 0, 00:16:00.151 "state": "configuring", 00:16:00.151 "raid_level": "raid1", 00:16:00.151 "superblock": true, 00:16:00.151 "num_base_bdevs": 2, 00:16:00.152 "num_base_bdevs_discovered": 1, 00:16:00.152 "num_base_bdevs_operational": 2, 00:16:00.152 "base_bdevs_list": [ 00:16:00.152 { 00:16:00.152 "name": "pt1", 00:16:00.152 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:00.152 "is_configured": true, 00:16:00.152 "data_offset": 2048, 00:16:00.152 "data_size": 63488 00:16:00.152 }, 00:16:00.152 { 00:16:00.152 "name": null, 00:16:00.152 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.152 "is_configured": false, 00:16:00.152 "data_offset": 2048, 00:16:00.152 "data_size": 63488 00:16:00.152 } 00:16:00.152 ] 00:16:00.152 }' 00:16:00.152 02:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:00.152 02:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.719 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:16:00.719 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:16:00.719 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:00.719 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:16:00.979 [2024-07-11 02:21:51.370447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:00.979 [2024-07-11 02:21:51.370503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.979 [2024-07-11 02:21:51.370522] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23b5410 00:16:00.979 [2024-07-11 02:21:51.370535] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.979 [2024-07-11 02:21:51.370890] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.979 [2024-07-11 02:21:51.370910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:00.979 [2024-07-11 02:21:51.370974] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:00.979 [2024-07-11 02:21:51.370999] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:00.979 [2024-07-11 02:21:51.371095] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2205900 00:16:00.979 [2024-07-11 02:21:51.371106] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:00.979 [2024-07-11 02:21:51.371270] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x229e360 00:16:00.979 [2024-07-11 02:21:51.371393] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2205900 00:16:00.979 [2024-07-11 02:21:51.371403] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2205900 00:16:00.979 [2024-07-11 02:21:51.371502] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.979 pt2 00:16:00.979 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:00.979 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:00.979 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:00.979 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:00.979 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:00.979 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:00.979 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:00.979 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:00.979 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:00.979 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:00.979 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:00.979 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:00.979 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.979 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.238 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:01.238 "name": "raid_bdev1", 00:16:01.238 
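
This is the superblock round-trip at the heart of the test: after raid_bdev1 and both passthru bdevs were deleted, the request block above shows bdev_raid_create over the raw malloc bdevs being rejected with -17 "File exists", because each malloc bdev still carries the superblock of a different raid bdev. Re-registering the passthru bdevs instead lets the examine path find those superblocks and reassemble raid_bdev1 with no explicit create call. A sketch of that recovery, using the same RPCs as the trace (rpc.py abbreviates the full path):

  # re-register the members; examine finds the raid superblock on each
  rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # no bdev_raid_create needed: the array comes back by itself
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'   # online
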
"uuid": "8b9e86af-47af-4241-a850-455955f01489", 00:16:01.238 "strip_size_kb": 0, 00:16:01.238 "state": "online", 00:16:01.238 "raid_level": "raid1", 00:16:01.238 "superblock": true, 00:16:01.238 "num_base_bdevs": 2, 00:16:01.238 "num_base_bdevs_discovered": 2, 00:16:01.238 "num_base_bdevs_operational": 2, 00:16:01.238 "base_bdevs_list": [ 00:16:01.238 { 00:16:01.238 "name": "pt1", 00:16:01.238 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:01.238 "is_configured": true, 00:16:01.238 "data_offset": 2048, 00:16:01.238 "data_size": 63488 00:16:01.238 }, 00:16:01.238 { 00:16:01.238 "name": "pt2", 00:16:01.238 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:01.238 "is_configured": true, 00:16:01.238 "data_offset": 2048, 00:16:01.238 "data_size": 63488 00:16:01.238 } 00:16:01.238 ] 00:16:01.238 }' 00:16:01.238 02:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:01.238 02:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.807 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:16:01.807 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:01.807 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:01.807 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:01.807 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:01.807 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:01.807 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:01.807 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:02.065 [2024-07-11 02:21:52.445566] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:02.065 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:02.065 "name": "raid_bdev1", 00:16:02.065 "aliases": [ 00:16:02.065 "8b9e86af-47af-4241-a850-455955f01489" 00:16:02.066 ], 00:16:02.066 "product_name": "Raid Volume", 00:16:02.066 "block_size": 512, 00:16:02.066 "num_blocks": 63488, 00:16:02.066 "uuid": "8b9e86af-47af-4241-a850-455955f01489", 00:16:02.066 "assigned_rate_limits": { 00:16:02.066 "rw_ios_per_sec": 0, 00:16:02.066 "rw_mbytes_per_sec": 0, 00:16:02.066 "r_mbytes_per_sec": 0, 00:16:02.066 "w_mbytes_per_sec": 0 00:16:02.066 }, 00:16:02.066 "claimed": false, 00:16:02.066 "zoned": false, 00:16:02.066 "supported_io_types": { 00:16:02.066 "read": true, 00:16:02.066 "write": true, 00:16:02.066 "unmap": false, 00:16:02.066 "flush": false, 00:16:02.066 "reset": true, 00:16:02.066 "nvme_admin": false, 00:16:02.066 "nvme_io": false, 00:16:02.066 "nvme_io_md": false, 00:16:02.066 "write_zeroes": true, 00:16:02.066 "zcopy": false, 00:16:02.066 "get_zone_info": false, 00:16:02.066 "zone_management": false, 00:16:02.066 "zone_append": false, 00:16:02.066 "compare": false, 00:16:02.066 "compare_and_write": false, 00:16:02.066 "abort": false, 00:16:02.066 "seek_hole": false, 00:16:02.066 "seek_data": false, 00:16:02.066 "copy": false, 00:16:02.066 "nvme_iov_md": false 00:16:02.066 }, 00:16:02.066 "memory_domains": [ 00:16:02.066 { 00:16:02.066 "dma_device_id": "system", 00:16:02.066 "dma_device_type": 1 
00:16:02.066 }, 00:16:02.066 { 00:16:02.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.066 "dma_device_type": 2 00:16:02.066 }, 00:16:02.066 { 00:16:02.066 "dma_device_id": "system", 00:16:02.066 "dma_device_type": 1 00:16:02.066 }, 00:16:02.066 { 00:16:02.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.066 "dma_device_type": 2 00:16:02.066 } 00:16:02.066 ], 00:16:02.066 "driver_specific": { 00:16:02.066 "raid": { 00:16:02.066 "uuid": "8b9e86af-47af-4241-a850-455955f01489", 00:16:02.066 "strip_size_kb": 0, 00:16:02.066 "state": "online", 00:16:02.066 "raid_level": "raid1", 00:16:02.066 "superblock": true, 00:16:02.066 "num_base_bdevs": 2, 00:16:02.066 "num_base_bdevs_discovered": 2, 00:16:02.066 "num_base_bdevs_operational": 2, 00:16:02.066 "base_bdevs_list": [ 00:16:02.066 { 00:16:02.066 "name": "pt1", 00:16:02.066 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:02.066 "is_configured": true, 00:16:02.066 "data_offset": 2048, 00:16:02.066 "data_size": 63488 00:16:02.066 }, 00:16:02.066 { 00:16:02.066 "name": "pt2", 00:16:02.066 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:02.066 "is_configured": true, 00:16:02.066 "data_offset": 2048, 00:16:02.066 "data_size": 63488 00:16:02.066 } 00:16:02.066 ] 00:16:02.066 } 00:16:02.066 } 00:16:02.066 }' 00:16:02.066 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:02.334 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:02.334 pt2' 00:16:02.334 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:02.335 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:02.335 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:02.595 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:02.595 "name": "pt1", 00:16:02.595 "aliases": [ 00:16:02.595 "00000000-0000-0000-0000-000000000001" 00:16:02.595 ], 00:16:02.595 "product_name": "passthru", 00:16:02.595 "block_size": 512, 00:16:02.595 "num_blocks": 65536, 00:16:02.595 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:02.595 "assigned_rate_limits": { 00:16:02.595 "rw_ios_per_sec": 0, 00:16:02.595 "rw_mbytes_per_sec": 0, 00:16:02.595 "r_mbytes_per_sec": 0, 00:16:02.595 "w_mbytes_per_sec": 0 00:16:02.595 }, 00:16:02.595 "claimed": true, 00:16:02.595 "claim_type": "exclusive_write", 00:16:02.595 "zoned": false, 00:16:02.595 "supported_io_types": { 00:16:02.595 "read": true, 00:16:02.595 "write": true, 00:16:02.595 "unmap": true, 00:16:02.595 "flush": true, 00:16:02.595 "reset": true, 00:16:02.595 "nvme_admin": false, 00:16:02.595 "nvme_io": false, 00:16:02.595 "nvme_io_md": false, 00:16:02.595 "write_zeroes": true, 00:16:02.595 "zcopy": true, 00:16:02.595 "get_zone_info": false, 00:16:02.595 "zone_management": false, 00:16:02.595 "zone_append": false, 00:16:02.595 "compare": false, 00:16:02.595 "compare_and_write": false, 00:16:02.595 "abort": true, 00:16:02.595 "seek_hole": false, 00:16:02.595 "seek_data": false, 00:16:02.595 "copy": true, 00:16:02.596 "nvme_iov_md": false 00:16:02.596 }, 00:16:02.596 "memory_domains": [ 00:16:02.596 { 00:16:02.596 "dma_device_id": "system", 00:16:02.596 "dma_device_type": 1 00:16:02.596 }, 00:16:02.596 { 00:16:02.596 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:02.596 "dma_device_type": 2 00:16:02.596 } 00:16:02.596 ], 00:16:02.596 "driver_specific": { 00:16:02.596 "passthru": { 00:16:02.596 "name": "pt1", 00:16:02.596 "base_bdev_name": "malloc1" 00:16:02.596 } 00:16:02.596 } 00:16:02.596 }' 00:16:02.596 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:02.596 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:02.596 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:02.596 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:02.596 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:02.596 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:02.596 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:02.596 02:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:02.854 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:02.854 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:02.854 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:02.854 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:02.854 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:02.854 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:02.854 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:03.113 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:03.113 "name": "pt2", 00:16:03.113 "aliases": [ 00:16:03.113 "00000000-0000-0000-0000-000000000002" 00:16:03.113 ], 00:16:03.113 "product_name": "passthru", 00:16:03.113 "block_size": 512, 00:16:03.113 "num_blocks": 65536, 00:16:03.113 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:03.113 "assigned_rate_limits": { 00:16:03.113 "rw_ios_per_sec": 0, 00:16:03.113 "rw_mbytes_per_sec": 0, 00:16:03.113 "r_mbytes_per_sec": 0, 00:16:03.113 "w_mbytes_per_sec": 0 00:16:03.113 }, 00:16:03.113 "claimed": true, 00:16:03.113 "claim_type": "exclusive_write", 00:16:03.113 "zoned": false, 00:16:03.113 "supported_io_types": { 00:16:03.113 "read": true, 00:16:03.113 "write": true, 00:16:03.113 "unmap": true, 00:16:03.113 "flush": true, 00:16:03.113 "reset": true, 00:16:03.113 "nvme_admin": false, 00:16:03.113 "nvme_io": false, 00:16:03.113 "nvme_io_md": false, 00:16:03.113 "write_zeroes": true, 00:16:03.113 "zcopy": true, 00:16:03.113 "get_zone_info": false, 00:16:03.113 "zone_management": false, 00:16:03.113 "zone_append": false, 00:16:03.113 "compare": false, 00:16:03.113 "compare_and_write": false, 00:16:03.113 "abort": true, 00:16:03.113 "seek_hole": false, 00:16:03.113 "seek_data": false, 00:16:03.113 "copy": true, 00:16:03.113 "nvme_iov_md": false 00:16:03.113 }, 00:16:03.113 "memory_domains": [ 00:16:03.113 { 00:16:03.113 "dma_device_id": "system", 00:16:03.113 "dma_device_type": 1 00:16:03.113 }, 00:16:03.113 { 00:16:03.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.113 "dma_device_type": 2 00:16:03.113 } 00:16:03.113 ], 00:16:03.113 "driver_specific": { 
00:16:03.113 "passthru": { 00:16:03.113 "name": "pt2", 00:16:03.113 "base_bdev_name": "malloc2" 00:16:03.113 } 00:16:03.113 } 00:16:03.113 }' 00:16:03.113 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:03.113 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:03.113 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:03.113 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:03.113 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:03.372 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:03.372 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:03.372 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:03.372 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:03.372 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:03.372 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:03.372 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:03.372 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:03.372 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:16:03.631 [2024-07-11 02:21:53.969613] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:03.631 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 8b9e86af-47af-4241-a850-455955f01489 '!=' 8b9e86af-47af-4241-a850-455955f01489 ']' 00:16:03.631 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:16:03.631 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:03.631 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:03.631 02:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:03.890 [2024-07-11 02:21:54.222046] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:03.890 02:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:03.890 02:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:03.890 02:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:03.890 02:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:03.890 02:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:03.890 02:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:03.890 02:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:03.890 02:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:03.890 02:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:03.890 02:21:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:03.890 02:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.890 02:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.149 02:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:04.149 "name": "raid_bdev1", 00:16:04.149 "uuid": "8b9e86af-47af-4241-a850-455955f01489", 00:16:04.149 "strip_size_kb": 0, 00:16:04.149 "state": "online", 00:16:04.149 "raid_level": "raid1", 00:16:04.149 "superblock": true, 00:16:04.149 "num_base_bdevs": 2, 00:16:04.149 "num_base_bdevs_discovered": 1, 00:16:04.149 "num_base_bdevs_operational": 1, 00:16:04.149 "base_bdevs_list": [ 00:16:04.149 { 00:16:04.149 "name": null, 00:16:04.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.149 "is_configured": false, 00:16:04.149 "data_offset": 2048, 00:16:04.149 "data_size": 63488 00:16:04.149 }, 00:16:04.149 { 00:16:04.149 "name": "pt2", 00:16:04.149 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:04.149 "is_configured": true, 00:16:04.149 "data_offset": 2048, 00:16:04.149 "data_size": 63488 00:16:04.149 } 00:16:04.149 ] 00:16:04.149 }' 00:16:04.149 02:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:04.149 02:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.718 02:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:04.977 [2024-07-11 02:21:55.312920] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:04.977 [2024-07-11 02:21:55.312950] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:04.977 [2024-07-11 02:21:55.313006] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:04.977 [2024-07-11 02:21:55.313051] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:04.977 [2024-07-11 02:21:55.313062] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2205900 name raid_bdev1, state offline 00:16:04.977 02:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.977 02:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:16:05.236 02:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:16:05.236 02:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:16:05.236 02:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:16:05.236 02:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:05.236 02:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:05.495 02:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:16:05.495 02:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:05.495 02:21:55 
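
What follows is the single-member reassembly case: with the array and pt2 torn down above, re-creating only pt2 is enough for the examine path to bring raid_bdev1 back online in degraded form, since a raid1 superblock on one surviving member suffices to start the array (the trace below shows "raid superblock found on bdev pt2" followed by an online dump with one discovered member). Sketched standalone, under the same naming as the trace (rpc.py abbreviates the full path):

  # only one superblock-bearing leg present; raid1 still assembles, degraded
  rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1").num_base_bdevs_discovered'   # 1
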
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:16:05.495 02:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:16:05.496 02:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:16:05.496 02:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:05.754 [2024-07-11 02:21:56.074888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:05.754 [2024-07-11 02:21:56.074933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.754 [2024-07-11 02:21:56.074950] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23b4f90 00:16:05.754 [2024-07-11 02:21:56.074962] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.754 [2024-07-11 02:21:56.076501] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.754 [2024-07-11 02:21:56.076528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:05.754 [2024-07-11 02:21:56.076588] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:05.754 [2024-07-11 02:21:56.076612] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:05.754 [2024-07-11 02:21:56.076691] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2206040 00:16:05.754 [2024-07-11 02:21:56.076702] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:05.754 [2024-07-11 02:21:56.076871] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23b3de0 00:16:05.755 [2024-07-11 02:21:56.076992] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2206040 00:16:05.755 [2024-07-11 02:21:56.077002] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2206040 00:16:05.755 [2024-07-11 02:21:56.077092] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.755 pt2 00:16:05.755 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:05.755 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:05.755 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:05.755 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:05.755 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:05.755 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:05.755 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:05.755 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:05.755 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:05.755 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:05.755 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:16:05.755 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.014 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:06.014 "name": "raid_bdev1", 00:16:06.014 "uuid": "8b9e86af-47af-4241-a850-455955f01489", 00:16:06.014 "strip_size_kb": 0, 00:16:06.014 "state": "online", 00:16:06.014 "raid_level": "raid1", 00:16:06.014 "superblock": true, 00:16:06.014 "num_base_bdevs": 2, 00:16:06.014 "num_base_bdevs_discovered": 1, 00:16:06.014 "num_base_bdevs_operational": 1, 00:16:06.014 "base_bdevs_list": [ 00:16:06.014 { 00:16:06.014 "name": null, 00:16:06.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.014 "is_configured": false, 00:16:06.014 "data_offset": 2048, 00:16:06.014 "data_size": 63488 00:16:06.014 }, 00:16:06.014 { 00:16:06.014 "name": "pt2", 00:16:06.014 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:06.014 "is_configured": true, 00:16:06.014 "data_offset": 2048, 00:16:06.014 "data_size": 63488 00:16:06.014 } 00:16:06.014 ] 00:16:06.014 }' 00:16:06.014 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:06.014 02:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.585 02:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:06.847 [2024-07-11 02:21:57.097588] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:06.847 [2024-07-11 02:21:57.097616] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:06.847 [2024-07-11 02:21:57.097669] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.847 [2024-07-11 02:21:57.097710] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.847 [2024-07-11 02:21:57.097722] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2206040 name raid_bdev1, state offline 00:16:06.847 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.847 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:16:07.106 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:16:07.106 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:16:07.106 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:16:07.106 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:07.365 [2024-07-11 02:21:57.610907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:07.365 [2024-07-11 02:21:57.610953] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.365 [2024-07-11 02:21:57.610969] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23b51c0 00:16:07.365 [2024-07-11 02:21:57.610982] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.365 [2024-07-11 02:21:57.612520] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.365 [2024-07-11 02:21:57.612549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:07.365 [2024-07-11 02:21:57.612610] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:07.365 [2024-07-11 02:21:57.612634] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:07.365 [2024-07-11 02:21:57.612726] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:07.365 [2024-07-11 02:21:57.612739] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:07.365 [2024-07-11 02:21:57.612751] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23b3a30 name raid_bdev1, state configuring 00:16:07.365 [2024-07-11 02:21:57.612780] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:07.365 [2024-07-11 02:21:57.612832] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x23b3cb0 00:16:07.365 [2024-07-11 02:21:57.612842] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:07.365 [2024-07-11 02:21:57.613000] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23b07d0 00:16:07.365 [2024-07-11 02:21:57.613119] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x23b3cb0 00:16:07.365 [2024-07-11 02:21:57.613129] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x23b3cb0 00:16:07.365 [2024-07-11 02:21:57.613228] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.365 pt1 00:16:07.365 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:16:07.365 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:07.365 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:07.365 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:07.365 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:07.365 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:07.365 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:07.365 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:07.365 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:07.365 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:07.365 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:07.365 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.365 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.624 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:07.624 "name": "raid_bdev1", 00:16:07.624 "uuid": "8b9e86af-47af-4241-a850-455955f01489", 00:16:07.624 "strip_size_kb": 0, 00:16:07.624 "state": "online", 
00:16:07.624 "raid_level": "raid1", 00:16:07.624 "superblock": true, 00:16:07.624 "num_base_bdevs": 2, 00:16:07.624 "num_base_bdevs_discovered": 1, 00:16:07.624 "num_base_bdevs_operational": 1, 00:16:07.624 "base_bdevs_list": [ 00:16:07.624 { 00:16:07.624 "name": null, 00:16:07.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.624 "is_configured": false, 00:16:07.624 "data_offset": 2048, 00:16:07.624 "data_size": 63488 00:16:07.624 }, 00:16:07.624 { 00:16:07.624 "name": "pt2", 00:16:07.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:07.624 "is_configured": true, 00:16:07.624 "data_offset": 2048, 00:16:07.624 "data_size": 63488 00:16:07.624 } 00:16:07.624 ] 00:16:07.624 }' 00:16:07.624 02:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:07.624 02:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.191 02:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:16:08.191 02:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:08.450 02:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:16:08.450 02:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:08.450 02:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:16:08.710 [2024-07-11 02:21:58.958719] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:08.710 02:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 8b9e86af-47af-4241-a850-455955f01489 '!=' 8b9e86af-47af-4241-a850-455955f01489 ']' 00:16:08.710 02:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1908756 00:16:08.711 02:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 1908756 ']' 00:16:08.711 02:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 1908756 00:16:08.711 02:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:16:08.711 02:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:08.711 02:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1908756 00:16:08.711 02:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:08.711 02:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:08.711 02:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1908756' 00:16:08.711 killing process with pid 1908756 00:16:08.711 02:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 1908756 00:16:08.711 [2024-07-11 02:21:59.032647] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:08.711 [2024-07-11 02:21:59.032702] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:08.711 [2024-07-11 02:21:59.032746] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:08.711 [2024-07-11 02:21:59.032764] bdev_raid.c: 366:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x23b3cb0 name raid_bdev1, state offline 00:16:08.711 02:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 1908756 00:16:08.711 [2024-07-11 02:21:59.050256] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:08.971 02:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:16:08.971 00:16:08.971 real 0m16.423s 00:16:08.971 user 0m29.736s 00:16:08.971 sys 0m3.044s 00:16:08.971 02:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:08.971 02:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.971 ************************************ 00:16:08.971 END TEST raid_superblock_test 00:16:08.971 ************************************ 00:16:08.971 02:21:59 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:08.971 02:21:59 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:16:08.971 02:21:59 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:08.971 02:21:59 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:08.971 02:21:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:08.971 ************************************ 00:16:08.971 START TEST raid_read_error_test 00:16:08.971 ************************************ 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 read 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@802 -- # strip_size=0 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.1aF43EzTyg 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1911301 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1911301 /var/tmp/spdk-raid.sock 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 1911301 ']' 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:08.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:08.971 02:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.231 [2024-07-11 02:21:59.403528] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:16:09.231 [2024-07-11 02:21:59.403587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1911301 ] 00:16:09.231 [2024-07-11 02:21:59.525141] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.231 [2024-07-11 02:21:59.577586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.231 [2024-07-11 02:21:59.637485] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:09.231 [2024-07-11 02:21:59.637516] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:09.490 02:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:09.490 02:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:16:09.490 02:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:09.490 02:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:09.749 BaseBdev1_malloc 00:16:09.749 02:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:10.007 true 00:16:10.007 02:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:10.007 [2024-07-11 02:22:00.420647] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:10.007 [2024-07-11 02:22:00.420698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.007 [2024-07-11 02:22:00.420718] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa7f330 00:16:10.007 [2024-07-11 02:22:00.420730] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.007 [2024-07-11 02:22:00.422567] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.007 [2024-07-11 02:22:00.422598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:10.007 BaseBdev1 00:16:10.266 02:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:10.266 02:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:10.266 BaseBdev2_malloc 00:16:10.525 02:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:10.525 true 00:16:10.525 02:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:10.785 [2024-07-11 02:22:01.054915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:10.785 [2024-07-11 02:22:01.054959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.785 [2024-07-11 02:22:01.054978] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa78b40 00:16:10.785 [2024-07-11 02:22:01.054991] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.785 [2024-07-11 02:22:01.056380] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.785 [2024-07-11 02:22:01.056409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:10.785 BaseBdev2 00:16:10.785 02:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:11.049 [2024-07-11 02:22:01.303603] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:11.049 [2024-07-11 02:22:01.304940] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:11.049 [2024-07-11 02:22:01.305128] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xa79d30 00:16:11.049 [2024-07-11 02:22:01.305142] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:11.049 [2024-07-11 02:22:01.305335] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x8cb390 00:16:11.049 [2024-07-11 02:22:01.305484] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xa79d30 00:16:11.049 [2024-07-11 02:22:01.305494] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xa79d30 00:16:11.049 [2024-07-11 02:22:01.305596] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.049 02:22:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:11.049 02:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:11.049 02:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:11.049 02:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:11.049 02:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:11.049 02:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:11.049 02:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:11.049 02:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:11.049 02:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:11.049 02:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:11.049 02:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.049 02:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.385 02:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:11.385 "name": "raid_bdev1", 00:16:11.385 "uuid": "f640c469-4e19-429d-aee4-c21ed759b568", 00:16:11.385 "strip_size_kb": 0, 00:16:11.385 "state": "online", 00:16:11.385 "raid_level": "raid1", 00:16:11.385 "superblock": true, 00:16:11.385 "num_base_bdevs": 2, 00:16:11.385 "num_base_bdevs_discovered": 2, 00:16:11.385 "num_base_bdevs_operational": 2, 00:16:11.385 "base_bdevs_list": [ 00:16:11.385 { 00:16:11.385 "name": "BaseBdev1", 00:16:11.385 "uuid": "5028da0d-c1ca-5c7d-8ae0-9de2b3726528", 00:16:11.385 "is_configured": true, 00:16:11.385 "data_offset": 2048, 00:16:11.385 "data_size": 63488 00:16:11.385 }, 00:16:11.385 { 00:16:11.385 "name": "BaseBdev2", 00:16:11.385 "uuid": "40200a09-0e86-5c8e-8d8c-af799b8fbfdf", 00:16:11.385 "is_configured": true, 00:16:11.385 "data_offset": 2048, 00:16:11.385 "data_size": 63488 00:16:11.385 } 00:16:11.385 ] 00:16:11.385 }' 00:16:11.385 02:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:11.385 02:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.953 02:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:11.953 02:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:11.953 [2024-07-11 02:22:02.282431] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa79960 00:16:12.891 02:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:13.151 02:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:13.151 02:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:13.151 02:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 
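Note on the step above: bdev_error_inject_error arms the error bdev stacked under BaseBdev1 so that reads against it fail. Because the array is raid1 and the injected I/O type is read, the harness still expects both base bdevs to be discovered: a failed read on a redundant level is served from the mirror instead of ejecting the member. A minimal sketch of the same sequence, assuming a bdevperf instance is already listening on /var/tmp/spdk-raid.sock with the EE_BaseBdev*_malloc stack built as shown earlier (the RPC shell variable is an editorial shorthand, not part of the test script):

  RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Inject read failures on the error bdev wrapped around BaseBdev1.
  $RPC bdev_error_inject_error EE_BaseBdev1_malloc read failure
  # raid1 + read errors: the member stays, so 2 discovered base bdevs are expected.
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered'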
00:16:13.151 02:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:16:13.151 02:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:13.151 02:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:13.151 02:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:13.151 02:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:13.151 02:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:13.151 02:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:13.151 02:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:13.151 02:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:13.151 02:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:13.151 02:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:13.151 02:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.151 02:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.409 02:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:13.409 "name": "raid_bdev1", 00:16:13.409 "uuid": "f640c469-4e19-429d-aee4-c21ed759b568", 00:16:13.409 "strip_size_kb": 0, 00:16:13.409 "state": "online", 00:16:13.409 "raid_level": "raid1", 00:16:13.409 "superblock": true, 00:16:13.409 "num_base_bdevs": 2, 00:16:13.409 "num_base_bdevs_discovered": 2, 00:16:13.409 "num_base_bdevs_operational": 2, 00:16:13.409 "base_bdevs_list": [ 00:16:13.409 { 00:16:13.409 "name": "BaseBdev1", 00:16:13.409 "uuid": "5028da0d-c1ca-5c7d-8ae0-9de2b3726528", 00:16:13.409 "is_configured": true, 00:16:13.409 "data_offset": 2048, 00:16:13.409 "data_size": 63488 00:16:13.409 }, 00:16:13.409 { 00:16:13.409 "name": "BaseBdev2", 00:16:13.409 "uuid": "40200a09-0e86-5c8e-8d8c-af799b8fbfdf", 00:16:13.409 "is_configured": true, 00:16:13.409 "data_offset": 2048, 00:16:13.409 "data_size": 63488 00:16:13.409 } 00:16:13.409 ] 00:16:13.409 }' 00:16:13.409 02:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:13.409 02:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.976 02:22:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:14.235 [2024-07-11 02:22:04.433099] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:14.235 [2024-07-11 02:22:04.433141] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:14.235 [2024-07-11 02:22:04.436292] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.235 [2024-07-11 02:22:04.436324] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.235 [2024-07-11 02:22:04.436397] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:14.235 
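Teardown then mirrors setup: bdev_raid_delete walks the array from online to offline before the base bdevs are released (the *DEBUG* lines around this point trace exactly that), and further down the bdevperf log written to the mktemp'd file is parsed for the failures-per-second column. A condensed sketch of that final accounting, reusing the log path created earlier in this test:

  RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_raid_delete raid_bdev1
  # Field 6 of bdevperf's per-bdev summary line is failures per second;
  # the injected read errors were absorbed by the mirror, so 0.00 is expected.
  fail_per_s=$(grep -v Job /raidtest/tmp.1aF43EzTyg | grep raid_bdev1 | awk '{print $6}')
  [[ $fail_per_s = 0.00 ]] && echo 'no failures reached raid_bdev1'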
[2024-07-11 02:22:04.436409] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa79d30 name raid_bdev1, state offline 00:16:14.235 0 00:16:14.235 02:22:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1911301 00:16:14.235 02:22:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 1911301 ']' 00:16:14.235 02:22:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 1911301 00:16:14.235 02:22:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:16:14.235 02:22:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:14.235 02:22:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1911301 00:16:14.235 02:22:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:14.235 02:22:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:14.235 02:22:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1911301' 00:16:14.235 killing process with pid 1911301 00:16:14.235 02:22:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 1911301 00:16:14.235 [2024-07-11 02:22:04.517574] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:14.235 02:22:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 1911301 00:16:14.235 [2024-07-11 02:22:04.528339] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:14.494 02:22:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.1aF43EzTyg 00:16:14.494 02:22:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:14.494 02:22:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:14.494 02:22:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:16:14.494 02:22:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:16:14.494 02:22:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:14.494 02:22:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:14.494 02:22:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:14.494 00:16:14.494 real 0m5.416s 00:16:14.494 user 0m8.569s 00:16:14.494 sys 0m1.081s 00:16:14.494 02:22:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:14.494 02:22:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.494 ************************************ 00:16:14.494 END TEST raid_read_error_test 00:16:14.494 ************************************ 00:16:14.494 02:22:04 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:14.494 02:22:04 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:16:14.494 02:22:04 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:14.494 02:22:04 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:14.494 02:22:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:14.494 ************************************ 00:16:14.494 START TEST raid_write_error_test 00:16:14.494 ************************************ 00:16:14.494 02:22:04 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 write 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.cPyCNxy2mH 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1912357 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1912357 /var/tmp/spdk-raid.sock 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 1912357 ']' 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:14.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
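The write-error variant drives the same bdevperf binary, started in the background and polled until its RPC socket answers; -z holds actual I/O until perform_tests is sent over that socket later in the run. A sketch of the launch with the flags copied verbatim from the command line above (the poll loop is a rough stand-in for the waitforlisten helper, not its real implementation):

  BDEVPERF=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf
  SOCK=/var/tmp/spdk-raid.sock
  # randrw at a 50/50 mix, 128k I/O size, queue depth 1, 60 s, raid tracing on.
  $BDEVPERF -r $SOCK -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
  raid_pid=$!
  # Wait until the UNIX domain socket exists before issuing RPCs.
  while [ ! -S "$SOCK" ]; do sleep 0.1; done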
00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:14.495 02:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.495 [2024-07-11 02:22:04.913914] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:16:14.495 [2024-07-11 02:22:04.913979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1912357 ] 00:16:14.754 [2024-07-11 02:22:05.051105] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.754 [2024-07-11 02:22:05.103470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.754 [2024-07-11 02:22:05.167819] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.754 [2024-07-11 02:22:05.167853] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:15.013 02:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:15.013 02:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:16:15.013 02:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:15.013 02:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:15.272 BaseBdev1_malloc 00:16:15.272 02:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:15.530 true 00:16:15.530 02:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:15.789 [2024-07-11 02:22:06.085561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:15.789 [2024-07-11 02:22:06.085606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.789 [2024-07-11 02:22:06.085628] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17b4330 00:16:15.789 [2024-07-11 02:22:06.085645] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.789 [2024-07-11 02:22:06.087574] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.789 [2024-07-11 02:22:06.087607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:15.789 BaseBdev1 00:16:15.789 02:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:15.789 02:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:16.048 BaseBdev2_malloc 00:16:16.048 02:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:16.306 true 00:16:16.306 02:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:16.564 [2024-07-11 02:22:06.821164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:16.564 [2024-07-11 02:22:06.821207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.564 [2024-07-11 02:22:06.821228] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17adb40 00:16:16.564 [2024-07-11 02:22:06.821241] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.564 [2024-07-11 02:22:06.822819] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.564 [2024-07-11 02:22:06.822849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:16.564 BaseBdev2 00:16:16.564 02:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:16.832 [2024-07-11 02:22:07.065838] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:16.833 [2024-07-11 02:22:07.067037] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:16.833 [2024-07-11 02:22:07.067221] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x17aed30 00:16:16.833 [2024-07-11 02:22:07.067234] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:16.833 [2024-07-11 02:22:07.067408] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1600390 00:16:16.833 [2024-07-11 02:22:07.067552] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x17aed30 00:16:16.833 [2024-07-11 02:22:07.067562] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x17aed30 00:16:16.833 [2024-07-11 02:22:07.067661] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.833 02:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:16.833 02:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:16.833 02:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:16.833 02:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:16.833 02:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:16.833 02:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:16.833 02:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:16.833 02:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:16.833 02:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:16.833 02:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:16.833 02:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.833 02:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:16:17.095 02:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:17.095 "name": "raid_bdev1", 00:16:17.095 "uuid": "18ba6acc-d939-4d51-a93a-365adfa491c1", 00:16:17.095 "strip_size_kb": 0, 00:16:17.095 "state": "online", 00:16:17.095 "raid_level": "raid1", 00:16:17.095 "superblock": true, 00:16:17.095 "num_base_bdevs": 2, 00:16:17.095 "num_base_bdevs_discovered": 2, 00:16:17.095 "num_base_bdevs_operational": 2, 00:16:17.095 "base_bdevs_list": [ 00:16:17.095 { 00:16:17.095 "name": "BaseBdev1", 00:16:17.095 "uuid": "1412e775-78e7-5bda-8917-474b5aabcd3b", 00:16:17.095 "is_configured": true, 00:16:17.095 "data_offset": 2048, 00:16:17.095 "data_size": 63488 00:16:17.095 }, 00:16:17.095 { 00:16:17.095 "name": "BaseBdev2", 00:16:17.095 "uuid": "9fb1ed0a-73d0-5053-b1a2-8e4ce1a69c94", 00:16:17.095 "is_configured": true, 00:16:17.095 "data_offset": 2048, 00:16:17.095 "data_size": 63488 00:16:17.095 } 00:16:17.095 ] 00:16:17.095 }' 00:16:17.095 02:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:17.095 02:22:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.663 02:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:17.663 02:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:17.663 [2024-07-11 02:22:08.040708] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17ae960 00:16:18.601 02:22:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:18.859 [2024-07-11 02:22:09.168816] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:16:18.859 [2024-07-11 02:22:09.168867] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:18.859 [2024-07-11 02:22:09.169043] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x17ae960 00:16:18.859 02:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:18.859 02:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:18.859 02:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:16:18.859 02:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1 00:16:18.859 02:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:18.859 02:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:18.859 02:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:18.859 02:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:18.859 02:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:18.859 02:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:18.859 02:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:18.859 02:22:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:18.859 02:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:18.859 02:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:18.859 02:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.859 02:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.118 02:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:19.118 "name": "raid_bdev1", 00:16:19.118 "uuid": "18ba6acc-d939-4d51-a93a-365adfa491c1", 00:16:19.118 "strip_size_kb": 0, 00:16:19.118 "state": "online", 00:16:19.118 "raid_level": "raid1", 00:16:19.118 "superblock": true, 00:16:19.118 "num_base_bdevs": 2, 00:16:19.118 "num_base_bdevs_discovered": 1, 00:16:19.118 "num_base_bdevs_operational": 1, 00:16:19.118 "base_bdevs_list": [ 00:16:19.118 { 00:16:19.118 "name": null, 00:16:19.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.118 "is_configured": false, 00:16:19.118 "data_offset": 2048, 00:16:19.118 "data_size": 63488 00:16:19.118 }, 00:16:19.118 { 00:16:19.118 "name": "BaseBdev2", 00:16:19.118 "uuid": "9fb1ed0a-73d0-5053-b1a2-8e4ce1a69c94", 00:16:19.118 "is_configured": true, 00:16:19.118 "data_offset": 2048, 00:16:19.118 "data_size": 63488 00:16:19.118 } 00:16:19.118 ] 00:16:19.118 }' 00:16:19.118 02:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:19.118 02:22:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.685 02:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:19.944 [2024-07-11 02:22:10.256567] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:19.944 [2024-07-11 02:22:10.256605] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:19.944 [2024-07-11 02:22:10.259727] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.944 [2024-07-11 02:22:10.259754] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.944 [2024-07-11 02:22:10.259814] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.944 [2024-07-11 02:22:10.259826] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17aed30 name raid_bdev1, state offline 00:16:19.944 0 00:16:19.944 02:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1912357 00:16:19.944 02:22:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 1912357 ']' 00:16:19.944 02:22:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 1912357 00:16:19.944 02:22:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:16:19.944 02:22:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:19.944 02:22:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1912357 00:16:19.944 02:22:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:16:19.944 02:22:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:19.944 02:22:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1912357' 00:16:19.944 killing process with pid 1912357 00:16:19.944 02:22:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 1912357 00:16:19.944 [2024-07-11 02:22:10.330223] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:19.944 02:22:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 1912357 00:16:19.944 [2024-07-11 02:22:10.340696] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:20.203 02:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.cPyCNxy2mH 00:16:20.203 02:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:20.203 02:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:20.203 02:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:16:20.203 02:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:16:20.203 02:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:20.203 02:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:20.203 02:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:20.203 00:16:20.203 real 0m5.719s 00:16:20.203 user 0m9.296s 00:16:20.203 sys 0m1.049s 00:16:20.203 02:22:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:20.203 02:22:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.203 ************************************ 00:16:20.203 END TEST raid_write_error_test 00:16:20.203 ************************************ 00:16:20.203 02:22:10 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:20.203 02:22:10 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:16:20.203 02:22:10 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:16:20.203 02:22:10 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:16:20.203 02:22:10 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:20.203 02:22:10 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:20.203 02:22:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:20.462 ************************************ 00:16:20.462 START TEST raid_state_function_test 00:16:20.462 ************************************ 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 false 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= 
num_base_bdevs )) 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1913196 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1913196' 00:16:20.462 Process raid pid: 1913196 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1913196 /var/tmp/spdk-raid.sock 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 1913196 ']' 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:20.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
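raid_state_function_test drives a bare bdev_svc app rather than bdevperf, since it only exercises the raid state machine over RPC. The point of the next step is that a raid0 may be declared before any of its members exist: the create call records the configuration and parks the array in the "configuring" state until all named base bdevs appear. A sketch under the same socket assumptions (readiness wait omitted for brevity):

  SVC=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc
  RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $SVC -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  # Declare a 64 KiB-strip raid0 whose member bdevs do not exist yet.
  $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'  # "configuring"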
00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:20.462 02:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.462 [2024-07-11 02:22:10.721298] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:16:20.462 [2024-07-11 02:22:10.721363] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.462 [2024-07-11 02:22:10.848263] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.721 [2024-07-11 02:22:10.897192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.721 [2024-07-11 02:22:10.961898] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:20.721 [2024-07-11 02:22:10.961929] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:21.288 02:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:21.288 02:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:16:21.288 02:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:21.856 [2024-07-11 02:22:11.977324] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:21.856 [2024-07-11 02:22:11.977368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:21.856 [2024-07-11 02:22:11.977379] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:21.856 [2024-07-11 02:22:11.977391] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:21.856 [2024-07-11 02:22:11.977400] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:21.856 [2024-07-11 02:22:11.977411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:21.856 02:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:21.856 02:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:21.856 02:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:21.856 02:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:21.856 02:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:21.856 02:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:21.856 02:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:21.856 02:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:21.856 02:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:21.856 02:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:21.856 02:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.856 02:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.856 02:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:21.856 "name": "Existed_Raid", 00:16:21.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.856 "strip_size_kb": 64, 00:16:21.856 "state": "configuring", 00:16:21.856 "raid_level": "raid0", 00:16:21.856 "superblock": false, 00:16:21.856 "num_base_bdevs": 3, 00:16:21.856 "num_base_bdevs_discovered": 0, 00:16:21.856 "num_base_bdevs_operational": 3, 00:16:21.856 "base_bdevs_list": [ 00:16:21.856 { 00:16:21.856 "name": "BaseBdev1", 00:16:21.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.856 "is_configured": false, 00:16:21.856 "data_offset": 0, 00:16:21.856 "data_size": 0 00:16:21.856 }, 00:16:21.856 { 00:16:21.856 "name": "BaseBdev2", 00:16:21.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.856 "is_configured": false, 00:16:21.856 "data_offset": 0, 00:16:21.856 "data_size": 0 00:16:21.856 }, 00:16:21.856 { 00:16:21.856 "name": "BaseBdev3", 00:16:21.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.856 "is_configured": false, 00:16:21.856 "data_offset": 0, 00:16:21.856 "data_size": 0 00:16:21.856 } 00:16:21.856 ] 00:16:21.856 }' 00:16:21.856 02:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:21.856 02:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.794 02:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:23.052 [2024-07-11 02:22:13.360826] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:23.052 [2024-07-11 02:22:13.360858] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9d55a0 name Existed_Raid, state configuring 00:16:23.052 02:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:23.310 [2024-07-11 02:22:13.649609] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:23.310 [2024-07-11 02:22:13.649639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:23.310 [2024-07-11 02:22:13.649649] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:23.310 [2024-07-11 02:22:13.649660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:23.310 [2024-07-11 02:22:13.649669] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:23.310 [2024-07-11 02:22:13.649680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:23.310 02:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:23.876 [2024-07-11 02:22:14.161923] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:23.876 BaseBdev1 00:16:23.876 
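From here the test adds members one at a time and re-reads the array after each: with one of three base bdevs present, num_base_bdevs_discovered reaches 1 while the state stays "configuring". A sketch of one such step; 32 and 512 are the malloc bdev's total size in MiB and block size in bytes, which matches the 65536-block Malloc disk dumped just below:

  RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # 32 MiB / 512 B blocks = 65536 blocks, under the name the raid config expects.
  $RPC bdev_malloc_create 32 512 -b BaseBdev1
  # One member present: discovered climbs to 1, state remains "configuring".
  $RPC bdev_raid_get_bdevs all | \
    jq -r '.[] | select(.name == "Existed_Raid") | "\(.state)/\(.num_base_bdevs_discovered)"'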
02:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:23.876 02:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:23.876 02:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:23.876 02:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:23.876 02:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:23.876 02:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:23.876 02:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:24.135 02:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:24.703 [ 00:16:24.703 { 00:16:24.703 "name": "BaseBdev1", 00:16:24.703 "aliases": [ 00:16:24.703 "10e90242-e855-49d9-b6de-5c934edbd587" 00:16:24.703 ], 00:16:24.703 "product_name": "Malloc disk", 00:16:24.703 "block_size": 512, 00:16:24.703 "num_blocks": 65536, 00:16:24.703 "uuid": "10e90242-e855-49d9-b6de-5c934edbd587", 00:16:24.703 "assigned_rate_limits": { 00:16:24.703 "rw_ios_per_sec": 0, 00:16:24.703 "rw_mbytes_per_sec": 0, 00:16:24.703 "r_mbytes_per_sec": 0, 00:16:24.703 "w_mbytes_per_sec": 0 00:16:24.703 }, 00:16:24.703 "claimed": true, 00:16:24.703 "claim_type": "exclusive_write", 00:16:24.703 "zoned": false, 00:16:24.703 "supported_io_types": { 00:16:24.703 "read": true, 00:16:24.704 "write": true, 00:16:24.704 "unmap": true, 00:16:24.704 "flush": true, 00:16:24.704 "reset": true, 00:16:24.704 "nvme_admin": false, 00:16:24.704 "nvme_io": false, 00:16:24.704 "nvme_io_md": false, 00:16:24.704 "write_zeroes": true, 00:16:24.704 "zcopy": true, 00:16:24.704 "get_zone_info": false, 00:16:24.704 "zone_management": false, 00:16:24.704 "zone_append": false, 00:16:24.704 "compare": false, 00:16:24.704 "compare_and_write": false, 00:16:24.704 "abort": true, 00:16:24.704 "seek_hole": false, 00:16:24.704 "seek_data": false, 00:16:24.704 "copy": true, 00:16:24.704 "nvme_iov_md": false 00:16:24.704 }, 00:16:24.704 "memory_domains": [ 00:16:24.704 { 00:16:24.704 "dma_device_id": "system", 00:16:24.704 "dma_device_type": 1 00:16:24.704 }, 00:16:24.704 { 00:16:24.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.704 "dma_device_type": 2 00:16:24.704 } 00:16:24.704 ], 00:16:24.704 "driver_specific": {} 00:16:24.704 } 00:16:24.704 ] 00:16:24.704 02:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:24.704 02:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:24.704 02:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:24.704 02:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:24.704 02:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:24.704 02:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:24.704 02:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:16:24.704 02:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:24.704 02:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:24.704 02:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:24.704 02:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:24.704 02:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.704 02:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.272 02:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:25.272 "name": "Existed_Raid", 00:16:25.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.272 "strip_size_kb": 64, 00:16:25.272 "state": "configuring", 00:16:25.272 "raid_level": "raid0", 00:16:25.272 "superblock": false, 00:16:25.272 "num_base_bdevs": 3, 00:16:25.272 "num_base_bdevs_discovered": 1, 00:16:25.272 "num_base_bdevs_operational": 3, 00:16:25.272 "base_bdevs_list": [ 00:16:25.272 { 00:16:25.272 "name": "BaseBdev1", 00:16:25.272 "uuid": "10e90242-e855-49d9-b6de-5c934edbd587", 00:16:25.272 "is_configured": true, 00:16:25.272 "data_offset": 0, 00:16:25.272 "data_size": 65536 00:16:25.272 }, 00:16:25.272 { 00:16:25.272 "name": "BaseBdev2", 00:16:25.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.272 "is_configured": false, 00:16:25.272 "data_offset": 0, 00:16:25.272 "data_size": 0 00:16:25.272 }, 00:16:25.272 { 00:16:25.272 "name": "BaseBdev3", 00:16:25.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.272 "is_configured": false, 00:16:25.272 "data_offset": 0, 00:16:25.272 "data_size": 0 00:16:25.272 } 00:16:25.272 ] 00:16:25.272 }' 00:16:25.272 02:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:25.272 02:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.840 02:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:26.099 [2024-07-11 02:22:16.351812] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:26.099 [2024-07-11 02:22:16.351853] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9d4ed0 name Existed_Raid, state configuring 00:16:26.099 02:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:26.359 [2024-07-11 02:22:16.596488] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:26.359 [2024-07-11 02:22:16.597888] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:26.359 [2024-07-11 02:22:16.597919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:26.359 [2024-07-11 02:22:16.597930] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:26.359 [2024-07-11 02:22:16.597941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev3 doesn't exist now 00:16:26.359 02:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:26.359 02:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:26.359 02:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:26.359 02:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:26.359 02:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:26.359 02:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:26.359 02:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:26.359 02:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:26.359 02:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:26.359 02:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:26.359 02:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:26.359 02:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:26.359 02:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.359 02:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.618 02:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:26.618 "name": "Existed_Raid", 00:16:26.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.618 "strip_size_kb": 64, 00:16:26.618 "state": "configuring", 00:16:26.618 "raid_level": "raid0", 00:16:26.618 "superblock": false, 00:16:26.618 "num_base_bdevs": 3, 00:16:26.618 "num_base_bdevs_discovered": 1, 00:16:26.618 "num_base_bdevs_operational": 3, 00:16:26.618 "base_bdevs_list": [ 00:16:26.618 { 00:16:26.618 "name": "BaseBdev1", 00:16:26.618 "uuid": "10e90242-e855-49d9-b6de-5c934edbd587", 00:16:26.618 "is_configured": true, 00:16:26.618 "data_offset": 0, 00:16:26.618 "data_size": 65536 00:16:26.618 }, 00:16:26.618 { 00:16:26.618 "name": "BaseBdev2", 00:16:26.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.618 "is_configured": false, 00:16:26.618 "data_offset": 0, 00:16:26.618 "data_size": 0 00:16:26.618 }, 00:16:26.618 { 00:16:26.618 "name": "BaseBdev3", 00:16:26.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.618 "is_configured": false, 00:16:26.618 "data_offset": 0, 00:16:26.618 "data_size": 0 00:16:26.618 } 00:16:26.618 ] 00:16:26.618 }' 00:16:26.618 02:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:26.618 02:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.188 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:27.188 [2024-07-11 02:22:17.590491] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:27.188 BaseBdev2 00:16:27.188 02:22:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:27.188 02:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:27.188 02:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:27.447 02:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:27.447 02:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:27.447 02:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:27.447 02:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:27.447 02:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:27.707 [ 00:16:27.707 { 00:16:27.707 "name": "BaseBdev2", 00:16:27.707 "aliases": [ 00:16:27.707 "84519369-6998-40c3-8e7d-7bfe0ac33393" 00:16:27.707 ], 00:16:27.707 "product_name": "Malloc disk", 00:16:27.707 "block_size": 512, 00:16:27.707 "num_blocks": 65536, 00:16:27.707 "uuid": "84519369-6998-40c3-8e7d-7bfe0ac33393", 00:16:27.707 "assigned_rate_limits": { 00:16:27.707 "rw_ios_per_sec": 0, 00:16:27.707 "rw_mbytes_per_sec": 0, 00:16:27.707 "r_mbytes_per_sec": 0, 00:16:27.707 "w_mbytes_per_sec": 0 00:16:27.707 }, 00:16:27.707 "claimed": true, 00:16:27.707 "claim_type": "exclusive_write", 00:16:27.707 "zoned": false, 00:16:27.707 "supported_io_types": { 00:16:27.707 "read": true, 00:16:27.707 "write": true, 00:16:27.707 "unmap": true, 00:16:27.707 "flush": true, 00:16:27.707 "reset": true, 00:16:27.707 "nvme_admin": false, 00:16:27.707 "nvme_io": false, 00:16:27.707 "nvme_io_md": false, 00:16:27.707 "write_zeroes": true, 00:16:27.707 "zcopy": true, 00:16:27.707 "get_zone_info": false, 00:16:27.707 "zone_management": false, 00:16:27.707 "zone_append": false, 00:16:27.707 "compare": false, 00:16:27.707 "compare_and_write": false, 00:16:27.707 "abort": true, 00:16:27.707 "seek_hole": false, 00:16:27.707 "seek_data": false, 00:16:27.707 "copy": true, 00:16:27.707 "nvme_iov_md": false 00:16:27.707 }, 00:16:27.707 "memory_domains": [ 00:16:27.707 { 00:16:27.707 "dma_device_id": "system", 00:16:27.707 "dma_device_type": 1 00:16:27.707 }, 00:16:27.707 { 00:16:27.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.707 "dma_device_type": 2 00:16:27.707 } 00:16:27.707 ], 00:16:27.707 "driver_specific": {} 00:16:27.707 } 00:16:27.707 ] 00:16:27.707 02:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:27.707 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:27.707 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:27.707 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:27.707 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:27.707 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:27.707 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 
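# For reference: a condensed sketch of the waitforbdev helper whose xtrace
# appears just above (common/autotest_common.sh@897-905). With no explicit
# timeout it defaults to 2000 ms, flushes pending examine callbacks, then
# queries bdev_get_bdevs for the new bdev. rpc_py is a hypothetical
# shorthand for the full rpc.py -s /var/tmp/spdk-raid.sock invocation seen
# in the trace; the real helper may also retry in a loop.
waitforbdev_sketch() {
    local bdev_name=$1
    local bdev_timeout=$2
    [[ -z $bdev_timeout ]] && bdev_timeout=2000 # default visible in the trace
    rpc_py bdev_wait_for_examine
    rpc_py bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" > /dev/null
}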
00:16:27.707 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:27.707 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:27.707 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:27.707 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:27.707 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:27.707 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:27.707 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.707 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.966 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:27.966 "name": "Existed_Raid", 00:16:27.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.966 "strip_size_kb": 64, 00:16:27.966 "state": "configuring", 00:16:27.966 "raid_level": "raid0", 00:16:27.966 "superblock": false, 00:16:27.966 "num_base_bdevs": 3, 00:16:27.966 "num_base_bdevs_discovered": 2, 00:16:27.966 "num_base_bdevs_operational": 3, 00:16:27.966 "base_bdevs_list": [ 00:16:27.966 { 00:16:27.966 "name": "BaseBdev1", 00:16:27.966 "uuid": "10e90242-e855-49d9-b6de-5c934edbd587", 00:16:27.966 "is_configured": true, 00:16:27.966 "data_offset": 0, 00:16:27.966 "data_size": 65536 00:16:27.966 }, 00:16:27.966 { 00:16:27.966 "name": "BaseBdev2", 00:16:27.966 "uuid": "84519369-6998-40c3-8e7d-7bfe0ac33393", 00:16:27.966 "is_configured": true, 00:16:27.966 "data_offset": 0, 00:16:27.966 "data_size": 65536 00:16:27.966 }, 00:16:27.966 { 00:16:27.966 "name": "BaseBdev3", 00:16:27.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.966 "is_configured": false, 00:16:27.966 "data_offset": 0, 00:16:27.966 "data_size": 0 00:16:27.966 } 00:16:27.966 ] 00:16:27.966 }' 00:16:27.966 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:27.966 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.903 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:28.903 [2024-07-11 02:22:19.178067] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:28.903 [2024-07-11 02:22:19.178109] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xb87cf0 00:16:28.903 [2024-07-11 02:22:19.178118] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:28.903 [2024-07-11 02:22:19.178363] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9d8cf0 00:16:28.903 [2024-07-11 02:22:19.178479] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb87cf0 00:16:28.903 [2024-07-11 02:22:19.178489] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xb87cf0 00:16:28.903 [2024-07-11 02:22:19.178645] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.903 BaseBdev3 00:16:28.903 02:22:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:28.903 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:28.903 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:28.903 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:28.903 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:28.903 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:28.903 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:29.163 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:29.163 [ 00:16:29.163 { 00:16:29.163 "name": "BaseBdev3", 00:16:29.163 "aliases": [ 00:16:29.163 "297ebd4a-25f7-45c9-beee-82c12e064757" 00:16:29.163 ], 00:16:29.163 "product_name": "Malloc disk", 00:16:29.163 "block_size": 512, 00:16:29.163 "num_blocks": 65536, 00:16:29.163 "uuid": "297ebd4a-25f7-45c9-beee-82c12e064757", 00:16:29.163 "assigned_rate_limits": { 00:16:29.163 "rw_ios_per_sec": 0, 00:16:29.163 "rw_mbytes_per_sec": 0, 00:16:29.163 "r_mbytes_per_sec": 0, 00:16:29.163 "w_mbytes_per_sec": 0 00:16:29.163 }, 00:16:29.163 "claimed": true, 00:16:29.163 "claim_type": "exclusive_write", 00:16:29.163 "zoned": false, 00:16:29.163 "supported_io_types": { 00:16:29.163 "read": true, 00:16:29.163 "write": true, 00:16:29.163 "unmap": true, 00:16:29.163 "flush": true, 00:16:29.163 "reset": true, 00:16:29.163 "nvme_admin": false, 00:16:29.163 "nvme_io": false, 00:16:29.163 "nvme_io_md": false, 00:16:29.163 "write_zeroes": true, 00:16:29.163 "zcopy": true, 00:16:29.163 "get_zone_info": false, 00:16:29.163 "zone_management": false, 00:16:29.163 "zone_append": false, 00:16:29.163 "compare": false, 00:16:29.163 "compare_and_write": false, 00:16:29.163 "abort": true, 00:16:29.163 "seek_hole": false, 00:16:29.163 "seek_data": false, 00:16:29.163 "copy": true, 00:16:29.163 "nvme_iov_md": false 00:16:29.163 }, 00:16:29.163 "memory_domains": [ 00:16:29.163 { 00:16:29.163 "dma_device_id": "system", 00:16:29.163 "dma_device_type": 1 00:16:29.163 }, 00:16:29.163 { 00:16:29.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.163 "dma_device_type": 2 00:16:29.163 } 00:16:29.163 ], 00:16:29.163 "driver_specific": {} 00:16:29.163 } 00:16:29.163 ] 00:16:29.163 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:29.163 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:29.163 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:29.163 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:29.163 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:29.163 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:29.163 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:29.163 
02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:29.163 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:29.163 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:29.163 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:29.163 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:29.163 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:29.163 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.163 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.422 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:29.422 "name": "Existed_Raid", 00:16:29.422 "uuid": "2bfdded5-04c9-44d0-bc09-69aa18c0a01c", 00:16:29.422 "strip_size_kb": 64, 00:16:29.422 "state": "online", 00:16:29.422 "raid_level": "raid0", 00:16:29.422 "superblock": false, 00:16:29.422 "num_base_bdevs": 3, 00:16:29.422 "num_base_bdevs_discovered": 3, 00:16:29.422 "num_base_bdevs_operational": 3, 00:16:29.422 "base_bdevs_list": [ 00:16:29.422 { 00:16:29.422 "name": "BaseBdev1", 00:16:29.422 "uuid": "10e90242-e855-49d9-b6de-5c934edbd587", 00:16:29.422 "is_configured": true, 00:16:29.422 "data_offset": 0, 00:16:29.422 "data_size": 65536 00:16:29.422 }, 00:16:29.422 { 00:16:29.422 "name": "BaseBdev2", 00:16:29.422 "uuid": "84519369-6998-40c3-8e7d-7bfe0ac33393", 00:16:29.422 "is_configured": true, 00:16:29.422 "data_offset": 0, 00:16:29.422 "data_size": 65536 00:16:29.422 }, 00:16:29.422 { 00:16:29.422 "name": "BaseBdev3", 00:16:29.422 "uuid": "297ebd4a-25f7-45c9-beee-82c12e064757", 00:16:29.422 "is_configured": true, 00:16:29.422 "data_offset": 0, 00:16:29.422 "data_size": 65536 00:16:29.422 } 00:16:29.422 ] 00:16:29.422 }' 00:16:29.422 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:29.422 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.357 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:30.357 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:30.357 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:30.357 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:30.357 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:30.357 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:30.357 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:30.357 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:30.617 [2024-07-11 02:22:20.850833] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:30.617 02:22:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:30.617 "name": "Existed_Raid", 00:16:30.617 "aliases": [ 00:16:30.617 "2bfdded5-04c9-44d0-bc09-69aa18c0a01c" 00:16:30.617 ], 00:16:30.617 "product_name": "Raid Volume", 00:16:30.617 "block_size": 512, 00:16:30.617 "num_blocks": 196608, 00:16:30.617 "uuid": "2bfdded5-04c9-44d0-bc09-69aa18c0a01c", 00:16:30.617 "assigned_rate_limits": { 00:16:30.617 "rw_ios_per_sec": 0, 00:16:30.617 "rw_mbytes_per_sec": 0, 00:16:30.617 "r_mbytes_per_sec": 0, 00:16:30.617 "w_mbytes_per_sec": 0 00:16:30.617 }, 00:16:30.617 "claimed": false, 00:16:30.617 "zoned": false, 00:16:30.617 "supported_io_types": { 00:16:30.617 "read": true, 00:16:30.617 "write": true, 00:16:30.617 "unmap": true, 00:16:30.617 "flush": true, 00:16:30.617 "reset": true, 00:16:30.617 "nvme_admin": false, 00:16:30.617 "nvme_io": false, 00:16:30.617 "nvme_io_md": false, 00:16:30.617 "write_zeroes": true, 00:16:30.617 "zcopy": false, 00:16:30.617 "get_zone_info": false, 00:16:30.617 "zone_management": false, 00:16:30.617 "zone_append": false, 00:16:30.617 "compare": false, 00:16:30.617 "compare_and_write": false, 00:16:30.617 "abort": false, 00:16:30.617 "seek_hole": false, 00:16:30.617 "seek_data": false, 00:16:30.617 "copy": false, 00:16:30.617 "nvme_iov_md": false 00:16:30.617 }, 00:16:30.617 "memory_domains": [ 00:16:30.617 { 00:16:30.617 "dma_device_id": "system", 00:16:30.617 "dma_device_type": 1 00:16:30.617 }, 00:16:30.617 { 00:16:30.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.617 "dma_device_type": 2 00:16:30.617 }, 00:16:30.617 { 00:16:30.617 "dma_device_id": "system", 00:16:30.617 "dma_device_type": 1 00:16:30.617 }, 00:16:30.617 { 00:16:30.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.617 "dma_device_type": 2 00:16:30.617 }, 00:16:30.617 { 00:16:30.617 "dma_device_id": "system", 00:16:30.617 "dma_device_type": 1 00:16:30.617 }, 00:16:30.617 { 00:16:30.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.617 "dma_device_type": 2 00:16:30.617 } 00:16:30.617 ], 00:16:30.617 "driver_specific": { 00:16:30.617 "raid": { 00:16:30.617 "uuid": "2bfdded5-04c9-44d0-bc09-69aa18c0a01c", 00:16:30.617 "strip_size_kb": 64, 00:16:30.617 "state": "online", 00:16:30.617 "raid_level": "raid0", 00:16:30.617 "superblock": false, 00:16:30.617 "num_base_bdevs": 3, 00:16:30.617 "num_base_bdevs_discovered": 3, 00:16:30.617 "num_base_bdevs_operational": 3, 00:16:30.617 "base_bdevs_list": [ 00:16:30.617 { 00:16:30.617 "name": "BaseBdev1", 00:16:30.617 "uuid": "10e90242-e855-49d9-b6de-5c934edbd587", 00:16:30.617 "is_configured": true, 00:16:30.617 "data_offset": 0, 00:16:30.617 "data_size": 65536 00:16:30.617 }, 00:16:30.617 { 00:16:30.617 "name": "BaseBdev2", 00:16:30.617 "uuid": "84519369-6998-40c3-8e7d-7bfe0ac33393", 00:16:30.617 "is_configured": true, 00:16:30.617 "data_offset": 0, 00:16:30.617 "data_size": 65536 00:16:30.617 }, 00:16:30.617 { 00:16:30.617 "name": "BaseBdev3", 00:16:30.617 "uuid": "297ebd4a-25f7-45c9-beee-82c12e064757", 00:16:30.617 "is_configured": true, 00:16:30.617 "data_offset": 0, 00:16:30.617 "data_size": 65536 00:16:30.617 } 00:16:30.617 ] 00:16:30.617 } 00:16:30.617 } 00:16:30.617 }' 00:16:30.617 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:30.617 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:30.617 BaseBdev2 00:16:30.617 BaseBdev3' 00:16:30.617 02:22:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:30.617 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:30.617 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:30.876 02:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:30.876 "name": "BaseBdev1", 00:16:30.876 "aliases": [ 00:16:30.876 "10e90242-e855-49d9-b6de-5c934edbd587" 00:16:30.876 ], 00:16:30.876 "product_name": "Malloc disk", 00:16:30.876 "block_size": 512, 00:16:30.876 "num_blocks": 65536, 00:16:30.876 "uuid": "10e90242-e855-49d9-b6de-5c934edbd587", 00:16:30.876 "assigned_rate_limits": { 00:16:30.876 "rw_ios_per_sec": 0, 00:16:30.876 "rw_mbytes_per_sec": 0, 00:16:30.876 "r_mbytes_per_sec": 0, 00:16:30.876 "w_mbytes_per_sec": 0 00:16:30.876 }, 00:16:30.876 "claimed": true, 00:16:30.876 "claim_type": "exclusive_write", 00:16:30.876 "zoned": false, 00:16:30.876 "supported_io_types": { 00:16:30.876 "read": true, 00:16:30.876 "write": true, 00:16:30.876 "unmap": true, 00:16:30.876 "flush": true, 00:16:30.876 "reset": true, 00:16:30.876 "nvme_admin": false, 00:16:30.876 "nvme_io": false, 00:16:30.876 "nvme_io_md": false, 00:16:30.876 "write_zeroes": true, 00:16:30.876 "zcopy": true, 00:16:30.876 "get_zone_info": false, 00:16:30.876 "zone_management": false, 00:16:30.876 "zone_append": false, 00:16:30.876 "compare": false, 00:16:30.876 "compare_and_write": false, 00:16:30.876 "abort": true, 00:16:30.876 "seek_hole": false, 00:16:30.876 "seek_data": false, 00:16:30.876 "copy": true, 00:16:30.876 "nvme_iov_md": false 00:16:30.876 }, 00:16:30.876 "memory_domains": [ 00:16:30.876 { 00:16:30.876 "dma_device_id": "system", 00:16:30.876 "dma_device_type": 1 00:16:30.876 }, 00:16:30.876 { 00:16:30.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.876 "dma_device_type": 2 00:16:30.876 } 00:16:30.876 ], 00:16:30.876 "driver_specific": {} 00:16:30.876 }' 00:16:30.876 02:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:30.876 02:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:30.876 02:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:30.876 02:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:31.135 02:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:31.135 02:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:31.135 02:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:31.135 02:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:31.135 02:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:31.135 02:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:31.393 02:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:31.393 02:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:31.393 02:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:31.393 02:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:31.393 02:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:31.961 02:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:31.961 "name": "BaseBdev2", 00:16:31.961 "aliases": [ 00:16:31.961 "84519369-6998-40c3-8e7d-7bfe0ac33393" 00:16:31.961 ], 00:16:31.961 "product_name": "Malloc disk", 00:16:31.961 "block_size": 512, 00:16:31.961 "num_blocks": 65536, 00:16:31.961 "uuid": "84519369-6998-40c3-8e7d-7bfe0ac33393", 00:16:31.961 "assigned_rate_limits": { 00:16:31.961 "rw_ios_per_sec": 0, 00:16:31.961 "rw_mbytes_per_sec": 0, 00:16:31.961 "r_mbytes_per_sec": 0, 00:16:31.961 "w_mbytes_per_sec": 0 00:16:31.961 }, 00:16:31.961 "claimed": true, 00:16:31.961 "claim_type": "exclusive_write", 00:16:31.961 "zoned": false, 00:16:31.961 "supported_io_types": { 00:16:31.961 "read": true, 00:16:31.961 "write": true, 00:16:31.961 "unmap": true, 00:16:31.961 "flush": true, 00:16:31.961 "reset": true, 00:16:31.961 "nvme_admin": false, 00:16:31.961 "nvme_io": false, 00:16:31.961 "nvme_io_md": false, 00:16:31.961 "write_zeroes": true, 00:16:31.961 "zcopy": true, 00:16:31.961 "get_zone_info": false, 00:16:31.961 "zone_management": false, 00:16:31.961 "zone_append": false, 00:16:31.961 "compare": false, 00:16:31.961 "compare_and_write": false, 00:16:31.961 "abort": true, 00:16:31.961 "seek_hole": false, 00:16:31.961 "seek_data": false, 00:16:31.961 "copy": true, 00:16:31.961 "nvme_iov_md": false 00:16:31.961 }, 00:16:31.961 "memory_domains": [ 00:16:31.961 { 00:16:31.961 "dma_device_id": "system", 00:16:31.961 "dma_device_type": 1 00:16:31.961 }, 00:16:31.961 { 00:16:31.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.961 "dma_device_type": 2 00:16:31.961 } 00:16:31.961 ], 00:16:31.961 "driver_specific": {} 00:16:31.961 }' 00:16:31.961 02:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:31.961 02:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:31.961 02:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:31.961 02:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:31.961 02:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:31.961 02:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:31.961 02:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.219 02:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.219 02:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:32.219 02:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.219 02:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.219 02:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:32.219 02:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:32.219 02:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:32.219 02:22:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:32.483 02:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:32.483 "name": "BaseBdev3", 00:16:32.483 "aliases": [ 00:16:32.483 "297ebd4a-25f7-45c9-beee-82c12e064757" 00:16:32.483 ], 00:16:32.483 "product_name": "Malloc disk", 00:16:32.483 "block_size": 512, 00:16:32.483 "num_blocks": 65536, 00:16:32.483 "uuid": "297ebd4a-25f7-45c9-beee-82c12e064757", 00:16:32.483 "assigned_rate_limits": { 00:16:32.483 "rw_ios_per_sec": 0, 00:16:32.483 "rw_mbytes_per_sec": 0, 00:16:32.483 "r_mbytes_per_sec": 0, 00:16:32.483 "w_mbytes_per_sec": 0 00:16:32.483 }, 00:16:32.483 "claimed": true, 00:16:32.483 "claim_type": "exclusive_write", 00:16:32.483 "zoned": false, 00:16:32.483 "supported_io_types": { 00:16:32.483 "read": true, 00:16:32.483 "write": true, 00:16:32.483 "unmap": true, 00:16:32.483 "flush": true, 00:16:32.483 "reset": true, 00:16:32.483 "nvme_admin": false, 00:16:32.483 "nvme_io": false, 00:16:32.483 "nvme_io_md": false, 00:16:32.483 "write_zeroes": true, 00:16:32.483 "zcopy": true, 00:16:32.483 "get_zone_info": false, 00:16:32.483 "zone_management": false, 00:16:32.483 "zone_append": false, 00:16:32.483 "compare": false, 00:16:32.483 "compare_and_write": false, 00:16:32.483 "abort": true, 00:16:32.483 "seek_hole": false, 00:16:32.483 "seek_data": false, 00:16:32.483 "copy": true, 00:16:32.483 "nvme_iov_md": false 00:16:32.483 }, 00:16:32.483 "memory_domains": [ 00:16:32.483 { 00:16:32.483 "dma_device_id": "system", 00:16:32.483 "dma_device_type": 1 00:16:32.483 }, 00:16:32.483 { 00:16:32.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.483 "dma_device_type": 2 00:16:32.483 } 00:16:32.483 ], 00:16:32.483 "driver_specific": {} 00:16:32.483 }' 00:16:32.483 02:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.483 02:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.483 02:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:32.483 02:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.483 02:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.784 02:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:32.784 02:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.784 02:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.784 02:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:32.784 02:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.784 02:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.784 02:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:32.784 02:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:33.072 [2024-07-11 02:22:23.321243] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:33.072 [2024-07-11 02:22:23.321268] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:33.072 [2024-07-11 02:22:23.321306] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
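# The @203-208 loop above is verify_raid_bdev_properties: every configured
# base bdev must match the raid volume's block_size, md_size, md_interleave
# and dif_type (here 512/null/null/null). A minimal sketch of that check,
# assuming the same hypothetical rpc_py shorthand; the jq filters are the
# ones shown verbatim in the trace.
verify_raid_bdev_properties_sketch() {
    local raid_name=$1 raid_info base_info name field
    raid_info=$(rpc_py bdev_get_bdevs -b "$raid_name" | jq '.[]')
    for name in $(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<< "$raid_info"); do
        base_info=$(rpc_py bdev_get_bdevs -b "$name" | jq '.[]')
        for field in block_size md_size md_interleave dif_type; do
            # raid volume and base bdev must report the same value
            [[ $(jq ".$field" <<< "$raid_info") == "$(jq ".$field" <<< "$base_info")" ]] || return 1
        done
    done
}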
00:16:33.072 02:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:33.072 02:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:16:33.072 02:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:33.072 02:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:33.072 02:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:33.072 02:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:16:33.072 02:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:33.073 02:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:33.073 02:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:33.073 02:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:33.073 02:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:33.073 02:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:33.073 02:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:33.073 02:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:33.073 02:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:33.073 02:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.073 02:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.334 02:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:33.334 "name": "Existed_Raid", 00:16:33.334 "uuid": "2bfdded5-04c9-44d0-bc09-69aa18c0a01c", 00:16:33.334 "strip_size_kb": 64, 00:16:33.334 "state": "offline", 00:16:33.334 "raid_level": "raid0", 00:16:33.334 "superblock": false, 00:16:33.334 "num_base_bdevs": 3, 00:16:33.334 "num_base_bdevs_discovered": 2, 00:16:33.334 "num_base_bdevs_operational": 2, 00:16:33.335 "base_bdevs_list": [ 00:16:33.335 { 00:16:33.335 "name": null, 00:16:33.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.335 "is_configured": false, 00:16:33.335 "data_offset": 0, 00:16:33.335 "data_size": 65536 00:16:33.335 }, 00:16:33.335 { 00:16:33.335 "name": "BaseBdev2", 00:16:33.335 "uuid": "84519369-6998-40c3-8e7d-7bfe0ac33393", 00:16:33.335 "is_configured": true, 00:16:33.335 "data_offset": 0, 00:16:33.335 "data_size": 65536 00:16:33.335 }, 00:16:33.335 { 00:16:33.335 "name": "BaseBdev3", 00:16:33.335 "uuid": "297ebd4a-25f7-45c9-beee-82c12e064757", 00:16:33.335 "is_configured": true, 00:16:33.335 "data_offset": 0, 00:16:33.335 "data_size": 65536 00:16:33.335 } 00:16:33.335 ] 00:16:33.335 }' 00:16:33.335 02:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:33.335 02:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.902 02:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:33.902 02:22:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:33.902 02:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.902 02:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:34.161 02:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:34.161 02:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:34.161 02:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:34.421 [2024-07-11 02:22:24.665846] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:34.421 02:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:34.421 02:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:34.421 02:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.421 02:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:34.681 02:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:34.681 02:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:34.681 02:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:34.940 [2024-07-11 02:22:25.175143] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:34.940 [2024-07-11 02:22:25.175185] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb87cf0 name Existed_Raid, state offline 00:16:34.940 02:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:34.940 02:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:34.940 02:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.940 02:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:35.200 02:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:35.200 02:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:35.200 02:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:16:35.200 02:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:35.200 02:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:35.200 02:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:35.459 BaseBdev2 00:16:35.459 02:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 
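# Why expected_state flipped to offline above: after bdev_malloc_delete
# removed BaseBdev1, has_redundancy raid0 (bdev_raid.sh@213-215) returned 1,
# so the harness expects the array to go offline instead of staying online
# with a reduced set. A sketch consistent with this run; the exact list of
# redundant levels may differ between SPDK versions.
has_redundancy_sketch() {
    case $1 in
        raid1 | raid5f) return 0 ;; # levels that tolerate losing a base bdev
        *) return 1 ;;              # raid0, as tested here, does not
    esac
}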
00:16:35.459 02:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:35.459 02:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:35.459 02:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:35.459 02:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:35.459 02:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:35.459 02:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:35.718 02:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:35.976 [ 00:16:35.976 { 00:16:35.976 "name": "BaseBdev2", 00:16:35.976 "aliases": [ 00:16:35.976 "5a6fdb51-a39c-4dfa-9036-6d6d1203d145" 00:16:35.976 ], 00:16:35.976 "product_name": "Malloc disk", 00:16:35.976 "block_size": 512, 00:16:35.976 "num_blocks": 65536, 00:16:35.976 "uuid": "5a6fdb51-a39c-4dfa-9036-6d6d1203d145", 00:16:35.976 "assigned_rate_limits": { 00:16:35.976 "rw_ios_per_sec": 0, 00:16:35.976 "rw_mbytes_per_sec": 0, 00:16:35.976 "r_mbytes_per_sec": 0, 00:16:35.976 "w_mbytes_per_sec": 0 00:16:35.976 }, 00:16:35.976 "claimed": false, 00:16:35.976 "zoned": false, 00:16:35.976 "supported_io_types": { 00:16:35.976 "read": true, 00:16:35.976 "write": true, 00:16:35.976 "unmap": true, 00:16:35.976 "flush": true, 00:16:35.976 "reset": true, 00:16:35.976 "nvme_admin": false, 00:16:35.976 "nvme_io": false, 00:16:35.976 "nvme_io_md": false, 00:16:35.976 "write_zeroes": true, 00:16:35.976 "zcopy": true, 00:16:35.976 "get_zone_info": false, 00:16:35.976 "zone_management": false, 00:16:35.976 "zone_append": false, 00:16:35.976 "compare": false, 00:16:35.976 "compare_and_write": false, 00:16:35.976 "abort": true, 00:16:35.976 "seek_hole": false, 00:16:35.976 "seek_data": false, 00:16:35.976 "copy": true, 00:16:35.976 "nvme_iov_md": false 00:16:35.976 }, 00:16:35.976 "memory_domains": [ 00:16:35.976 { 00:16:35.976 "dma_device_id": "system", 00:16:35.976 "dma_device_type": 1 00:16:35.976 }, 00:16:35.976 { 00:16:35.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.976 "dma_device_type": 2 00:16:35.976 } 00:16:35.976 ], 00:16:35.976 "driver_specific": {} 00:16:35.976 } 00:16:35.976 ] 00:16:35.976 02:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:35.976 02:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:35.976 02:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:35.976 02:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:36.235 BaseBdev3 00:16:36.235 02:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:36.235 02:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:36.235 02:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:36.235 02:22:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:36.235 02:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:36.235 02:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:36.235 02:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:36.235 02:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:36.494 [ 00:16:36.494 { 00:16:36.494 "name": "BaseBdev3", 00:16:36.494 "aliases": [ 00:16:36.494 "2e868cd8-87b8-4b38-8bcd-22fd083f93d4" 00:16:36.494 ], 00:16:36.494 "product_name": "Malloc disk", 00:16:36.494 "block_size": 512, 00:16:36.494 "num_blocks": 65536, 00:16:36.494 "uuid": "2e868cd8-87b8-4b38-8bcd-22fd083f93d4", 00:16:36.494 "assigned_rate_limits": { 00:16:36.494 "rw_ios_per_sec": 0, 00:16:36.494 "rw_mbytes_per_sec": 0, 00:16:36.494 "r_mbytes_per_sec": 0, 00:16:36.494 "w_mbytes_per_sec": 0 00:16:36.494 }, 00:16:36.494 "claimed": false, 00:16:36.494 "zoned": false, 00:16:36.494 "supported_io_types": { 00:16:36.494 "read": true, 00:16:36.494 "write": true, 00:16:36.494 "unmap": true, 00:16:36.494 "flush": true, 00:16:36.494 "reset": true, 00:16:36.494 "nvme_admin": false, 00:16:36.494 "nvme_io": false, 00:16:36.494 "nvme_io_md": false, 00:16:36.494 "write_zeroes": true, 00:16:36.494 "zcopy": true, 00:16:36.494 "get_zone_info": false, 00:16:36.494 "zone_management": false, 00:16:36.494 "zone_append": false, 00:16:36.494 "compare": false, 00:16:36.494 "compare_and_write": false, 00:16:36.494 "abort": true, 00:16:36.494 "seek_hole": false, 00:16:36.494 "seek_data": false, 00:16:36.494 "copy": true, 00:16:36.494 "nvme_iov_md": false 00:16:36.494 }, 00:16:36.494 "memory_domains": [ 00:16:36.494 { 00:16:36.494 "dma_device_id": "system", 00:16:36.494 "dma_device_type": 1 00:16:36.494 }, 00:16:36.494 { 00:16:36.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.494 "dma_device_type": 2 00:16:36.494 } 00:16:36.494 ], 00:16:36.494 "driver_specific": {} 00:16:36.494 } 00:16:36.494 ] 00:16:36.494 02:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:36.494 02:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:36.494 02:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:36.494 02:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:36.753 [2024-07-11 02:22:27.093015] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:36.753 [2024-07-11 02:22:27.093054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:36.753 [2024-07-11 02:22:27.093073] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:36.753 [2024-07-11 02:22:27.094393] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:36.753 02:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:16:36.753 02:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:36.753 02:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:36.753 02:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:36.753 02:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:36.753 02:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:36.753 02:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:36.753 02:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:36.753 02:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:36.753 02:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:36.753 02:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.753 02:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.011 02:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:37.011 "name": "Existed_Raid", 00:16:37.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.011 "strip_size_kb": 64, 00:16:37.011 "state": "configuring", 00:16:37.011 "raid_level": "raid0", 00:16:37.011 "superblock": false, 00:16:37.011 "num_base_bdevs": 3, 00:16:37.011 "num_base_bdevs_discovered": 2, 00:16:37.011 "num_base_bdevs_operational": 3, 00:16:37.011 "base_bdevs_list": [ 00:16:37.011 { 00:16:37.011 "name": "BaseBdev1", 00:16:37.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.011 "is_configured": false, 00:16:37.011 "data_offset": 0, 00:16:37.011 "data_size": 0 00:16:37.011 }, 00:16:37.011 { 00:16:37.011 "name": "BaseBdev2", 00:16:37.011 "uuid": "5a6fdb51-a39c-4dfa-9036-6d6d1203d145", 00:16:37.011 "is_configured": true, 00:16:37.011 "data_offset": 0, 00:16:37.011 "data_size": 65536 00:16:37.011 }, 00:16:37.011 { 00:16:37.011 "name": "BaseBdev3", 00:16:37.011 "uuid": "2e868cd8-87b8-4b38-8bcd-22fd083f93d4", 00:16:37.011 "is_configured": true, 00:16:37.011 "data_offset": 0, 00:16:37.011 "data_size": 65536 00:16:37.011 } 00:16:37.011 ] 00:16:37.011 }' 00:16:37.011 02:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:37.011 02:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.578 02:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:37.837 [2024-07-11 02:22:28.167842] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:37.837 02:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:37.837 02:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:37.837 02:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:37.837 02:22:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:37.837 02:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:37.837 02:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:37.837 02:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:37.837 02:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:37.837 02:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:37.837 02:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:37.837 02:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.837 02:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.096 02:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:38.096 "name": "Existed_Raid", 00:16:38.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.096 "strip_size_kb": 64, 00:16:38.096 "state": "configuring", 00:16:38.096 "raid_level": "raid0", 00:16:38.096 "superblock": false, 00:16:38.096 "num_base_bdevs": 3, 00:16:38.096 "num_base_bdevs_discovered": 1, 00:16:38.096 "num_base_bdevs_operational": 3, 00:16:38.096 "base_bdevs_list": [ 00:16:38.096 { 00:16:38.096 "name": "BaseBdev1", 00:16:38.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.096 "is_configured": false, 00:16:38.096 "data_offset": 0, 00:16:38.096 "data_size": 0 00:16:38.096 }, 00:16:38.096 { 00:16:38.096 "name": null, 00:16:38.096 "uuid": "5a6fdb51-a39c-4dfa-9036-6d6d1203d145", 00:16:38.096 "is_configured": false, 00:16:38.096 "data_offset": 0, 00:16:38.096 "data_size": 65536 00:16:38.096 }, 00:16:38.096 { 00:16:38.096 "name": "BaseBdev3", 00:16:38.096 "uuid": "2e868cd8-87b8-4b38-8bcd-22fd083f93d4", 00:16:38.096 "is_configured": true, 00:16:38.096 "data_offset": 0, 00:16:38.096 "data_size": 65536 00:16:38.096 } 00:16:38.096 ] 00:16:38.096 }' 00:16:38.096 02:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:38.096 02:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.665 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.665 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:38.923 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:38.923 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:39.182 [2024-07-11 02:22:29.511847] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:39.182 BaseBdev1 00:16:39.182 02:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:39.182 02:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:39.182 02:22:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:39.182 02:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:39.182 02:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:39.182 02:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:39.182 02:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:39.441 02:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:39.701 [ 00:16:39.701 { 00:16:39.701 "name": "BaseBdev1", 00:16:39.701 "aliases": [ 00:16:39.701 "7c99ec97-57dc-40b8-87ad-93331459a6d7" 00:16:39.701 ], 00:16:39.701 "product_name": "Malloc disk", 00:16:39.701 "block_size": 512, 00:16:39.701 "num_blocks": 65536, 00:16:39.701 "uuid": "7c99ec97-57dc-40b8-87ad-93331459a6d7", 00:16:39.701 "assigned_rate_limits": { 00:16:39.701 "rw_ios_per_sec": 0, 00:16:39.701 "rw_mbytes_per_sec": 0, 00:16:39.701 "r_mbytes_per_sec": 0, 00:16:39.701 "w_mbytes_per_sec": 0 00:16:39.701 }, 00:16:39.701 "claimed": true, 00:16:39.701 "claim_type": "exclusive_write", 00:16:39.701 "zoned": false, 00:16:39.701 "supported_io_types": { 00:16:39.701 "read": true, 00:16:39.701 "write": true, 00:16:39.701 "unmap": true, 00:16:39.701 "flush": true, 00:16:39.701 "reset": true, 00:16:39.701 "nvme_admin": false, 00:16:39.701 "nvme_io": false, 00:16:39.701 "nvme_io_md": false, 00:16:39.701 "write_zeroes": true, 00:16:39.701 "zcopy": true, 00:16:39.701 "get_zone_info": false, 00:16:39.701 "zone_management": false, 00:16:39.701 "zone_append": false, 00:16:39.701 "compare": false, 00:16:39.701 "compare_and_write": false, 00:16:39.701 "abort": true, 00:16:39.701 "seek_hole": false, 00:16:39.701 "seek_data": false, 00:16:39.701 "copy": true, 00:16:39.701 "nvme_iov_md": false 00:16:39.701 }, 00:16:39.701 "memory_domains": [ 00:16:39.701 { 00:16:39.701 "dma_device_id": "system", 00:16:39.701 "dma_device_type": 1 00:16:39.701 }, 00:16:39.701 { 00:16:39.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.701 "dma_device_type": 2 00:16:39.701 } 00:16:39.701 ], 00:16:39.701 "driver_specific": {} 00:16:39.701 } 00:16:39.701 ] 00:16:39.701 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:39.701 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:39.701 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:39.701 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:39.701 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:39.701 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:39.701 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:39.701 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:39.701 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:39.701 02:22:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:39.701 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:39.701 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.701 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.960 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:39.960 "name": "Existed_Raid", 00:16:39.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.960 "strip_size_kb": 64, 00:16:39.960 "state": "configuring", 00:16:39.960 "raid_level": "raid0", 00:16:39.960 "superblock": false, 00:16:39.960 "num_base_bdevs": 3, 00:16:39.960 "num_base_bdevs_discovered": 2, 00:16:39.960 "num_base_bdevs_operational": 3, 00:16:39.960 "base_bdevs_list": [ 00:16:39.960 { 00:16:39.960 "name": "BaseBdev1", 00:16:39.960 "uuid": "7c99ec97-57dc-40b8-87ad-93331459a6d7", 00:16:39.960 "is_configured": true, 00:16:39.960 "data_offset": 0, 00:16:39.960 "data_size": 65536 00:16:39.960 }, 00:16:39.960 { 00:16:39.960 "name": null, 00:16:39.960 "uuid": "5a6fdb51-a39c-4dfa-9036-6d6d1203d145", 00:16:39.960 "is_configured": false, 00:16:39.960 "data_offset": 0, 00:16:39.960 "data_size": 65536 00:16:39.960 }, 00:16:39.960 { 00:16:39.960 "name": "BaseBdev3", 00:16:39.960 "uuid": "2e868cd8-87b8-4b38-8bcd-22fd083f93d4", 00:16:39.960 "is_configured": true, 00:16:39.960 "data_offset": 0, 00:16:39.960 "data_size": 65536 00:16:39.960 } 00:16:39.960 ] 00:16:39.960 }' 00:16:39.960 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:39.960 02:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.526 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.526 02:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:40.785 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:40.785 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:41.044 [2024-07-11 02:22:31.304637] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:41.044 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:41.044 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:41.044 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:41.044 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:41.044 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:41.044 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:41.044 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:41.044 02:22:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:41.044 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:41.044 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:41.044 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.044 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.304 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:41.304 "name": "Existed_Raid", 00:16:41.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.304 "strip_size_kb": 64, 00:16:41.304 "state": "configuring", 00:16:41.304 "raid_level": "raid0", 00:16:41.304 "superblock": false, 00:16:41.304 "num_base_bdevs": 3, 00:16:41.304 "num_base_bdevs_discovered": 1, 00:16:41.304 "num_base_bdevs_operational": 3, 00:16:41.304 "base_bdevs_list": [ 00:16:41.304 { 00:16:41.304 "name": "BaseBdev1", 00:16:41.304 "uuid": "7c99ec97-57dc-40b8-87ad-93331459a6d7", 00:16:41.304 "is_configured": true, 00:16:41.304 "data_offset": 0, 00:16:41.304 "data_size": 65536 00:16:41.304 }, 00:16:41.304 { 00:16:41.304 "name": null, 00:16:41.304 "uuid": "5a6fdb51-a39c-4dfa-9036-6d6d1203d145", 00:16:41.304 "is_configured": false, 00:16:41.304 "data_offset": 0, 00:16:41.304 "data_size": 65536 00:16:41.304 }, 00:16:41.304 { 00:16:41.304 "name": null, 00:16:41.304 "uuid": "2e868cd8-87b8-4b38-8bcd-22fd083f93d4", 00:16:41.304 "is_configured": false, 00:16:41.304 "data_offset": 0, 00:16:41.304 "data_size": 65536 00:16:41.304 } 00:16:41.304 ] 00:16:41.304 }' 00:16:41.304 02:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:41.304 02:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.872 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.872 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:42.130 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:42.130 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:42.390 [2024-07-11 02:22:32.668332] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:42.390 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:42.390 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:42.390 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:42.390 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:42.390 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:42.390 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
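# verify_raid_bdev_state (bdev_raid.sh@116-128) is the pattern repeated
# throughout this log: capture the raid bdev JSON, then assert on its
# fields. The assertions run after xtrace_disable, which is why only the
# capture and the JSON dump show up in the trace. A minimal sketch under
# the same rpc_py assumption; field names match the dumps above.
verify_raid_bdev_state_sketch() {
    local raid_bdev_name=$1 expected_state=$2 raid_level=$3
    local strip_size=$4 num_base_bdevs_operational=$5 raid_bdev_info
    raid_bdev_info=$(rpc_py bdev_raid_get_bdevs all \
        | jq -r ".[] | select(.name == \"$raid_bdev_name\")")
    [[ $(jq -r '.state' <<< "$raid_bdev_info") == "$expected_state" ]] || return 1
    [[ $(jq -r '.raid_level' <<< "$raid_bdev_info") == "$raid_level" ]] || return 1
    [[ $(jq -r '.strip_size_kb' <<< "$raid_bdev_info") == "$strip_size" ]] || return 1
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info") == "$num_base_bdevs_operational" ]] || return 1
}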
00:16:42.390 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:42.390 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:42.390 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:42.390 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:42.390 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.390 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.649 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:42.649 "name": "Existed_Raid", 00:16:42.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.649 "strip_size_kb": 64, 00:16:42.649 "state": "configuring", 00:16:42.649 "raid_level": "raid0", 00:16:42.649 "superblock": false, 00:16:42.649 "num_base_bdevs": 3, 00:16:42.649 "num_base_bdevs_discovered": 2, 00:16:42.649 "num_base_bdevs_operational": 3, 00:16:42.649 "base_bdevs_list": [ 00:16:42.649 { 00:16:42.649 "name": "BaseBdev1", 00:16:42.649 "uuid": "7c99ec97-57dc-40b8-87ad-93331459a6d7", 00:16:42.649 "is_configured": true, 00:16:42.649 "data_offset": 0, 00:16:42.649 "data_size": 65536 00:16:42.649 }, 00:16:42.649 { 00:16:42.649 "name": null, 00:16:42.649 "uuid": "5a6fdb51-a39c-4dfa-9036-6d6d1203d145", 00:16:42.649 "is_configured": false, 00:16:42.649 "data_offset": 0, 00:16:42.649 "data_size": 65536 00:16:42.649 }, 00:16:42.649 { 00:16:42.649 "name": "BaseBdev3", 00:16:42.649 "uuid": "2e868cd8-87b8-4b38-8bcd-22fd083f93d4", 00:16:42.649 "is_configured": true, 00:16:42.649 "data_offset": 0, 00:16:42.649 "data_size": 65536 00:16:42.649 } 00:16:42.649 ] 00:16:42.649 }' 00:16:42.649 02:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:42.649 02:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.585 02:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.585 02:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:43.585 02:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:43.585 02:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:43.844 [2024-07-11 02:22:34.252555] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:44.103 02:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:44.103 02:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:44.103 02:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:44.103 02:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:44.103 02:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:44.103 02:22:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:44.103 02:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:44.103 02:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:44.103 02:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:44.103 02:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:44.103 02:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.103 02:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.671 02:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:44.671 "name": "Existed_Raid", 00:16:44.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.671 "strip_size_kb": 64, 00:16:44.671 "state": "configuring", 00:16:44.671 "raid_level": "raid0", 00:16:44.671 "superblock": false, 00:16:44.671 "num_base_bdevs": 3, 00:16:44.671 "num_base_bdevs_discovered": 1, 00:16:44.671 "num_base_bdevs_operational": 3, 00:16:44.671 "base_bdevs_list": [ 00:16:44.671 { 00:16:44.671 "name": null, 00:16:44.671 "uuid": "7c99ec97-57dc-40b8-87ad-93331459a6d7", 00:16:44.671 "is_configured": false, 00:16:44.671 "data_offset": 0, 00:16:44.671 "data_size": 65536 00:16:44.671 }, 00:16:44.671 { 00:16:44.671 "name": null, 00:16:44.671 "uuid": "5a6fdb51-a39c-4dfa-9036-6d6d1203d145", 00:16:44.671 "is_configured": false, 00:16:44.671 "data_offset": 0, 00:16:44.671 "data_size": 65536 00:16:44.671 }, 00:16:44.671 { 00:16:44.671 "name": "BaseBdev3", 00:16:44.671 "uuid": "2e868cd8-87b8-4b38-8bcd-22fd083f93d4", 00:16:44.671 "is_configured": true, 00:16:44.671 "data_offset": 0, 00:16:44.671 "data_size": 65536 00:16:44.671 } 00:16:44.671 ] 00:16:44.671 }' 00:16:44.671 02:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:44.672 02:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.239 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:45.239 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.239 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:45.239 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:45.498 [2024-07-11 02:22:35.868852] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:45.499 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:45.499 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:45.499 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:45.499 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 
00:16:45.499 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:45.499 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:45.499 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:45.499 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:45.499 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:45.499 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:45.499 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.499 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.758 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:45.758 "name": "Existed_Raid", 00:16:45.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.758 "strip_size_kb": 64, 00:16:45.758 "state": "configuring", 00:16:45.758 "raid_level": "raid0", 00:16:45.758 "superblock": false, 00:16:45.758 "num_base_bdevs": 3, 00:16:45.758 "num_base_bdevs_discovered": 2, 00:16:45.758 "num_base_bdevs_operational": 3, 00:16:45.758 "base_bdevs_list": [ 00:16:45.758 { 00:16:45.758 "name": null, 00:16:45.758 "uuid": "7c99ec97-57dc-40b8-87ad-93331459a6d7", 00:16:45.758 "is_configured": false, 00:16:45.758 "data_offset": 0, 00:16:45.758 "data_size": 65536 00:16:45.758 }, 00:16:45.758 { 00:16:45.758 "name": "BaseBdev2", 00:16:45.758 "uuid": "5a6fdb51-a39c-4dfa-9036-6d6d1203d145", 00:16:45.758 "is_configured": true, 00:16:45.758 "data_offset": 0, 00:16:45.758 "data_size": 65536 00:16:45.758 }, 00:16:45.758 { 00:16:45.758 "name": "BaseBdev3", 00:16:45.758 "uuid": "2e868cd8-87b8-4b38-8bcd-22fd083f93d4", 00:16:45.758 "is_configured": true, 00:16:45.758 "data_offset": 0, 00:16:45.758 "data_size": 65536 00:16:45.758 } 00:16:45.758 ] 00:16:45.758 }' 00:16:45.758 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:45.758 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.326 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:46.326 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.585 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:46.585 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.585 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:46.844 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 7c99ec97-57dc-40b8-87ad-93331459a6d7 00:16:47.104 [2024-07-11 02:22:37.469595] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 
00:16:47.104 [2024-07-11 02:22:37.469635] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x9d5bc0 00:16:47.104 [2024-07-11 02:22:37.469644] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:47.104 [2024-07-11 02:22:37.469843] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9d7a90 00:16:47.104 [2024-07-11 02:22:37.469959] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9d5bc0 00:16:47.104 [2024-07-11 02:22:37.469968] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x9d5bc0 00:16:47.104 [2024-07-11 02:22:37.470121] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.104 NewBaseBdev 00:16:47.104 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:47.104 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:16:47.104 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:47.104 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:47.104 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:47.104 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:47.104 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:47.363 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:47.641 [ 00:16:47.641 { 00:16:47.641 "name": "NewBaseBdev", 00:16:47.641 "aliases": [ 00:16:47.641 "7c99ec97-57dc-40b8-87ad-93331459a6d7" 00:16:47.641 ], 00:16:47.641 "product_name": "Malloc disk", 00:16:47.641 "block_size": 512, 00:16:47.641 "num_blocks": 65536, 00:16:47.641 "uuid": "7c99ec97-57dc-40b8-87ad-93331459a6d7", 00:16:47.641 "assigned_rate_limits": { 00:16:47.641 "rw_ios_per_sec": 0, 00:16:47.641 "rw_mbytes_per_sec": 0, 00:16:47.641 "r_mbytes_per_sec": 0, 00:16:47.641 "w_mbytes_per_sec": 0 00:16:47.641 }, 00:16:47.641 "claimed": true, 00:16:47.641 "claim_type": "exclusive_write", 00:16:47.641 "zoned": false, 00:16:47.641 "supported_io_types": { 00:16:47.641 "read": true, 00:16:47.641 "write": true, 00:16:47.641 "unmap": true, 00:16:47.641 "flush": true, 00:16:47.641 "reset": true, 00:16:47.642 "nvme_admin": false, 00:16:47.642 "nvme_io": false, 00:16:47.642 "nvme_io_md": false, 00:16:47.642 "write_zeroes": true, 00:16:47.642 "zcopy": true, 00:16:47.642 "get_zone_info": false, 00:16:47.642 "zone_management": false, 00:16:47.642 "zone_append": false, 00:16:47.642 "compare": false, 00:16:47.642 "compare_and_write": false, 00:16:47.642 "abort": true, 00:16:47.642 "seek_hole": false, 00:16:47.642 "seek_data": false, 00:16:47.642 "copy": true, 00:16:47.642 "nvme_iov_md": false 00:16:47.642 }, 00:16:47.642 "memory_domains": [ 00:16:47.642 { 00:16:47.642 "dma_device_id": "system", 00:16:47.642 "dma_device_type": 1 00:16:47.642 }, 00:16:47.642 { 00:16:47.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.642 "dma_device_type": 2 00:16:47.642 } 00:16:47.642 ], 00:16:47.642 "driver_specific": {} 00:16:47.642 } 00:16:47.642 ] 
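The sequence just traced is the heart of this test case: BaseBdev1's backing malloc bdev was deleted earlier (its slot in base_bdevs_list dropped to name null / is_configured false), and the test then created a fresh 32 MB, 512-byte-block malloc bdev carrying the original UUID. The raid matches on the remembered UUID rather than the bdev name and claims the new bdev without an explicit add call, as the "bdev NewBaseBdev is claimed" message shows. Replayed by hand over the same rpc.py socket it would look roughly like this (a sketch, not the harness's actual code; $rpc/$sock shorthand as in the earlier illustration):

    # Recover the UUID the raid still holds for the now-empty slot 0 ...
    uuid=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[0].base_bdevs_list[0].uuid')
    # ... and recreate the backing bdev under a new name but the old UUID;
    # the raid reclaims it into the slot automatically.
    $rpc -s $sock bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"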
00:16:47.642 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:47.642 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:47.642 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:47.642 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:47.642 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:47.642 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:47.642 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:47.642 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:47.642 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:47.642 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:47.642 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:47.642 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.642 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.899 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:47.899 "name": "Existed_Raid", 00:16:47.899 "uuid": "f24b2a08-e35c-4d58-b326-b3dfc306aca0", 00:16:47.899 "strip_size_kb": 64, 00:16:47.899 "state": "online", 00:16:47.899 "raid_level": "raid0", 00:16:47.899 "superblock": false, 00:16:47.899 "num_base_bdevs": 3, 00:16:47.899 "num_base_bdevs_discovered": 3, 00:16:47.899 "num_base_bdevs_operational": 3, 00:16:47.899 "base_bdevs_list": [ 00:16:47.899 { 00:16:47.899 "name": "NewBaseBdev", 00:16:47.899 "uuid": "7c99ec97-57dc-40b8-87ad-93331459a6d7", 00:16:47.899 "is_configured": true, 00:16:47.899 "data_offset": 0, 00:16:47.899 "data_size": 65536 00:16:47.899 }, 00:16:47.899 { 00:16:47.899 "name": "BaseBdev2", 00:16:47.899 "uuid": "5a6fdb51-a39c-4dfa-9036-6d6d1203d145", 00:16:47.899 "is_configured": true, 00:16:47.899 "data_offset": 0, 00:16:47.899 "data_size": 65536 00:16:47.899 }, 00:16:47.899 { 00:16:47.899 "name": "BaseBdev3", 00:16:47.899 "uuid": "2e868cd8-87b8-4b38-8bcd-22fd083f93d4", 00:16:47.899 "is_configured": true, 00:16:47.899 "data_offset": 0, 00:16:47.899 "data_size": 65536 00:16:47.899 } 00:16:47.899 ] 00:16:47.899 }' 00:16:47.899 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:47.899 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.464 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:48.465 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:48.465 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:48.465 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:48.465 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local 
base_bdev_names 00:16:48.465 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:48.465 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:48.465 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:48.723 [2024-07-11 02:22:39.026037] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:48.723 02:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:48.723 "name": "Existed_Raid", 00:16:48.723 "aliases": [ 00:16:48.723 "f24b2a08-e35c-4d58-b326-b3dfc306aca0" 00:16:48.723 ], 00:16:48.723 "product_name": "Raid Volume", 00:16:48.723 "block_size": 512, 00:16:48.723 "num_blocks": 196608, 00:16:48.723 "uuid": "f24b2a08-e35c-4d58-b326-b3dfc306aca0", 00:16:48.723 "assigned_rate_limits": { 00:16:48.723 "rw_ios_per_sec": 0, 00:16:48.723 "rw_mbytes_per_sec": 0, 00:16:48.723 "r_mbytes_per_sec": 0, 00:16:48.723 "w_mbytes_per_sec": 0 00:16:48.723 }, 00:16:48.723 "claimed": false, 00:16:48.723 "zoned": false, 00:16:48.723 "supported_io_types": { 00:16:48.723 "read": true, 00:16:48.723 "write": true, 00:16:48.723 "unmap": true, 00:16:48.723 "flush": true, 00:16:48.723 "reset": true, 00:16:48.723 "nvme_admin": false, 00:16:48.723 "nvme_io": false, 00:16:48.723 "nvme_io_md": false, 00:16:48.723 "write_zeroes": true, 00:16:48.723 "zcopy": false, 00:16:48.723 "get_zone_info": false, 00:16:48.723 "zone_management": false, 00:16:48.723 "zone_append": false, 00:16:48.723 "compare": false, 00:16:48.723 "compare_and_write": false, 00:16:48.723 "abort": false, 00:16:48.723 "seek_hole": false, 00:16:48.723 "seek_data": false, 00:16:48.723 "copy": false, 00:16:48.723 "nvme_iov_md": false 00:16:48.723 }, 00:16:48.723 "memory_domains": [ 00:16:48.723 { 00:16:48.723 "dma_device_id": "system", 00:16:48.723 "dma_device_type": 1 00:16:48.723 }, 00:16:48.723 { 00:16:48.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.723 "dma_device_type": 2 00:16:48.723 }, 00:16:48.723 { 00:16:48.723 "dma_device_id": "system", 00:16:48.723 "dma_device_type": 1 00:16:48.723 }, 00:16:48.723 { 00:16:48.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.723 "dma_device_type": 2 00:16:48.723 }, 00:16:48.723 { 00:16:48.723 "dma_device_id": "system", 00:16:48.723 "dma_device_type": 1 00:16:48.723 }, 00:16:48.723 { 00:16:48.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.724 "dma_device_type": 2 00:16:48.724 } 00:16:48.724 ], 00:16:48.724 "driver_specific": { 00:16:48.724 "raid": { 00:16:48.724 "uuid": "f24b2a08-e35c-4d58-b326-b3dfc306aca0", 00:16:48.724 "strip_size_kb": 64, 00:16:48.724 "state": "online", 00:16:48.724 "raid_level": "raid0", 00:16:48.724 "superblock": false, 00:16:48.724 "num_base_bdevs": 3, 00:16:48.724 "num_base_bdevs_discovered": 3, 00:16:48.724 "num_base_bdevs_operational": 3, 00:16:48.724 "base_bdevs_list": [ 00:16:48.724 { 00:16:48.724 "name": "NewBaseBdev", 00:16:48.724 "uuid": "7c99ec97-57dc-40b8-87ad-93331459a6d7", 00:16:48.724 "is_configured": true, 00:16:48.724 "data_offset": 0, 00:16:48.724 "data_size": 65536 00:16:48.724 }, 00:16:48.724 { 00:16:48.724 "name": "BaseBdev2", 00:16:48.724 "uuid": "5a6fdb51-a39c-4dfa-9036-6d6d1203d145", 00:16:48.724 "is_configured": true, 00:16:48.724 "data_offset": 0, 00:16:48.724 "data_size": 65536 00:16:48.724 }, 00:16:48.724 { 00:16:48.724 "name": "BaseBdev3", 00:16:48.724 
"uuid": "2e868cd8-87b8-4b38-8bcd-22fd083f93d4", 00:16:48.724 "is_configured": true, 00:16:48.724 "data_offset": 0, 00:16:48.724 "data_size": 65536 00:16:48.724 } 00:16:48.724 ] 00:16:48.724 } 00:16:48.724 } 00:16:48.724 }' 00:16:48.724 02:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:48.724 02:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:48.724 BaseBdev2 00:16:48.724 BaseBdev3' 00:16:48.724 02:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:48.724 02:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:48.724 02:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:48.982 02:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:48.982 "name": "NewBaseBdev", 00:16:48.982 "aliases": [ 00:16:48.982 "7c99ec97-57dc-40b8-87ad-93331459a6d7" 00:16:48.982 ], 00:16:48.982 "product_name": "Malloc disk", 00:16:48.982 "block_size": 512, 00:16:48.982 "num_blocks": 65536, 00:16:48.982 "uuid": "7c99ec97-57dc-40b8-87ad-93331459a6d7", 00:16:48.982 "assigned_rate_limits": { 00:16:48.982 "rw_ios_per_sec": 0, 00:16:48.982 "rw_mbytes_per_sec": 0, 00:16:48.982 "r_mbytes_per_sec": 0, 00:16:48.982 "w_mbytes_per_sec": 0 00:16:48.982 }, 00:16:48.982 "claimed": true, 00:16:48.982 "claim_type": "exclusive_write", 00:16:48.982 "zoned": false, 00:16:48.982 "supported_io_types": { 00:16:48.982 "read": true, 00:16:48.982 "write": true, 00:16:48.982 "unmap": true, 00:16:48.982 "flush": true, 00:16:48.982 "reset": true, 00:16:48.982 "nvme_admin": false, 00:16:48.982 "nvme_io": false, 00:16:48.982 "nvme_io_md": false, 00:16:48.982 "write_zeroes": true, 00:16:48.982 "zcopy": true, 00:16:48.982 "get_zone_info": false, 00:16:48.982 "zone_management": false, 00:16:48.982 "zone_append": false, 00:16:48.982 "compare": false, 00:16:48.982 "compare_and_write": false, 00:16:48.982 "abort": true, 00:16:48.982 "seek_hole": false, 00:16:48.982 "seek_data": false, 00:16:48.982 "copy": true, 00:16:48.982 "nvme_iov_md": false 00:16:48.982 }, 00:16:48.982 "memory_domains": [ 00:16:48.982 { 00:16:48.982 "dma_device_id": "system", 00:16:48.982 "dma_device_type": 1 00:16:48.982 }, 00:16:48.982 { 00:16:48.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.982 "dma_device_type": 2 00:16:48.982 } 00:16:48.982 ], 00:16:48.982 "driver_specific": {} 00:16:48.982 }' 00:16:48.982 02:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:48.982 02:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:49.240 02:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:49.240 02:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:49.240 02:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:49.240 02:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:49.240 02:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:49.240 02:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:49.240 02:22:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:49.240 02:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:49.240 02:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:49.498 02:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:49.498 02:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:49.498 02:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:49.498 02:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:49.757 02:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:49.757 "name": "BaseBdev2", 00:16:49.757 "aliases": [ 00:16:49.757 "5a6fdb51-a39c-4dfa-9036-6d6d1203d145" 00:16:49.757 ], 00:16:49.757 "product_name": "Malloc disk", 00:16:49.757 "block_size": 512, 00:16:49.757 "num_blocks": 65536, 00:16:49.757 "uuid": "5a6fdb51-a39c-4dfa-9036-6d6d1203d145", 00:16:49.757 "assigned_rate_limits": { 00:16:49.757 "rw_ios_per_sec": 0, 00:16:49.757 "rw_mbytes_per_sec": 0, 00:16:49.757 "r_mbytes_per_sec": 0, 00:16:49.757 "w_mbytes_per_sec": 0 00:16:49.757 }, 00:16:49.757 "claimed": true, 00:16:49.757 "claim_type": "exclusive_write", 00:16:49.757 "zoned": false, 00:16:49.757 "supported_io_types": { 00:16:49.757 "read": true, 00:16:49.757 "write": true, 00:16:49.757 "unmap": true, 00:16:49.757 "flush": true, 00:16:49.757 "reset": true, 00:16:49.757 "nvme_admin": false, 00:16:49.757 "nvme_io": false, 00:16:49.757 "nvme_io_md": false, 00:16:49.757 "write_zeroes": true, 00:16:49.757 "zcopy": true, 00:16:49.757 "get_zone_info": false, 00:16:49.757 "zone_management": false, 00:16:49.757 "zone_append": false, 00:16:49.757 "compare": false, 00:16:49.757 "compare_and_write": false, 00:16:49.757 "abort": true, 00:16:49.757 "seek_hole": false, 00:16:49.757 "seek_data": false, 00:16:49.757 "copy": true, 00:16:49.757 "nvme_iov_md": false 00:16:49.757 }, 00:16:49.757 "memory_domains": [ 00:16:49.757 { 00:16:49.757 "dma_device_id": "system", 00:16:49.757 "dma_device_type": 1 00:16:49.757 }, 00:16:49.757 { 00:16:49.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.757 "dma_device_type": 2 00:16:49.757 } 00:16:49.757 ], 00:16:49.757 "driver_specific": {} 00:16:49.757 }' 00:16:49.758 02:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:49.758 02:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:49.758 02:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:49.758 02:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:49.758 02:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:49.758 02:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:49.758 02:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:50.016 02:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:50.016 02:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:50.016 02:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:16:50.016 02:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:50.016 02:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:50.016 02:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:50.016 02:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:50.016 02:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:50.275 02:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:50.275 "name": "BaseBdev3", 00:16:50.275 "aliases": [ 00:16:50.275 "2e868cd8-87b8-4b38-8bcd-22fd083f93d4" 00:16:50.275 ], 00:16:50.275 "product_name": "Malloc disk", 00:16:50.275 "block_size": 512, 00:16:50.275 "num_blocks": 65536, 00:16:50.275 "uuid": "2e868cd8-87b8-4b38-8bcd-22fd083f93d4", 00:16:50.275 "assigned_rate_limits": { 00:16:50.275 "rw_ios_per_sec": 0, 00:16:50.275 "rw_mbytes_per_sec": 0, 00:16:50.275 "r_mbytes_per_sec": 0, 00:16:50.275 "w_mbytes_per_sec": 0 00:16:50.275 }, 00:16:50.275 "claimed": true, 00:16:50.275 "claim_type": "exclusive_write", 00:16:50.275 "zoned": false, 00:16:50.275 "supported_io_types": { 00:16:50.275 "read": true, 00:16:50.275 "write": true, 00:16:50.275 "unmap": true, 00:16:50.275 "flush": true, 00:16:50.275 "reset": true, 00:16:50.275 "nvme_admin": false, 00:16:50.275 "nvme_io": false, 00:16:50.275 "nvme_io_md": false, 00:16:50.275 "write_zeroes": true, 00:16:50.275 "zcopy": true, 00:16:50.275 "get_zone_info": false, 00:16:50.275 "zone_management": false, 00:16:50.275 "zone_append": false, 00:16:50.275 "compare": false, 00:16:50.275 "compare_and_write": false, 00:16:50.275 "abort": true, 00:16:50.275 "seek_hole": false, 00:16:50.275 "seek_data": false, 00:16:50.275 "copy": true, 00:16:50.275 "nvme_iov_md": false 00:16:50.275 }, 00:16:50.275 "memory_domains": [ 00:16:50.275 { 00:16:50.275 "dma_device_id": "system", 00:16:50.275 "dma_device_type": 1 00:16:50.275 }, 00:16:50.275 { 00:16:50.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.275 "dma_device_type": 2 00:16:50.275 } 00:16:50.275 ], 00:16:50.275 "driver_specific": {} 00:16:50.275 }' 00:16:50.275 02:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:50.275 02:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:50.534 02:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:50.534 02:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:50.534 02:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:50.534 02:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:50.534 02:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:50.534 02:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:50.534 02:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:50.534 02:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:50.534 02:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:50.792 02:22:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:50.792 02:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:50.792 [2024-07-11 02:22:41.203522] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:50.792 [2024-07-11 02:22:41.203547] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:50.792 [2024-07-11 02:22:41.203600] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:50.792 [2024-07-11 02:22:41.203650] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:50.792 [2024-07-11 02:22:41.203662] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9d5bc0 name Existed_Raid, state offline 00:16:51.051 02:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1913196 00:16:51.051 02:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 1913196 ']' 00:16:51.051 02:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 1913196 00:16:51.051 02:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:16:51.051 02:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:51.051 02:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1913196 00:16:51.051 02:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:51.051 02:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:51.051 02:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1913196' 00:16:51.051 killing process with pid 1913196 00:16:51.051 02:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 1913196 00:16:51.051 [2024-07-11 02:22:41.279117] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:51.051 02:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 1913196 00:16:51.051 [2024-07-11 02:22:41.305582] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:51.310 02:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:51.310 00:16:51.310 real 0m30.847s 00:16:51.310 user 0m56.668s 00:16:51.310 sys 0m5.506s 00:16:51.310 02:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:51.310 02:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.310 ************************************ 00:16:51.310 END TEST raid_state_function_test 00:16:51.310 ************************************ 00:16:51.310 02:22:41 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:51.310 02:22:41 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:16:51.310 02:22:41 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:51.310 02:22:41 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:51.310 02:22:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:51.310 ************************************ 00:16:51.310 START TEST 
raid_state_function_test_sb 00:16:51.310 ************************************ 00:16:51.310 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 true 00:16:51.310 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:16:51.310 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:16:51.310 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:51.310 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:51.310 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:51.310 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:51.310 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:51.310 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:51.310 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:51.310 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:51.310 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:51.310 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:51.310 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:51.310 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:51.310 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:51.310 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:51.310 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:51.310 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:51.310 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:51.310 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:51.310 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:51.310 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:16:51.310 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:51.310 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:51.311 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:51.311 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:51.311 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1917982 00:16:51.311 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1917982' 00:16:51.311 Process raid pid: 1917982 00:16:51.311 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r 
/var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:51.311 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1917982 /var/tmp/spdk-raid.sock 00:16:51.311 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1917982 ']' 00:16:51.311 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:51.311 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:51.311 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:51.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:51.311 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:51.311 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.311 [2024-07-11 02:22:41.656343] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:16:51.311 [2024-07-11 02:22:41.656403] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.569 [2024-07-11 02:22:41.781536] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.569 [2024-07-11 02:22:41.834565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.569 [2024-07-11 02:22:41.895945] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:51.569 [2024-07-11 02:22:41.895975] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:52.506 02:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:52.506 02:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:16:52.506 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:52.506 [2024-07-11 02:22:42.822348] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:52.506 [2024-07-11 02:22:42.822389] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:52.506 [2024-07-11 02:22:42.822400] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:52.506 [2024-07-11 02:22:42.822412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:52.506 [2024-07-11 02:22:42.822421] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:52.506 [2024-07-11 02:22:42.822432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:52.506 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:52.506 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:52.506 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:16:52.506 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:52.506 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:52.506 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:52.506 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:52.506 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:52.506 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:52.506 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:52.506 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.506 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.765 02:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:52.765 "name": "Existed_Raid", 00:16:52.765 "uuid": "f0a7fc73-87a1-4a1e-a3d0-e817aca7d400", 00:16:52.765 "strip_size_kb": 64, 00:16:52.765 "state": "configuring", 00:16:52.765 "raid_level": "raid0", 00:16:52.765 "superblock": true, 00:16:52.765 "num_base_bdevs": 3, 00:16:52.765 "num_base_bdevs_discovered": 0, 00:16:52.765 "num_base_bdevs_operational": 3, 00:16:52.765 "base_bdevs_list": [ 00:16:52.765 { 00:16:52.765 "name": "BaseBdev1", 00:16:52.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.765 "is_configured": false, 00:16:52.765 "data_offset": 0, 00:16:52.765 "data_size": 0 00:16:52.765 }, 00:16:52.765 { 00:16:52.765 "name": "BaseBdev2", 00:16:52.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.765 "is_configured": false, 00:16:52.765 "data_offset": 0, 00:16:52.765 "data_size": 0 00:16:52.765 }, 00:16:52.765 { 00:16:52.765 "name": "BaseBdev3", 00:16:52.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.765 "is_configured": false, 00:16:52.765 "data_offset": 0, 00:16:52.765 "data_size": 0 00:16:52.765 } 00:16:52.765 ] 00:16:52.765 }' 00:16:52.765 02:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:52.765 02:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.334 02:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:53.591 [2024-07-11 02:22:43.925105] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:53.591 [2024-07-11 02:22:43.925136] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbcb5a0 name Existed_Raid, state configuring 00:16:53.591 02:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:53.868 [2024-07-11 02:22:44.177803] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:53.868 [2024-07-11 02:22:44.177832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:16:53.868 [2024-07-11 02:22:44.177842] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:53.868 [2024-07-11 02:22:44.177853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:53.868 [2024-07-11 02:22:44.177862] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:53.868 [2024-07-11 02:22:44.177873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:53.868 02:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:54.181 [2024-07-11 02:22:44.436345] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:54.181 BaseBdev1 00:16:54.181 02:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:54.181 02:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:54.181 02:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:54.181 02:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:54.181 02:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:54.181 02:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:54.181 02:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:54.440 02:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:54.698 [ 00:16:54.698 { 00:16:54.698 "name": "BaseBdev1", 00:16:54.698 "aliases": [ 00:16:54.698 "4852419d-9b81-4288-b0fb-57a5a1013d20" 00:16:54.698 ], 00:16:54.698 "product_name": "Malloc disk", 00:16:54.698 "block_size": 512, 00:16:54.698 "num_blocks": 65536, 00:16:54.698 "uuid": "4852419d-9b81-4288-b0fb-57a5a1013d20", 00:16:54.698 "assigned_rate_limits": { 00:16:54.698 "rw_ios_per_sec": 0, 00:16:54.698 "rw_mbytes_per_sec": 0, 00:16:54.698 "r_mbytes_per_sec": 0, 00:16:54.698 "w_mbytes_per_sec": 0 00:16:54.698 }, 00:16:54.698 "claimed": true, 00:16:54.698 "claim_type": "exclusive_write", 00:16:54.698 "zoned": false, 00:16:54.698 "supported_io_types": { 00:16:54.698 "read": true, 00:16:54.698 "write": true, 00:16:54.698 "unmap": true, 00:16:54.698 "flush": true, 00:16:54.698 "reset": true, 00:16:54.698 "nvme_admin": false, 00:16:54.698 "nvme_io": false, 00:16:54.698 "nvme_io_md": false, 00:16:54.698 "write_zeroes": true, 00:16:54.698 "zcopy": true, 00:16:54.698 "get_zone_info": false, 00:16:54.699 "zone_management": false, 00:16:54.699 "zone_append": false, 00:16:54.699 "compare": false, 00:16:54.699 "compare_and_write": false, 00:16:54.699 "abort": true, 00:16:54.699 "seek_hole": false, 00:16:54.699 "seek_data": false, 00:16:54.699 "copy": true, 00:16:54.699 "nvme_iov_md": false 00:16:54.699 }, 00:16:54.699 "memory_domains": [ 00:16:54.699 { 00:16:54.699 "dma_device_id": "system", 00:16:54.699 "dma_device_type": 1 00:16:54.699 }, 00:16:54.699 { 00:16:54.699 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:54.699 "dma_device_type": 2 00:16:54.699 } 00:16:54.699 ], 00:16:54.699 "driver_specific": {} 00:16:54.699 } 00:16:54.699 ] 00:16:54.699 02:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:54.699 02:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:54.699 02:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:54.699 02:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:54.699 02:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:54.699 02:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:54.699 02:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:54.699 02:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:54.699 02:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:54.699 02:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:54.699 02:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:54.699 02:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.699 02:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.957 02:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:54.957 "name": "Existed_Raid", 00:16:54.957 "uuid": "c91a8415-bc7e-422f-a9cc-31fefa54019b", 00:16:54.957 "strip_size_kb": 64, 00:16:54.957 "state": "configuring", 00:16:54.957 "raid_level": "raid0", 00:16:54.957 "superblock": true, 00:16:54.957 "num_base_bdevs": 3, 00:16:54.957 "num_base_bdevs_discovered": 1, 00:16:54.957 "num_base_bdevs_operational": 3, 00:16:54.957 "base_bdevs_list": [ 00:16:54.957 { 00:16:54.957 "name": "BaseBdev1", 00:16:54.957 "uuid": "4852419d-9b81-4288-b0fb-57a5a1013d20", 00:16:54.957 "is_configured": true, 00:16:54.957 "data_offset": 2048, 00:16:54.957 "data_size": 63488 00:16:54.957 }, 00:16:54.957 { 00:16:54.957 "name": "BaseBdev2", 00:16:54.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.957 "is_configured": false, 00:16:54.958 "data_offset": 0, 00:16:54.958 "data_size": 0 00:16:54.958 }, 00:16:54.958 { 00:16:54.958 "name": "BaseBdev3", 00:16:54.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.958 "is_configured": false, 00:16:54.958 "data_offset": 0, 00:16:54.958 "data_size": 0 00:16:54.958 } 00:16:54.958 ] 00:16:54.958 }' 00:16:54.958 02:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:54.958 02:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.524 02:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:55.524 [2024-07-11 02:22:45.904255] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:55.524 
[2024-07-11 02:22:45.904296] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbcaed0 name Existed_Raid, state configuring 00:16:55.524 02:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:55.783 [2024-07-11 02:22:46.076753] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:55.783 [2024-07-11 02:22:46.078155] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:55.783 [2024-07-11 02:22:46.078187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:55.783 [2024-07-11 02:22:46.078198] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:55.783 [2024-07-11 02:22:46.078210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:55.783 02:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:55.783 02:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:55.783 02:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:55.783 02:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:55.783 02:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:55.783 02:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:55.783 02:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:55.783 02:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:55.783 02:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:55.783 02:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:55.783 02:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:55.783 02:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:55.783 02:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.783 02:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.042 02:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:56.042 "name": "Existed_Raid", 00:16:56.042 "uuid": "19fc4d6a-f1ad-4c8d-b064-af10f7b06f8a", 00:16:56.042 "strip_size_kb": 64, 00:16:56.042 "state": "configuring", 00:16:56.042 "raid_level": "raid0", 00:16:56.042 "superblock": true, 00:16:56.042 "num_base_bdevs": 3, 00:16:56.042 "num_base_bdevs_discovered": 1, 00:16:56.042 "num_base_bdevs_operational": 3, 00:16:56.042 "base_bdevs_list": [ 00:16:56.042 { 00:16:56.042 "name": "BaseBdev1", 00:16:56.042 "uuid": "4852419d-9b81-4288-b0fb-57a5a1013d20", 00:16:56.042 "is_configured": true, 00:16:56.042 "data_offset": 2048, 00:16:56.042 "data_size": 63488 00:16:56.042 }, 00:16:56.042 { 
00:16:56.042 "name": "BaseBdev2", 00:16:56.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.042 "is_configured": false, 00:16:56.042 "data_offset": 0, 00:16:56.042 "data_size": 0 00:16:56.042 }, 00:16:56.042 { 00:16:56.042 "name": "BaseBdev3", 00:16:56.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.042 "is_configured": false, 00:16:56.042 "data_offset": 0, 00:16:56.042 "data_size": 0 00:16:56.042 } 00:16:56.042 ] 00:16:56.042 }' 00:16:56.042 02:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:56.042 02:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.611 02:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:56.870 [2024-07-11 02:22:47.139043] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:56.870 BaseBdev2 00:16:56.870 02:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:56.870 02:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:56.870 02:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:56.870 02:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:56.870 02:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:56.870 02:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:56.870 02:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:57.128 02:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:57.388 [ 00:16:57.388 { 00:16:57.388 "name": "BaseBdev2", 00:16:57.388 "aliases": [ 00:16:57.388 "b1289d4b-2730-450f-9ba7-0278ca5f4c5d" 00:16:57.388 ], 00:16:57.388 "product_name": "Malloc disk", 00:16:57.388 "block_size": 512, 00:16:57.388 "num_blocks": 65536, 00:16:57.388 "uuid": "b1289d4b-2730-450f-9ba7-0278ca5f4c5d", 00:16:57.388 "assigned_rate_limits": { 00:16:57.388 "rw_ios_per_sec": 0, 00:16:57.388 "rw_mbytes_per_sec": 0, 00:16:57.388 "r_mbytes_per_sec": 0, 00:16:57.388 "w_mbytes_per_sec": 0 00:16:57.388 }, 00:16:57.388 "claimed": true, 00:16:57.388 "claim_type": "exclusive_write", 00:16:57.388 "zoned": false, 00:16:57.388 "supported_io_types": { 00:16:57.388 "read": true, 00:16:57.388 "write": true, 00:16:57.388 "unmap": true, 00:16:57.388 "flush": true, 00:16:57.388 "reset": true, 00:16:57.388 "nvme_admin": false, 00:16:57.388 "nvme_io": false, 00:16:57.388 "nvme_io_md": false, 00:16:57.388 "write_zeroes": true, 00:16:57.388 "zcopy": true, 00:16:57.388 "get_zone_info": false, 00:16:57.388 "zone_management": false, 00:16:57.388 "zone_append": false, 00:16:57.388 "compare": false, 00:16:57.388 "compare_and_write": false, 00:16:57.388 "abort": true, 00:16:57.388 "seek_hole": false, 00:16:57.388 "seek_data": false, 00:16:57.388 "copy": true, 00:16:57.388 "nvme_iov_md": false 00:16:57.388 }, 00:16:57.388 "memory_domains": [ 00:16:57.388 { 00:16:57.388 
"dma_device_id": "system", 00:16:57.388 "dma_device_type": 1 00:16:57.388 }, 00:16:57.388 { 00:16:57.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.388 "dma_device_type": 2 00:16:57.388 } 00:16:57.388 ], 00:16:57.388 "driver_specific": {} 00:16:57.388 } 00:16:57.388 ] 00:16:57.388 02:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:57.388 02:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:57.388 02:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:57.388 02:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:57.388 02:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:57.388 02:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:57.388 02:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:57.388 02:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:57.388 02:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:57.388 02:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:57.388 02:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:57.388 02:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:57.388 02:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:57.388 02:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.388 02:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.647 02:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:57.647 "name": "Existed_Raid", 00:16:57.647 "uuid": "19fc4d6a-f1ad-4c8d-b064-af10f7b06f8a", 00:16:57.647 "strip_size_kb": 64, 00:16:57.647 "state": "configuring", 00:16:57.647 "raid_level": "raid0", 00:16:57.647 "superblock": true, 00:16:57.647 "num_base_bdevs": 3, 00:16:57.647 "num_base_bdevs_discovered": 2, 00:16:57.647 "num_base_bdevs_operational": 3, 00:16:57.647 "base_bdevs_list": [ 00:16:57.647 { 00:16:57.647 "name": "BaseBdev1", 00:16:57.647 "uuid": "4852419d-9b81-4288-b0fb-57a5a1013d20", 00:16:57.648 "is_configured": true, 00:16:57.648 "data_offset": 2048, 00:16:57.648 "data_size": 63488 00:16:57.648 }, 00:16:57.648 { 00:16:57.648 "name": "BaseBdev2", 00:16:57.648 "uuid": "b1289d4b-2730-450f-9ba7-0278ca5f4c5d", 00:16:57.648 "is_configured": true, 00:16:57.648 "data_offset": 2048, 00:16:57.648 "data_size": 63488 00:16:57.648 }, 00:16:57.648 { 00:16:57.648 "name": "BaseBdev3", 00:16:57.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.648 "is_configured": false, 00:16:57.648 "data_offset": 0, 00:16:57.648 "data_size": 0 00:16:57.648 } 00:16:57.648 ] 00:16:57.648 }' 00:16:57.648 02:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:57.648 02:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:58.216 02:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:58.475 [2024-07-11 02:22:48.662636] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:58.475 [2024-07-11 02:22:48.662812] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xd7dcf0 00:16:58.475 [2024-07-11 02:22:48.662828] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:58.475 [2024-07-11 02:22:48.663000] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xbce640 00:16:58.475 [2024-07-11 02:22:48.663120] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd7dcf0 00:16:58.475 [2024-07-11 02:22:48.663130] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xd7dcf0 00:16:58.475 [2024-07-11 02:22:48.663221] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.475 BaseBdev3 00:16:58.475 02:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:58.475 02:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:58.475 02:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:58.475 02:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:58.475 02:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:58.475 02:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:58.475 02:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:58.734 02:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:58.994 [ 00:16:58.994 { 00:16:58.994 "name": "BaseBdev3", 00:16:58.994 "aliases": [ 00:16:58.994 "57bf3178-e73f-4561-b764-7a93253d1785" 00:16:58.994 ], 00:16:58.994 "product_name": "Malloc disk", 00:16:58.994 "block_size": 512, 00:16:58.994 "num_blocks": 65536, 00:16:58.994 "uuid": "57bf3178-e73f-4561-b764-7a93253d1785", 00:16:58.994 "assigned_rate_limits": { 00:16:58.994 "rw_ios_per_sec": 0, 00:16:58.994 "rw_mbytes_per_sec": 0, 00:16:58.994 "r_mbytes_per_sec": 0, 00:16:58.994 "w_mbytes_per_sec": 0 00:16:58.994 }, 00:16:58.994 "claimed": true, 00:16:58.994 "claim_type": "exclusive_write", 00:16:58.994 "zoned": false, 00:16:58.994 "supported_io_types": { 00:16:58.994 "read": true, 00:16:58.994 "write": true, 00:16:58.994 "unmap": true, 00:16:58.994 "flush": true, 00:16:58.994 "reset": true, 00:16:58.994 "nvme_admin": false, 00:16:58.994 "nvme_io": false, 00:16:58.994 "nvme_io_md": false, 00:16:58.994 "write_zeroes": true, 00:16:58.994 "zcopy": true, 00:16:58.994 "get_zone_info": false, 00:16:58.994 "zone_management": false, 00:16:58.994 "zone_append": false, 00:16:58.994 "compare": false, 00:16:58.994 "compare_and_write": false, 00:16:58.994 "abort": true, 00:16:58.994 "seek_hole": false, 00:16:58.994 "seek_data": false, 00:16:58.994 "copy": true, 00:16:58.994 "nvme_iov_md": false 
00:16:58.994 }, 00:16:58.994 "memory_domains": [ 00:16:58.994 { 00:16:58.994 "dma_device_id": "system", 00:16:58.994 "dma_device_type": 1 00:16:58.994 }, 00:16:58.994 { 00:16:58.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.994 "dma_device_type": 2 00:16:58.994 } 00:16:58.994 ], 00:16:58.994 "driver_specific": {} 00:16:58.994 } 00:16:58.994 ] 00:16:58.994 02:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:58.994 02:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:58.994 02:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:58.994 02:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:58.994 02:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:58.994 02:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:58.994 02:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:58.994 02:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:58.994 02:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:58.994 02:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:58.994 02:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:58.994 02:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:58.994 02:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:58.994 02:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.994 02:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.252 02:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:59.253 "name": "Existed_Raid", 00:16:59.253 "uuid": "19fc4d6a-f1ad-4c8d-b064-af10f7b06f8a", 00:16:59.253 "strip_size_kb": 64, 00:16:59.253 "state": "online", 00:16:59.253 "raid_level": "raid0", 00:16:59.253 "superblock": true, 00:16:59.253 "num_base_bdevs": 3, 00:16:59.253 "num_base_bdevs_discovered": 3, 00:16:59.253 "num_base_bdevs_operational": 3, 00:16:59.253 "base_bdevs_list": [ 00:16:59.253 { 00:16:59.253 "name": "BaseBdev1", 00:16:59.253 "uuid": "4852419d-9b81-4288-b0fb-57a5a1013d20", 00:16:59.253 "is_configured": true, 00:16:59.253 "data_offset": 2048, 00:16:59.253 "data_size": 63488 00:16:59.253 }, 00:16:59.253 { 00:16:59.253 "name": "BaseBdev2", 00:16:59.253 "uuid": "b1289d4b-2730-450f-9ba7-0278ca5f4c5d", 00:16:59.253 "is_configured": true, 00:16:59.253 "data_offset": 2048, 00:16:59.253 "data_size": 63488 00:16:59.253 }, 00:16:59.253 { 00:16:59.253 "name": "BaseBdev3", 00:16:59.253 "uuid": "57bf3178-e73f-4561-b764-7a93253d1785", 00:16:59.253 "is_configured": true, 00:16:59.253 "data_offset": 2048, 00:16:59.253 "data_size": 63488 00:16:59.253 } 00:16:59.253 ] 00:16:59.253 }' 00:16:59.253 02:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:59.253 02:22:49 
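With BaseBdev3 claimed the array flips to online, and the configure trace above reports blockcnt 190464. That figure follows from the malloc geometry in these dumps: each base bdev is created with 65536 blocks of 512 B (bdev_malloc_create 32 512), the -s superblock reserves the 2048-block data_offset, leaving data_size 63488 per member, and raid0 exposes the sum across all three. A one-line check using only numbers taken from this log:

    # 3 members x (65536 total - 2048 superblock offset) blocks per member
    echo $(( 3 * (65536 - 2048) ))   # 190464, the Raid Volume num_blocks below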
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.820 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:59.820 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:59.820 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:59.820 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:59.820 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:59.820 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:59.820 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:59.820 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:00.079 [2024-07-11 02:22:50.311319] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.079 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:00.079 "name": "Existed_Raid", 00:17:00.079 "aliases": [ 00:17:00.079 "19fc4d6a-f1ad-4c8d-b064-af10f7b06f8a" 00:17:00.079 ], 00:17:00.079 "product_name": "Raid Volume", 00:17:00.079 "block_size": 512, 00:17:00.079 "num_blocks": 190464, 00:17:00.079 "uuid": "19fc4d6a-f1ad-4c8d-b064-af10f7b06f8a", 00:17:00.079 "assigned_rate_limits": { 00:17:00.079 "rw_ios_per_sec": 0, 00:17:00.079 "rw_mbytes_per_sec": 0, 00:17:00.079 "r_mbytes_per_sec": 0, 00:17:00.079 "w_mbytes_per_sec": 0 00:17:00.079 }, 00:17:00.079 "claimed": false, 00:17:00.079 "zoned": false, 00:17:00.079 "supported_io_types": { 00:17:00.079 "read": true, 00:17:00.079 "write": true, 00:17:00.079 "unmap": true, 00:17:00.079 "flush": true, 00:17:00.079 "reset": true, 00:17:00.079 "nvme_admin": false, 00:17:00.079 "nvme_io": false, 00:17:00.079 "nvme_io_md": false, 00:17:00.079 "write_zeroes": true, 00:17:00.079 "zcopy": false, 00:17:00.079 "get_zone_info": false, 00:17:00.079 "zone_management": false, 00:17:00.079 "zone_append": false, 00:17:00.079 "compare": false, 00:17:00.079 "compare_and_write": false, 00:17:00.079 "abort": false, 00:17:00.079 "seek_hole": false, 00:17:00.079 "seek_data": false, 00:17:00.079 "copy": false, 00:17:00.079 "nvme_iov_md": false 00:17:00.079 }, 00:17:00.079 "memory_domains": [ 00:17:00.079 { 00:17:00.079 "dma_device_id": "system", 00:17:00.079 "dma_device_type": 1 00:17:00.079 }, 00:17:00.079 { 00:17:00.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.079 "dma_device_type": 2 00:17:00.079 }, 00:17:00.079 { 00:17:00.079 "dma_device_id": "system", 00:17:00.079 "dma_device_type": 1 00:17:00.079 }, 00:17:00.079 { 00:17:00.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.079 "dma_device_type": 2 00:17:00.079 }, 00:17:00.079 { 00:17:00.079 "dma_device_id": "system", 00:17:00.079 "dma_device_type": 1 00:17:00.079 }, 00:17:00.079 { 00:17:00.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.079 "dma_device_type": 2 00:17:00.079 } 00:17:00.079 ], 00:17:00.079 "driver_specific": { 00:17:00.079 "raid": { 00:17:00.079 "uuid": "19fc4d6a-f1ad-4c8d-b064-af10f7b06f8a", 00:17:00.079 "strip_size_kb": 64, 00:17:00.079 "state": "online", 00:17:00.079 "raid_level": "raid0", 00:17:00.079 "superblock": true, 00:17:00.079 
"num_base_bdevs": 3, 00:17:00.079 "num_base_bdevs_discovered": 3, 00:17:00.079 "num_base_bdevs_operational": 3, 00:17:00.079 "base_bdevs_list": [ 00:17:00.079 { 00:17:00.079 "name": "BaseBdev1", 00:17:00.079 "uuid": "4852419d-9b81-4288-b0fb-57a5a1013d20", 00:17:00.079 "is_configured": true, 00:17:00.079 "data_offset": 2048, 00:17:00.079 "data_size": 63488 00:17:00.079 }, 00:17:00.079 { 00:17:00.079 "name": "BaseBdev2", 00:17:00.079 "uuid": "b1289d4b-2730-450f-9ba7-0278ca5f4c5d", 00:17:00.079 "is_configured": true, 00:17:00.079 "data_offset": 2048, 00:17:00.079 "data_size": 63488 00:17:00.079 }, 00:17:00.079 { 00:17:00.079 "name": "BaseBdev3", 00:17:00.079 "uuid": "57bf3178-e73f-4561-b764-7a93253d1785", 00:17:00.079 "is_configured": true, 00:17:00.079 "data_offset": 2048, 00:17:00.079 "data_size": 63488 00:17:00.079 } 00:17:00.079 ] 00:17:00.079 } 00:17:00.079 } 00:17:00.079 }' 00:17:00.079 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:00.079 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:00.079 BaseBdev2 00:17:00.079 BaseBdev3' 00:17:00.079 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:00.079 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:00.079 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:00.339 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:00.339 "name": "BaseBdev1", 00:17:00.339 "aliases": [ 00:17:00.339 "4852419d-9b81-4288-b0fb-57a5a1013d20" 00:17:00.339 ], 00:17:00.339 "product_name": "Malloc disk", 00:17:00.339 "block_size": 512, 00:17:00.339 "num_blocks": 65536, 00:17:00.339 "uuid": "4852419d-9b81-4288-b0fb-57a5a1013d20", 00:17:00.339 "assigned_rate_limits": { 00:17:00.339 "rw_ios_per_sec": 0, 00:17:00.339 "rw_mbytes_per_sec": 0, 00:17:00.339 "r_mbytes_per_sec": 0, 00:17:00.339 "w_mbytes_per_sec": 0 00:17:00.339 }, 00:17:00.339 "claimed": true, 00:17:00.339 "claim_type": "exclusive_write", 00:17:00.339 "zoned": false, 00:17:00.339 "supported_io_types": { 00:17:00.339 "read": true, 00:17:00.339 "write": true, 00:17:00.339 "unmap": true, 00:17:00.339 "flush": true, 00:17:00.339 "reset": true, 00:17:00.339 "nvme_admin": false, 00:17:00.339 "nvme_io": false, 00:17:00.339 "nvme_io_md": false, 00:17:00.339 "write_zeroes": true, 00:17:00.339 "zcopy": true, 00:17:00.339 "get_zone_info": false, 00:17:00.339 "zone_management": false, 00:17:00.339 "zone_append": false, 00:17:00.339 "compare": false, 00:17:00.339 "compare_and_write": false, 00:17:00.339 "abort": true, 00:17:00.339 "seek_hole": false, 00:17:00.339 "seek_data": false, 00:17:00.339 "copy": true, 00:17:00.339 "nvme_iov_md": false 00:17:00.339 }, 00:17:00.339 "memory_domains": [ 00:17:00.339 { 00:17:00.339 "dma_device_id": "system", 00:17:00.339 "dma_device_type": 1 00:17:00.339 }, 00:17:00.339 { 00:17:00.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.339 "dma_device_type": 2 00:17:00.339 } 00:17:00.339 ], 00:17:00.339 "driver_specific": {} 00:17:00.339 }' 00:17:00.339 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.339 02:22:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.339 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:00.339 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.339 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.598 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:00.598 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:00.598 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:00.598 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:00.598 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.598 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.598 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:00.598 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:00.598 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:00.598 02:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:00.857 02:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:00.857 "name": "BaseBdev2", 00:17:00.857 "aliases": [ 00:17:00.857 "b1289d4b-2730-450f-9ba7-0278ca5f4c5d" 00:17:00.857 ], 00:17:00.857 "product_name": "Malloc disk", 00:17:00.857 "block_size": 512, 00:17:00.857 "num_blocks": 65536, 00:17:00.857 "uuid": "b1289d4b-2730-450f-9ba7-0278ca5f4c5d", 00:17:00.857 "assigned_rate_limits": { 00:17:00.857 "rw_ios_per_sec": 0, 00:17:00.857 "rw_mbytes_per_sec": 0, 00:17:00.857 "r_mbytes_per_sec": 0, 00:17:00.857 "w_mbytes_per_sec": 0 00:17:00.857 }, 00:17:00.857 "claimed": true, 00:17:00.857 "claim_type": "exclusive_write", 00:17:00.857 "zoned": false, 00:17:00.857 "supported_io_types": { 00:17:00.857 "read": true, 00:17:00.857 "write": true, 00:17:00.857 "unmap": true, 00:17:00.857 "flush": true, 00:17:00.857 "reset": true, 00:17:00.857 "nvme_admin": false, 00:17:00.857 "nvme_io": false, 00:17:00.857 "nvme_io_md": false, 00:17:00.857 "write_zeroes": true, 00:17:00.857 "zcopy": true, 00:17:00.857 "get_zone_info": false, 00:17:00.857 "zone_management": false, 00:17:00.857 "zone_append": false, 00:17:00.857 "compare": false, 00:17:00.857 "compare_and_write": false, 00:17:00.857 "abort": true, 00:17:00.857 "seek_hole": false, 00:17:00.857 "seek_data": false, 00:17:00.857 "copy": true, 00:17:00.857 "nvme_iov_md": false 00:17:00.857 }, 00:17:00.857 "memory_domains": [ 00:17:00.857 { 00:17:00.857 "dma_device_id": "system", 00:17:00.857 "dma_device_type": 1 00:17:00.857 }, 00:17:00.857 { 00:17:00.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.857 "dma_device_type": 2 00:17:00.857 } 00:17:00.857 ], 00:17:00.857 "driver_specific": {} 00:17:00.857 }' 00:17:00.857 02:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.857 02:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:01.116 02:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # 
[[ 512 == 512 ]] 00:17:01.116 02:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:01.117 02:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:01.117 02:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:01.117 02:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:01.117 02:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:01.117 02:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:01.117 02:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:01.376 02:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:01.376 02:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:01.376 02:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:01.376 02:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:01.376 02:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:01.635 02:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:01.635 "name": "BaseBdev3", 00:17:01.635 "aliases": [ 00:17:01.635 "57bf3178-e73f-4561-b764-7a93253d1785" 00:17:01.635 ], 00:17:01.635 "product_name": "Malloc disk", 00:17:01.635 "block_size": 512, 00:17:01.635 "num_blocks": 65536, 00:17:01.635 "uuid": "57bf3178-e73f-4561-b764-7a93253d1785", 00:17:01.635 "assigned_rate_limits": { 00:17:01.635 "rw_ios_per_sec": 0, 00:17:01.635 "rw_mbytes_per_sec": 0, 00:17:01.636 "r_mbytes_per_sec": 0, 00:17:01.636 "w_mbytes_per_sec": 0 00:17:01.636 }, 00:17:01.636 "claimed": true, 00:17:01.636 "claim_type": "exclusive_write", 00:17:01.636 "zoned": false, 00:17:01.636 "supported_io_types": { 00:17:01.636 "read": true, 00:17:01.636 "write": true, 00:17:01.636 "unmap": true, 00:17:01.636 "flush": true, 00:17:01.636 "reset": true, 00:17:01.636 "nvme_admin": false, 00:17:01.636 "nvme_io": false, 00:17:01.636 "nvme_io_md": false, 00:17:01.636 "write_zeroes": true, 00:17:01.636 "zcopy": true, 00:17:01.636 "get_zone_info": false, 00:17:01.636 "zone_management": false, 00:17:01.636 "zone_append": false, 00:17:01.636 "compare": false, 00:17:01.636 "compare_and_write": false, 00:17:01.636 "abort": true, 00:17:01.636 "seek_hole": false, 00:17:01.636 "seek_data": false, 00:17:01.636 "copy": true, 00:17:01.636 "nvme_iov_md": false 00:17:01.636 }, 00:17:01.636 "memory_domains": [ 00:17:01.636 { 00:17:01.636 "dma_device_id": "system", 00:17:01.636 "dma_device_type": 1 00:17:01.636 }, 00:17:01.636 { 00:17:01.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.636 "dma_device_type": 2 00:17:01.636 } 00:17:01.636 ], 00:17:01.636 "driver_specific": {} 00:17:01.636 }' 00:17:01.636 02:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:01.636 02:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:01.636 02:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:01.636 02:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:01.636 
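The per-bdev property checks running through here all reduce to [[ null == null ]] apart from block_size: plain malloc disks carry no metadata or DIF configuration, so md_size, md_interleave and dif_type are simply absent from the bdev_get_bdevs output, and jq prints null for a missing key. A minimal reproduction of that pattern, reusing the socket path from the trace:

    rpc='/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    info=$($rpc bdev_get_bdevs -b BaseBdev2 | jq '.[]')
    jq .block_size    <<< "$info"   # 512
    jq .md_size       <<< "$info"   # null -- key not present for a plain malloc disk
    jq .md_interleave <<< "$info"   # null
    jq .dif_type      <<< "$info"   # null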
02:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:01.636 02:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:01.636 02:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:01.636 02:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:01.895 02:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:01.895 02:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:01.895 02:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:01.895 02:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:01.895 02:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:02.152 [2024-07-11 02:22:52.400655] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:02.152 [2024-07-11 02:22:52.400684] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.152 [2024-07-11 02:22:52.400728] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.152 02:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:02.152 02:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:17:02.152 02:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:02.152 02:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:17:02.152 02:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:02.152 02:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:17:02.152 02:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:02.152 02:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:02.152 02:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:02.152 02:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:02.152 02:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:02.152 02:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:02.152 02:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:02.152 02:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:02.152 02:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:02.152 02:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.153 02:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.411 02:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:17:02.411 "name": "Existed_Raid", 00:17:02.411 "uuid": "19fc4d6a-f1ad-4c8d-b064-af10f7b06f8a", 00:17:02.411 "strip_size_kb": 64, 00:17:02.411 "state": "offline", 00:17:02.411 "raid_level": "raid0", 00:17:02.411 "superblock": true, 00:17:02.411 "num_base_bdevs": 3, 00:17:02.411 "num_base_bdevs_discovered": 2, 00:17:02.411 "num_base_bdevs_operational": 2, 00:17:02.411 "base_bdevs_list": [ 00:17:02.411 { 00:17:02.411 "name": null, 00:17:02.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.411 "is_configured": false, 00:17:02.411 "data_offset": 2048, 00:17:02.411 "data_size": 63488 00:17:02.411 }, 00:17:02.411 { 00:17:02.411 "name": "BaseBdev2", 00:17:02.411 "uuid": "b1289d4b-2730-450f-9ba7-0278ca5f4c5d", 00:17:02.411 "is_configured": true, 00:17:02.411 "data_offset": 2048, 00:17:02.411 "data_size": 63488 00:17:02.411 }, 00:17:02.411 { 00:17:02.411 "name": "BaseBdev3", 00:17:02.411 "uuid": "57bf3178-e73f-4561-b764-7a93253d1785", 00:17:02.411 "is_configured": true, 00:17:02.411 "data_offset": 2048, 00:17:02.411 "data_size": 63488 00:17:02.411 } 00:17:02.411 ] 00:17:02.411 }' 00:17:02.411 02:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:02.411 02:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.976 02:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:02.976 02:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:02.976 02:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.976 02:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:03.235 02:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:03.235 02:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:03.235 02:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:03.812 [2024-07-11 02:22:54.002810] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:03.812 02:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:03.812 02:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:03.812 02:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.812 02:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:04.071 02:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:04.071 02:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:04.071 02:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:04.330 [2024-07-11 02:22:54.520078] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:04.330 [2024-07-11 02:22:54.520121] 
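Deleting BaseBdev1 above takes the array offline rather than degraded: has_redundancy (bdev_raid.sh@213-215 in the trace) returns 1 for raid0, so the test switches its expected state to offline and then verifies the remaining 2-of-2 members with a null slot where BaseBdev1 was. A plausible reconstruction of that helper, matching the case/return lines visible in the trace; which raid levels count as redundant is an assumption, not something this log states:

    # reconstructed from the "case $1 in" / "return 1" trace lines above
    has_redundancy() {
        case $1 in
        raid1 | raid5f) return 0 ;;  # assumed redundant levels: can lose a member
        *) return 1 ;;               # raid0/concat cannot -> the array goes offline
        esac
    }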
bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd7dcf0 name Existed_Raid, state offline 00:17:04.330 02:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:04.330 02:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:04.330 02:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.330 02:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:04.588 02:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:04.588 02:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:04.588 02:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:17:04.588 02:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:04.588 02:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:04.588 02:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:04.846 BaseBdev2 00:17:04.846 02:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:04.846 02:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:04.846 02:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:04.846 02:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:04.846 02:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:04.846 02:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:04.846 02:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:05.105 02:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:05.364 [ 00:17:05.364 { 00:17:05.364 "name": "BaseBdev2", 00:17:05.364 "aliases": [ 00:17:05.364 "36c275bf-e5e7-4d88-9cd0-f09910da7064" 00:17:05.364 ], 00:17:05.364 "product_name": "Malloc disk", 00:17:05.364 "block_size": 512, 00:17:05.364 "num_blocks": 65536, 00:17:05.364 "uuid": "36c275bf-e5e7-4d88-9cd0-f09910da7064", 00:17:05.364 "assigned_rate_limits": { 00:17:05.364 "rw_ios_per_sec": 0, 00:17:05.364 "rw_mbytes_per_sec": 0, 00:17:05.364 "r_mbytes_per_sec": 0, 00:17:05.364 "w_mbytes_per_sec": 0 00:17:05.364 }, 00:17:05.364 "claimed": false, 00:17:05.364 "zoned": false, 00:17:05.364 "supported_io_types": { 00:17:05.364 "read": true, 00:17:05.364 "write": true, 00:17:05.364 "unmap": true, 00:17:05.364 "flush": true, 00:17:05.364 "reset": true, 00:17:05.364 "nvme_admin": false, 00:17:05.364 "nvme_io": false, 00:17:05.364 "nvme_io_md": false, 00:17:05.364 "write_zeroes": true, 00:17:05.364 "zcopy": true, 00:17:05.364 "get_zone_info": false, 00:17:05.364 "zone_management": false, 00:17:05.364 
"zone_append": false, 00:17:05.364 "compare": false, 00:17:05.364 "compare_and_write": false, 00:17:05.364 "abort": true, 00:17:05.364 "seek_hole": false, 00:17:05.364 "seek_data": false, 00:17:05.364 "copy": true, 00:17:05.364 "nvme_iov_md": false 00:17:05.364 }, 00:17:05.364 "memory_domains": [ 00:17:05.364 { 00:17:05.364 "dma_device_id": "system", 00:17:05.364 "dma_device_type": 1 00:17:05.364 }, 00:17:05.364 { 00:17:05.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.364 "dma_device_type": 2 00:17:05.364 } 00:17:05.364 ], 00:17:05.364 "driver_specific": {} 00:17:05.364 } 00:17:05.364 ] 00:17:05.364 02:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:05.364 02:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:05.364 02:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:05.364 02:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:05.623 BaseBdev3 00:17:05.623 02:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:05.623 02:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:05.623 02:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:05.623 02:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:05.623 02:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:05.623 02:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:05.623 02:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:05.883 02:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:05.883 [ 00:17:05.883 { 00:17:05.883 "name": "BaseBdev3", 00:17:05.883 "aliases": [ 00:17:05.883 "25127b2b-0958-4267-9605-9cd75c492c68" 00:17:05.883 ], 00:17:05.883 "product_name": "Malloc disk", 00:17:05.883 "block_size": 512, 00:17:05.883 "num_blocks": 65536, 00:17:05.883 "uuid": "25127b2b-0958-4267-9605-9cd75c492c68", 00:17:05.883 "assigned_rate_limits": { 00:17:05.883 "rw_ios_per_sec": 0, 00:17:05.883 "rw_mbytes_per_sec": 0, 00:17:05.883 "r_mbytes_per_sec": 0, 00:17:05.883 "w_mbytes_per_sec": 0 00:17:05.883 }, 00:17:05.883 "claimed": false, 00:17:05.883 "zoned": false, 00:17:05.883 "supported_io_types": { 00:17:05.883 "read": true, 00:17:05.883 "write": true, 00:17:05.883 "unmap": true, 00:17:05.883 "flush": true, 00:17:05.883 "reset": true, 00:17:05.883 "nvme_admin": false, 00:17:05.883 "nvme_io": false, 00:17:05.883 "nvme_io_md": false, 00:17:05.883 "write_zeroes": true, 00:17:05.883 "zcopy": true, 00:17:05.883 "get_zone_info": false, 00:17:05.883 "zone_management": false, 00:17:05.883 "zone_append": false, 00:17:05.883 "compare": false, 00:17:05.883 "compare_and_write": false, 00:17:05.883 "abort": true, 00:17:05.883 "seek_hole": false, 00:17:05.883 "seek_data": false, 00:17:05.883 "copy": true, 00:17:05.883 "nvme_iov_md": false 
00:17:05.883 }, 00:17:05.883 "memory_domains": [ 00:17:05.883 { 00:17:05.883 "dma_device_id": "system", 00:17:05.883 "dma_device_type": 1 00:17:05.883 }, 00:17:05.883 { 00:17:05.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.883 "dma_device_type": 2 00:17:05.883 } 00:17:05.883 ], 00:17:05.883 "driver_specific": {} 00:17:05.883 } 00:17:05.883 ] 00:17:05.883 02:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:05.883 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:05.883 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:05.883 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:06.143 [2024-07-11 02:22:56.515203] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:06.143 [2024-07-11 02:22:56.515245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:06.143 [2024-07-11 02:22:56.515264] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:06.143 [2024-07-11 02:22:56.516552] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:06.143 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:06.143 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:06.143 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:06.143 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:06.143 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:06.143 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:06.143 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:06.143 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:06.143 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:06.143 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:06.143 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.143 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.403 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:06.403 "name": "Existed_Raid", 00:17:06.403 "uuid": "e0111533-113f-426c-94cd-b65a2d45a30c", 00:17:06.403 "strip_size_kb": 64, 00:17:06.403 "state": "configuring", 00:17:06.403 "raid_level": "raid0", 00:17:06.403 "superblock": true, 00:17:06.403 "num_base_bdevs": 3, 00:17:06.403 "num_base_bdevs_discovered": 2, 00:17:06.403 "num_base_bdevs_operational": 3, 00:17:06.403 "base_bdevs_list": [ 00:17:06.403 { 00:17:06.403 "name": "BaseBdev1", 00:17:06.403 
"uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.403 "is_configured": false, 00:17:06.403 "data_offset": 0, 00:17:06.403 "data_size": 0 00:17:06.403 }, 00:17:06.403 { 00:17:06.403 "name": "BaseBdev2", 00:17:06.403 "uuid": "36c275bf-e5e7-4d88-9cd0-f09910da7064", 00:17:06.403 "is_configured": true, 00:17:06.403 "data_offset": 2048, 00:17:06.403 "data_size": 63488 00:17:06.403 }, 00:17:06.403 { 00:17:06.403 "name": "BaseBdev3", 00:17:06.403 "uuid": "25127b2b-0958-4267-9605-9cd75c492c68", 00:17:06.403 "is_configured": true, 00:17:06.403 "data_offset": 2048, 00:17:06.403 "data_size": 63488 00:17:06.403 } 00:17:06.403 ] 00:17:06.403 }' 00:17:06.403 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:06.403 02:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.982 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:07.241 [2024-07-11 02:22:57.526080] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:07.241 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:07.241 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:07.241 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:07.241 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:07.241 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:07.241 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:07.241 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:07.241 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:07.241 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:07.241 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:07.241 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.241 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.501 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:07.501 "name": "Existed_Raid", 00:17:07.501 "uuid": "e0111533-113f-426c-94cd-b65a2d45a30c", 00:17:07.501 "strip_size_kb": 64, 00:17:07.501 "state": "configuring", 00:17:07.501 "raid_level": "raid0", 00:17:07.501 "superblock": true, 00:17:07.501 "num_base_bdevs": 3, 00:17:07.501 "num_base_bdevs_discovered": 1, 00:17:07.501 "num_base_bdevs_operational": 3, 00:17:07.501 "base_bdevs_list": [ 00:17:07.501 { 00:17:07.501 "name": "BaseBdev1", 00:17:07.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.501 "is_configured": false, 00:17:07.501 "data_offset": 0, 00:17:07.501 "data_size": 0 00:17:07.501 }, 00:17:07.501 { 00:17:07.501 "name": null, 00:17:07.501 "uuid": "36c275bf-e5e7-4d88-9cd0-f09910da7064", 00:17:07.501 
"is_configured": false, 00:17:07.501 "data_offset": 2048, 00:17:07.501 "data_size": 63488 00:17:07.501 }, 00:17:07.501 { 00:17:07.501 "name": "BaseBdev3", 00:17:07.501 "uuid": "25127b2b-0958-4267-9605-9cd75c492c68", 00:17:07.501 "is_configured": true, 00:17:07.501 "data_offset": 2048, 00:17:07.501 "data_size": 63488 00:17:07.501 } 00:17:07.501 ] 00:17:07.501 }' 00:17:07.501 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:07.501 02:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.069 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.069 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:08.328 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:08.328 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:08.328 [2024-07-11 02:22:58.748626] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:08.328 BaseBdev1 00:17:08.588 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:08.588 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:08.588 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:08.588 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:08.588 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:08.588 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:08.588 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:08.588 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:08.847 [ 00:17:08.847 { 00:17:08.847 "name": "BaseBdev1", 00:17:08.847 "aliases": [ 00:17:08.847 "6085d1de-ad0a-4f72-85bc-921be6f17f27" 00:17:08.847 ], 00:17:08.847 "product_name": "Malloc disk", 00:17:08.847 "block_size": 512, 00:17:08.847 "num_blocks": 65536, 00:17:08.847 "uuid": "6085d1de-ad0a-4f72-85bc-921be6f17f27", 00:17:08.847 "assigned_rate_limits": { 00:17:08.847 "rw_ios_per_sec": 0, 00:17:08.847 "rw_mbytes_per_sec": 0, 00:17:08.847 "r_mbytes_per_sec": 0, 00:17:08.847 "w_mbytes_per_sec": 0 00:17:08.847 }, 00:17:08.847 "claimed": true, 00:17:08.847 "claim_type": "exclusive_write", 00:17:08.847 "zoned": false, 00:17:08.847 "supported_io_types": { 00:17:08.847 "read": true, 00:17:08.847 "write": true, 00:17:08.847 "unmap": true, 00:17:08.847 "flush": true, 00:17:08.847 "reset": true, 00:17:08.847 "nvme_admin": false, 00:17:08.847 "nvme_io": false, 00:17:08.847 "nvme_io_md": false, 00:17:08.847 "write_zeroes": true, 00:17:08.847 "zcopy": true, 00:17:08.847 "get_zone_info": false, 00:17:08.847 "zone_management": 
false, 00:17:08.847 "zone_append": false, 00:17:08.847 "compare": false, 00:17:08.847 "compare_and_write": false, 00:17:08.847 "abort": true, 00:17:08.847 "seek_hole": false, 00:17:08.847 "seek_data": false, 00:17:08.847 "copy": true, 00:17:08.847 "nvme_iov_md": false 00:17:08.847 }, 00:17:08.847 "memory_domains": [ 00:17:08.847 { 00:17:08.847 "dma_device_id": "system", 00:17:08.847 "dma_device_type": 1 00:17:08.847 }, 00:17:08.847 { 00:17:08.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.847 "dma_device_type": 2 00:17:08.847 } 00:17:08.847 ], 00:17:08.847 "driver_specific": {} 00:17:08.847 } 00:17:08.847 ] 00:17:08.847 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:08.847 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:08.847 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:08.847 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:08.847 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:08.847 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:08.847 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:08.847 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:08.847 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:08.847 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:08.847 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:09.107 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.107 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.107 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:09.107 "name": "Existed_Raid", 00:17:09.107 "uuid": "e0111533-113f-426c-94cd-b65a2d45a30c", 00:17:09.107 "strip_size_kb": 64, 00:17:09.107 "state": "configuring", 00:17:09.107 "raid_level": "raid0", 00:17:09.107 "superblock": true, 00:17:09.107 "num_base_bdevs": 3, 00:17:09.107 "num_base_bdevs_discovered": 2, 00:17:09.107 "num_base_bdevs_operational": 3, 00:17:09.107 "base_bdevs_list": [ 00:17:09.107 { 00:17:09.107 "name": "BaseBdev1", 00:17:09.107 "uuid": "6085d1de-ad0a-4f72-85bc-921be6f17f27", 00:17:09.107 "is_configured": true, 00:17:09.107 "data_offset": 2048, 00:17:09.107 "data_size": 63488 00:17:09.107 }, 00:17:09.107 { 00:17:09.107 "name": null, 00:17:09.107 "uuid": "36c275bf-e5e7-4d88-9cd0-f09910da7064", 00:17:09.107 "is_configured": false, 00:17:09.107 "data_offset": 2048, 00:17:09.107 "data_size": 63488 00:17:09.107 }, 00:17:09.107 { 00:17:09.107 "name": "BaseBdev3", 00:17:09.107 "uuid": "25127b2b-0958-4267-9605-9cd75c492c68", 00:17:09.107 "is_configured": true, 00:17:09.107 "data_offset": 2048, 00:17:09.107 "data_size": 63488 00:17:09.107 } 00:17:09.107 ] 00:17:09.107 }' 00:17:09.107 02:22:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:09.107 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.043 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.043 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:10.043 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:10.043 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:10.302 [2024-07-11 02:23:00.633647] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:10.302 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:10.302 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:10.302 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:10.302 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:10.302 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:10.302 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:10.302 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:10.302 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:10.302 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:10.302 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:10.302 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.302 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.562 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:10.562 "name": "Existed_Raid", 00:17:10.562 "uuid": "e0111533-113f-426c-94cd-b65a2d45a30c", 00:17:10.562 "strip_size_kb": 64, 00:17:10.562 "state": "configuring", 00:17:10.562 "raid_level": "raid0", 00:17:10.562 "superblock": true, 00:17:10.562 "num_base_bdevs": 3, 00:17:10.562 "num_base_bdevs_discovered": 1, 00:17:10.562 "num_base_bdevs_operational": 3, 00:17:10.562 "base_bdevs_list": [ 00:17:10.562 { 00:17:10.562 "name": "BaseBdev1", 00:17:10.562 "uuid": "6085d1de-ad0a-4f72-85bc-921be6f17f27", 00:17:10.562 "is_configured": true, 00:17:10.562 "data_offset": 2048, 00:17:10.562 "data_size": 63488 00:17:10.562 }, 00:17:10.562 { 00:17:10.562 "name": null, 00:17:10.562 "uuid": "36c275bf-e5e7-4d88-9cd0-f09910da7064", 00:17:10.562 "is_configured": false, 00:17:10.562 "data_offset": 2048, 00:17:10.562 "data_size": 63488 00:17:10.562 }, 00:17:10.562 { 00:17:10.562 "name": null, 00:17:10.562 "uuid": "25127b2b-0958-4267-9605-9cd75c492c68", 00:17:10.562 "is_configured": false, 
00:17:10.562 "data_offset": 2048, 00:17:10.562 "data_size": 63488 00:17:10.562 } 00:17:10.562 ] 00:17:10.562 }' 00:17:10.562 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:10.562 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.130 02:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:11.130 02:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.388 02:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:11.388 02:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:11.647 [2024-07-11 02:23:01.981241] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:11.647 02:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:11.647 02:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:11.647 02:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:11.647 02:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:11.647 02:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:11.647 02:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:11.647 02:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:11.647 02:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:11.647 02:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:11.647 02:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:11.647 02:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.647 02:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.906 02:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:11.906 "name": "Existed_Raid", 00:17:11.906 "uuid": "e0111533-113f-426c-94cd-b65a2d45a30c", 00:17:11.906 "strip_size_kb": 64, 00:17:11.906 "state": "configuring", 00:17:11.906 "raid_level": "raid0", 00:17:11.906 "superblock": true, 00:17:11.906 "num_base_bdevs": 3, 00:17:11.906 "num_base_bdevs_discovered": 2, 00:17:11.906 "num_base_bdevs_operational": 3, 00:17:11.906 "base_bdevs_list": [ 00:17:11.906 { 00:17:11.906 "name": "BaseBdev1", 00:17:11.906 "uuid": "6085d1de-ad0a-4f72-85bc-921be6f17f27", 00:17:11.906 "is_configured": true, 00:17:11.906 "data_offset": 2048, 00:17:11.906 "data_size": 63488 00:17:11.906 }, 00:17:11.906 { 00:17:11.906 "name": null, 00:17:11.906 "uuid": "36c275bf-e5e7-4d88-9cd0-f09910da7064", 00:17:11.906 "is_configured": false, 00:17:11.906 "data_offset": 
2048, 00:17:11.906 "data_size": 63488 00:17:11.906 }, 00:17:11.906 { 00:17:11.906 "name": "BaseBdev3", 00:17:11.906 "uuid": "25127b2b-0958-4267-9605-9cd75c492c68", 00:17:11.906 "is_configured": true, 00:17:11.906 "data_offset": 2048, 00:17:11.906 "data_size": 63488 00:17:11.906 } 00:17:11.906 ] 00:17:11.906 }' 00:17:11.906 02:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:11.906 02:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.473 02:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.473 02:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:12.732 02:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:12.732 02:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:12.991 [2024-07-11 02:23:03.332861] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:12.991 02:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:12.991 02:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:12.991 02:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:12.991 02:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:12.991 02:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:12.991 02:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:12.991 02:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:12.991 02:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:12.991 02:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:12.991 02:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:12.991 02:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.991 02:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.250 02:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:13.250 "name": "Existed_Raid", 00:17:13.250 "uuid": "e0111533-113f-426c-94cd-b65a2d45a30c", 00:17:13.250 "strip_size_kb": 64, 00:17:13.250 "state": "configuring", 00:17:13.250 "raid_level": "raid0", 00:17:13.250 "superblock": true, 00:17:13.250 "num_base_bdevs": 3, 00:17:13.250 "num_base_bdevs_discovered": 1, 00:17:13.250 "num_base_bdevs_operational": 3, 00:17:13.250 "base_bdevs_list": [ 00:17:13.250 { 00:17:13.250 "name": null, 00:17:13.250 "uuid": "6085d1de-ad0a-4f72-85bc-921be6f17f27", 00:17:13.250 "is_configured": false, 00:17:13.250 "data_offset": 2048, 00:17:13.250 "data_size": 63488 00:17:13.250 }, 00:17:13.250 
{ 00:17:13.250 "name": null, 00:17:13.250 "uuid": "36c275bf-e5e7-4d88-9cd0-f09910da7064", 00:17:13.250 "is_configured": false, 00:17:13.250 "data_offset": 2048, 00:17:13.250 "data_size": 63488 00:17:13.250 }, 00:17:13.250 { 00:17:13.250 "name": "BaseBdev3", 00:17:13.250 "uuid": "25127b2b-0958-4267-9605-9cd75c492c68", 00:17:13.250 "is_configured": true, 00:17:13.250 "data_offset": 2048, 00:17:13.250 "data_size": 63488 00:17:13.250 } 00:17:13.250 ] 00:17:13.250 }' 00:17:13.250 02:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:13.250 02:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.818 02:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.818 02:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:14.077 02:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:14.077 02:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:14.337 [2024-07-11 02:23:04.676514] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:14.337 02:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:14.337 02:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:14.337 02:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:14.337 02:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:14.337 02:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:14.337 02:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:14.337 02:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:14.337 02:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:14.337 02:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:14.337 02:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:14.337 02:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.337 02:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.596 02:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:14.596 "name": "Existed_Raid", 00:17:14.596 "uuid": "e0111533-113f-426c-94cd-b65a2d45a30c", 00:17:14.596 "strip_size_kb": 64, 00:17:14.596 "state": "configuring", 00:17:14.596 "raid_level": "raid0", 00:17:14.596 "superblock": true, 00:17:14.596 "num_base_bdevs": 3, 00:17:14.596 "num_base_bdevs_discovered": 2, 00:17:14.596 "num_base_bdevs_operational": 3, 00:17:14.596 "base_bdevs_list": [ 00:17:14.596 { 00:17:14.596 "name": 
null, 00:17:14.596 "uuid": "6085d1de-ad0a-4f72-85bc-921be6f17f27", 00:17:14.596 "is_configured": false, 00:17:14.596 "data_offset": 2048, 00:17:14.596 "data_size": 63488 00:17:14.596 }, 00:17:14.596 { 00:17:14.596 "name": "BaseBdev2", 00:17:14.596 "uuid": "36c275bf-e5e7-4d88-9cd0-f09910da7064", 00:17:14.596 "is_configured": true, 00:17:14.596 "data_offset": 2048, 00:17:14.596 "data_size": 63488 00:17:14.596 }, 00:17:14.596 { 00:17:14.596 "name": "BaseBdev3", 00:17:14.596 "uuid": "25127b2b-0958-4267-9605-9cd75c492c68", 00:17:14.596 "is_configured": true, 00:17:14.596 "data_offset": 2048, 00:17:14.596 "data_size": 63488 00:17:14.596 } 00:17:14.596 ] 00:17:14.596 }' 00:17:14.596 02:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:14.596 02:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.163 02:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.163 02:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:15.476 02:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:15.476 02:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.476 02:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:15.750 02:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 6085d1de-ad0a-4f72-85bc-921be6f17f27 00:17:16.008 [2024-07-11 02:23:06.207936] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:16.008 [2024-07-11 02:23:06.208082] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xbcea70 00:17:16.008 [2024-07-11 02:23:06.208095] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:16.008 [2024-07-11 02:23:06.208270] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xbca850 00:17:16.008 [2024-07-11 02:23:06.208391] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xbcea70 00:17:16.008 [2024-07-11 02:23:06.208401] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xbcea70 00:17:16.008 [2024-07-11 02:23:06.208490] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.008 NewBaseBdev 00:17:16.008 02:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:16.008 02:23:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:17:16.008 02:23:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:16.008 02:23:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:16.008 02:23:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:16.008 02:23:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:16.008 02:23:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:16.267 02:23:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:16.526 [ 00:17:16.526 { 00:17:16.526 "name": "NewBaseBdev", 00:17:16.526 "aliases": [ 00:17:16.526 "6085d1de-ad0a-4f72-85bc-921be6f17f27" 00:17:16.526 ], 00:17:16.526 "product_name": "Malloc disk", 00:17:16.526 "block_size": 512, 00:17:16.526 "num_blocks": 65536, 00:17:16.526 "uuid": "6085d1de-ad0a-4f72-85bc-921be6f17f27", 00:17:16.526 "assigned_rate_limits": { 00:17:16.526 "rw_ios_per_sec": 0, 00:17:16.526 "rw_mbytes_per_sec": 0, 00:17:16.526 "r_mbytes_per_sec": 0, 00:17:16.526 "w_mbytes_per_sec": 0 00:17:16.526 }, 00:17:16.526 "claimed": true, 00:17:16.526 "claim_type": "exclusive_write", 00:17:16.526 "zoned": false, 00:17:16.526 "supported_io_types": { 00:17:16.526 "read": true, 00:17:16.526 "write": true, 00:17:16.526 "unmap": true, 00:17:16.526 "flush": true, 00:17:16.526 "reset": true, 00:17:16.526 "nvme_admin": false, 00:17:16.526 "nvme_io": false, 00:17:16.526 "nvme_io_md": false, 00:17:16.526 "write_zeroes": true, 00:17:16.526 "zcopy": true, 00:17:16.526 "get_zone_info": false, 00:17:16.526 "zone_management": false, 00:17:16.526 "zone_append": false, 00:17:16.526 "compare": false, 00:17:16.526 "compare_and_write": false, 00:17:16.526 "abort": true, 00:17:16.526 "seek_hole": false, 00:17:16.526 "seek_data": false, 00:17:16.526 "copy": true, 00:17:16.526 "nvme_iov_md": false 00:17:16.526 }, 00:17:16.526 "memory_domains": [ 00:17:16.526 { 00:17:16.526 "dma_device_id": "system", 00:17:16.526 "dma_device_type": 1 00:17:16.526 }, 00:17:16.526 { 00:17:16.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.526 "dma_device_type": 2 00:17:16.526 } 00:17:16.526 ], 00:17:16.526 "driver_specific": {} 00:17:16.526 } 00:17:16.526 ] 00:17:16.526 02:23:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:16.526 02:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:16.526 02:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:16.526 02:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:16.526 02:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:16.526 02:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:16.526 02:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:16.526 02:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:16.526 02:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:16.526 02:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:16.526 02:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:16.526 02:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:17:16.526 02:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:16.526 02:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:16.526 "name": "Existed_Raid", 00:17:16.526 "uuid": "e0111533-113f-426c-94cd-b65a2d45a30c", 00:17:16.526 "strip_size_kb": 64, 00:17:16.526 "state": "online", 00:17:16.526 "raid_level": "raid0", 00:17:16.526 "superblock": true, 00:17:16.526 "num_base_bdevs": 3, 00:17:16.526 "num_base_bdevs_discovered": 3, 00:17:16.526 "num_base_bdevs_operational": 3, 00:17:16.526 "base_bdevs_list": [ 00:17:16.526 { 00:17:16.526 "name": "NewBaseBdev", 00:17:16.526 "uuid": "6085d1de-ad0a-4f72-85bc-921be6f17f27", 00:17:16.526 "is_configured": true, 00:17:16.526 "data_offset": 2048, 00:17:16.526 "data_size": 63488 00:17:16.526 }, 00:17:16.526 { 00:17:16.526 "name": "BaseBdev2", 00:17:16.526 "uuid": "36c275bf-e5e7-4d88-9cd0-f09910da7064", 00:17:16.526 "is_configured": true, 00:17:16.526 "data_offset": 2048, 00:17:16.526 "data_size": 63488 00:17:16.526 }, 00:17:16.526 { 00:17:16.526 "name": "BaseBdev3", 00:17:16.526 "uuid": "25127b2b-0958-4267-9605-9cd75c492c68", 00:17:16.526 "is_configured": true, 00:17:16.526 "data_offset": 2048, 00:17:16.526 "data_size": 63488 00:17:16.526 } 00:17:16.526 ] 00:17:16.526 }' 00:17:16.526 02:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:16.526 02:23:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.094 02:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:17.094 02:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:17.094 02:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:17.094 02:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:17.094 02:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:17.094 02:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:17.094 02:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:17.094 02:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:17.354 [2024-07-11 02:23:07.620008] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:17.354 02:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:17.354 "name": "Existed_Raid", 00:17:17.354 "aliases": [ 00:17:17.354 "e0111533-113f-426c-94cd-b65a2d45a30c" 00:17:17.354 ], 00:17:17.354 "product_name": "Raid Volume", 00:17:17.354 "block_size": 512, 00:17:17.354 "num_blocks": 190464, 00:17:17.354 "uuid": "e0111533-113f-426c-94cd-b65a2d45a30c", 00:17:17.354 "assigned_rate_limits": { 00:17:17.354 "rw_ios_per_sec": 0, 00:17:17.354 "rw_mbytes_per_sec": 0, 00:17:17.354 "r_mbytes_per_sec": 0, 00:17:17.354 "w_mbytes_per_sec": 0 00:17:17.354 }, 00:17:17.354 "claimed": false, 00:17:17.354 "zoned": false, 00:17:17.354 "supported_io_types": { 00:17:17.354 "read": true, 00:17:17.354 "write": true, 00:17:17.354 "unmap": true, 00:17:17.354 "flush": true, 00:17:17.354 "reset": true, 
00:17:17.354 "nvme_admin": false, 00:17:17.354 "nvme_io": false, 00:17:17.354 "nvme_io_md": false, 00:17:17.354 "write_zeroes": true, 00:17:17.354 "zcopy": false, 00:17:17.354 "get_zone_info": false, 00:17:17.354 "zone_management": false, 00:17:17.354 "zone_append": false, 00:17:17.354 "compare": false, 00:17:17.354 "compare_and_write": false, 00:17:17.354 "abort": false, 00:17:17.354 "seek_hole": false, 00:17:17.354 "seek_data": false, 00:17:17.354 "copy": false, 00:17:17.354 "nvme_iov_md": false 00:17:17.354 }, 00:17:17.354 "memory_domains": [ 00:17:17.354 { 00:17:17.354 "dma_device_id": "system", 00:17:17.354 "dma_device_type": 1 00:17:17.354 }, 00:17:17.354 { 00:17:17.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.354 "dma_device_type": 2 00:17:17.354 }, 00:17:17.354 { 00:17:17.354 "dma_device_id": "system", 00:17:17.354 "dma_device_type": 1 00:17:17.354 }, 00:17:17.354 { 00:17:17.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.354 "dma_device_type": 2 00:17:17.354 }, 00:17:17.354 { 00:17:17.354 "dma_device_id": "system", 00:17:17.354 "dma_device_type": 1 00:17:17.354 }, 00:17:17.354 { 00:17:17.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.354 "dma_device_type": 2 00:17:17.354 } 00:17:17.354 ], 00:17:17.354 "driver_specific": { 00:17:17.354 "raid": { 00:17:17.354 "uuid": "e0111533-113f-426c-94cd-b65a2d45a30c", 00:17:17.354 "strip_size_kb": 64, 00:17:17.354 "state": "online", 00:17:17.354 "raid_level": "raid0", 00:17:17.354 "superblock": true, 00:17:17.354 "num_base_bdevs": 3, 00:17:17.354 "num_base_bdevs_discovered": 3, 00:17:17.354 "num_base_bdevs_operational": 3, 00:17:17.354 "base_bdevs_list": [ 00:17:17.354 { 00:17:17.354 "name": "NewBaseBdev", 00:17:17.354 "uuid": "6085d1de-ad0a-4f72-85bc-921be6f17f27", 00:17:17.354 "is_configured": true, 00:17:17.354 "data_offset": 2048, 00:17:17.354 "data_size": 63488 00:17:17.354 }, 00:17:17.354 { 00:17:17.354 "name": "BaseBdev2", 00:17:17.354 "uuid": "36c275bf-e5e7-4d88-9cd0-f09910da7064", 00:17:17.354 "is_configured": true, 00:17:17.354 "data_offset": 2048, 00:17:17.354 "data_size": 63488 00:17:17.354 }, 00:17:17.354 { 00:17:17.354 "name": "BaseBdev3", 00:17:17.355 "uuid": "25127b2b-0958-4267-9605-9cd75c492c68", 00:17:17.355 "is_configured": true, 00:17:17.355 "data_offset": 2048, 00:17:17.355 "data_size": 63488 00:17:17.355 } 00:17:17.355 ] 00:17:17.355 } 00:17:17.355 } 00:17:17.355 }' 00:17:17.355 02:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:17.355 02:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:17.355 BaseBdev2 00:17:17.355 BaseBdev3' 00:17:17.355 02:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:17.355 02:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:17.355 02:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:17.614 02:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:17.614 "name": "NewBaseBdev", 00:17:17.614 "aliases": [ 00:17:17.614 "6085d1de-ad0a-4f72-85bc-921be6f17f27" 00:17:17.614 ], 00:17:17.614 "product_name": "Malloc disk", 00:17:17.614 "block_size": 512, 00:17:17.614 "num_blocks": 65536, 00:17:17.614 "uuid": 
"6085d1de-ad0a-4f72-85bc-921be6f17f27", 00:17:17.614 "assigned_rate_limits": { 00:17:17.614 "rw_ios_per_sec": 0, 00:17:17.614 "rw_mbytes_per_sec": 0, 00:17:17.614 "r_mbytes_per_sec": 0, 00:17:17.614 "w_mbytes_per_sec": 0 00:17:17.614 }, 00:17:17.614 "claimed": true, 00:17:17.614 "claim_type": "exclusive_write", 00:17:17.614 "zoned": false, 00:17:17.614 "supported_io_types": { 00:17:17.614 "read": true, 00:17:17.614 "write": true, 00:17:17.614 "unmap": true, 00:17:17.614 "flush": true, 00:17:17.614 "reset": true, 00:17:17.614 "nvme_admin": false, 00:17:17.614 "nvme_io": false, 00:17:17.614 "nvme_io_md": false, 00:17:17.614 "write_zeroes": true, 00:17:17.614 "zcopy": true, 00:17:17.614 "get_zone_info": false, 00:17:17.614 "zone_management": false, 00:17:17.614 "zone_append": false, 00:17:17.614 "compare": false, 00:17:17.614 "compare_and_write": false, 00:17:17.614 "abort": true, 00:17:17.614 "seek_hole": false, 00:17:17.614 "seek_data": false, 00:17:17.614 "copy": true, 00:17:17.614 "nvme_iov_md": false 00:17:17.614 }, 00:17:17.614 "memory_domains": [ 00:17:17.614 { 00:17:17.614 "dma_device_id": "system", 00:17:17.614 "dma_device_type": 1 00:17:17.614 }, 00:17:17.614 { 00:17:17.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.614 "dma_device_type": 2 00:17:17.614 } 00:17:17.614 ], 00:17:17.614 "driver_specific": {} 00:17:17.614 }' 00:17:17.614 02:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:17.614 02:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:17.614 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:17.614 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:17.873 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:17.873 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:17.873 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:17.873 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:17.873 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:17.873 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:17.873 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:18.132 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:18.132 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:18.132 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:18.132 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:18.132 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:18.132 "name": "BaseBdev2", 00:17:18.132 "aliases": [ 00:17:18.132 "36c275bf-e5e7-4d88-9cd0-f09910da7064" 00:17:18.132 ], 00:17:18.132 "product_name": "Malloc disk", 00:17:18.132 "block_size": 512, 00:17:18.132 "num_blocks": 65536, 00:17:18.132 "uuid": "36c275bf-e5e7-4d88-9cd0-f09910da7064", 00:17:18.132 "assigned_rate_limits": { 00:17:18.132 "rw_ios_per_sec": 0, 00:17:18.132 
"rw_mbytes_per_sec": 0, 00:17:18.132 "r_mbytes_per_sec": 0, 00:17:18.132 "w_mbytes_per_sec": 0 00:17:18.132 }, 00:17:18.132 "claimed": true, 00:17:18.132 "claim_type": "exclusive_write", 00:17:18.132 "zoned": false, 00:17:18.132 "supported_io_types": { 00:17:18.132 "read": true, 00:17:18.132 "write": true, 00:17:18.132 "unmap": true, 00:17:18.132 "flush": true, 00:17:18.132 "reset": true, 00:17:18.132 "nvme_admin": false, 00:17:18.132 "nvme_io": false, 00:17:18.132 "nvme_io_md": false, 00:17:18.132 "write_zeroes": true, 00:17:18.132 "zcopy": true, 00:17:18.132 "get_zone_info": false, 00:17:18.132 "zone_management": false, 00:17:18.132 "zone_append": false, 00:17:18.132 "compare": false, 00:17:18.132 "compare_and_write": false, 00:17:18.132 "abort": true, 00:17:18.133 "seek_hole": false, 00:17:18.133 "seek_data": false, 00:17:18.133 "copy": true, 00:17:18.133 "nvme_iov_md": false 00:17:18.133 }, 00:17:18.133 "memory_domains": [ 00:17:18.133 { 00:17:18.133 "dma_device_id": "system", 00:17:18.133 "dma_device_type": 1 00:17:18.133 }, 00:17:18.133 { 00:17:18.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.133 "dma_device_type": 2 00:17:18.133 } 00:17:18.133 ], 00:17:18.133 "driver_specific": {} 00:17:18.133 }' 00:17:18.133 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:18.391 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:18.391 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:18.391 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:18.391 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:18.391 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:18.391 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:18.391 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:18.650 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:18.650 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:18.650 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:18.650 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:18.650 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:18.650 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:18.650 02:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:18.910 02:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:18.910 "name": "BaseBdev3", 00:17:18.910 "aliases": [ 00:17:18.910 "25127b2b-0958-4267-9605-9cd75c492c68" 00:17:18.910 ], 00:17:18.910 "product_name": "Malloc disk", 00:17:18.910 "block_size": 512, 00:17:18.910 "num_blocks": 65536, 00:17:18.910 "uuid": "25127b2b-0958-4267-9605-9cd75c492c68", 00:17:18.910 "assigned_rate_limits": { 00:17:18.910 "rw_ios_per_sec": 0, 00:17:18.910 "rw_mbytes_per_sec": 0, 00:17:18.910 "r_mbytes_per_sec": 0, 00:17:18.910 "w_mbytes_per_sec": 0 00:17:18.910 }, 00:17:18.910 
"claimed": true, 00:17:18.910 "claim_type": "exclusive_write", 00:17:18.910 "zoned": false, 00:17:18.910 "supported_io_types": { 00:17:18.910 "read": true, 00:17:18.910 "write": true, 00:17:18.910 "unmap": true, 00:17:18.910 "flush": true, 00:17:18.910 "reset": true, 00:17:18.910 "nvme_admin": false, 00:17:18.910 "nvme_io": false, 00:17:18.910 "nvme_io_md": false, 00:17:18.910 "write_zeroes": true, 00:17:18.910 "zcopy": true, 00:17:18.910 "get_zone_info": false, 00:17:18.910 "zone_management": false, 00:17:18.910 "zone_append": false, 00:17:18.910 "compare": false, 00:17:18.910 "compare_and_write": false, 00:17:18.910 "abort": true, 00:17:18.910 "seek_hole": false, 00:17:18.910 "seek_data": false, 00:17:18.910 "copy": true, 00:17:18.910 "nvme_iov_md": false 00:17:18.910 }, 00:17:18.910 "memory_domains": [ 00:17:18.910 { 00:17:18.910 "dma_device_id": "system", 00:17:18.910 "dma_device_type": 1 00:17:18.910 }, 00:17:18.910 { 00:17:18.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.910 "dma_device_type": 2 00:17:18.910 } 00:17:18.910 ], 00:17:18.910 "driver_specific": {} 00:17:18.910 }' 00:17:18.910 02:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:18.910 02:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:18.910 02:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:18.910 02:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:18.910 02:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:19.169 02:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:19.169 02:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:19.169 02:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:19.169 02:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:19.169 02:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:19.169 02:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:19.169 02:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:19.169 02:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:19.429 [2024-07-11 02:23:09.733543] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:19.429 [2024-07-11 02:23:09.733574] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:19.429 [2024-07-11 02:23:09.733628] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.429 [2024-07-11 02:23:09.733681] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:19.429 [2024-07-11 02:23:09.733693] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbcea70 name Existed_Raid, state offline 00:17:19.429 02:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1917982 00:17:19.429 02:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1917982 ']' 00:17:19.429 02:23:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@952 -- # kill -0 1917982 00:17:19.429 02:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:17:19.429 02:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:19.429 02:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1917982 00:17:19.429 02:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:19.429 02:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:19.429 02:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1917982' 00:17:19.429 killing process with pid 1917982 00:17:19.429 02:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 1917982 00:17:19.429 [2024-07-11 02:23:09.800160] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:19.429 02:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 1917982 00:17:19.429 [2024-07-11 02:23:09.825649] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:19.688 02:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:17:19.688 00:17:19.688 real 0m28.422s 00:17:19.688 user 0m51.960s 00:17:19.688 sys 0m5.292s 00:17:19.688 02:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:19.688 02:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.688 ************************************ 00:17:19.688 END TEST raid_state_function_test_sb 00:17:19.688 ************************************ 00:17:19.688 02:23:10 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:19.688 02:23:10 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:17:19.688 02:23:10 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:19.688 02:23:10 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:19.688 02:23:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:19.688 ************************************ 00:17:19.688 START TEST raid_superblock_test 00:17:19.688 ************************************ 00:17:19.688 02:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 3 00:17:19.688 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:17:19.688 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:17:19.688 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:17:19.688 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:17:19.688 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:17:19.688 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:17:19.689 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:17:19.689 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:17:19.689 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:17:19.948 02:23:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local strip_size 00:17:19.948 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:17:19.948 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:17:19.948 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:17:19.948 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:17:19.948 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:17:19.948 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:17:19.948 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1922221 00:17:19.948 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1922221 /var/tmp/spdk-raid.sock 00:17:19.948 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:19.948 02:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 1922221 ']' 00:17:19.948 02:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:19.948 02:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:19.948 02:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:19.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:19.948 02:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:19.948 02:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.948 [2024-07-11 02:23:10.171119] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
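The superblock test above launches a fresh bdev_svc app with its own RPC socket (-r /var/tmp/spdk-raid.sock -L bdev_raid) and then blocks in waitforlisten until that socket answers. A minimal sketch of such a wait loop, assuming only a POSIX shell and SPDK's stock rpc.py (the real waitforlisten helper in autotest_common.sh does more, e.g. PID liveness checks):

    # Poll until the app's UNIX-domain RPC socket exists and answers a cheap RPC.
    # rpc_get_methods is a standard SPDK RPC, so it works before any bdevs exist.
    rpc_sock=/var/tmp/spdk-raid.sock
    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        if [ -S "$rpc_sock" ] && "$rpc" -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done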
00:17:19.948 [2024-07-11 02:23:10.171181] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1922221 ] 00:17:19.948 [2024-07-11 02:23:10.301456] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.948 [2024-07-11 02:23:10.349767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.207 [2024-07-11 02:23:10.412756] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:20.207 [2024-07-11 02:23:10.412801] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:20.207 02:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:20.207 02:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:17:20.207 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:17:20.207 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:20.207 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:17:20.207 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:17:20.207 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:20.207 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:20.207 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:20.207 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:20.207 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:20.467 malloc1 00:17:20.467 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:20.467 [2024-07-11 02:23:10.800122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:20.467 [2024-07-11 02:23:10.800173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.467 [2024-07-11 02:23:10.800198] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1539de0 00:17:20.467 [2024-07-11 02:23:10.800211] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.467 [2024-07-11 02:23:10.801717] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.467 [2024-07-11 02:23:10.801744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:20.467 pt1 00:17:20.467 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:20.467 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:20.467 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:17:20.467 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:17:20.467 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # 
local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:20.467 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:20.467 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:20.467 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:20.467 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:20.726 malloc2 00:17:20.726 02:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:20.726 [2024-07-11 02:23:11.149686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:20.726 [2024-07-11 02:23:11.149727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.726 [2024-07-11 02:23:11.149743] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1531380 00:17:20.726 [2024-07-11 02:23:11.149756] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.985 [2024-07-11 02:23:11.151076] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.985 [2024-07-11 02:23:11.151103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:20.985 pt2 00:17:20.985 02:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:20.985 02:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:20.985 02:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:17:20.985 02:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:17:20.985 02:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:20.985 02:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:20.985 02:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:20.985 02:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:20.985 02:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:20.985 malloc3 00:17:20.985 02:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:21.244 [2024-07-11 02:23:11.515047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:21.244 [2024-07-11 02:23:11.515091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.244 [2024-07-11 02:23:11.515107] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1533fb0 00:17:21.244 [2024-07-11 02:23:11.515120] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.244 [2024-07-11 02:23:11.516518] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.244 [2024-07-11 02:23:11.516552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:21.244 pt3 00:17:21.244 02:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:21.244 02:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:21.244 02:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:21.502 [2024-07-11 02:23:11.779771] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:21.502 [2024-07-11 02:23:11.781058] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:21.502 [2024-07-11 02:23:11.781112] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:21.502 [2024-07-11 02:23:11.781255] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x15362d0 00:17:21.502 [2024-07-11 02:23:11.781266] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:21.502 [2024-07-11 02:23:11.781462] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1531d40 00:17:21.502 [2024-07-11 02:23:11.781595] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15362d0 00:17:21.502 [2024-07-11 02:23:11.781605] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x15362d0 00:17:21.502 [2024-07-11 02:23:11.781698] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.502 02:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:21.502 02:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:21.502 02:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:21.502 02:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:21.502 02:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:21.502 02:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:21.502 02:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:21.502 02:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:21.502 02:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:21.502 02:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:21.502 02:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.502 02:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.761 02:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:21.761 "name": "raid_bdev1", 00:17:21.761 "uuid": "6d242fbf-c068-491f-98fd-72dd92279e41", 00:17:21.761 "strip_size_kb": 64, 00:17:21.761 "state": "online", 00:17:21.761 "raid_level": "raid0", 00:17:21.761 "superblock": true, 00:17:21.761 "num_base_bdevs": 3, 
00:17:21.761 "num_base_bdevs_discovered": 3, 00:17:21.761 "num_base_bdevs_operational": 3, 00:17:21.761 "base_bdevs_list": [ 00:17:21.761 { 00:17:21.761 "name": "pt1", 00:17:21.761 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:21.761 "is_configured": true, 00:17:21.761 "data_offset": 2048, 00:17:21.761 "data_size": 63488 00:17:21.761 }, 00:17:21.761 { 00:17:21.761 "name": "pt2", 00:17:21.761 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:21.761 "is_configured": true, 00:17:21.761 "data_offset": 2048, 00:17:21.761 "data_size": 63488 00:17:21.761 }, 00:17:21.761 { 00:17:21.761 "name": "pt3", 00:17:21.761 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:21.761 "is_configured": true, 00:17:21.761 "data_offset": 2048, 00:17:21.761 "data_size": 63488 00:17:21.761 } 00:17:21.761 ] 00:17:21.761 }' 00:17:21.761 02:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:21.761 02:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.329 02:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:22.329 02:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:22.329 02:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:22.329 02:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:22.329 02:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:22.329 02:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:22.329 02:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:22.329 02:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:22.589 [2024-07-11 02:23:12.854864] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:22.589 02:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:22.589 "name": "raid_bdev1", 00:17:22.589 "aliases": [ 00:17:22.589 "6d242fbf-c068-491f-98fd-72dd92279e41" 00:17:22.589 ], 00:17:22.589 "product_name": "Raid Volume", 00:17:22.589 "block_size": 512, 00:17:22.589 "num_blocks": 190464, 00:17:22.589 "uuid": "6d242fbf-c068-491f-98fd-72dd92279e41", 00:17:22.589 "assigned_rate_limits": { 00:17:22.589 "rw_ios_per_sec": 0, 00:17:22.589 "rw_mbytes_per_sec": 0, 00:17:22.589 "r_mbytes_per_sec": 0, 00:17:22.589 "w_mbytes_per_sec": 0 00:17:22.589 }, 00:17:22.589 "claimed": false, 00:17:22.589 "zoned": false, 00:17:22.589 "supported_io_types": { 00:17:22.589 "read": true, 00:17:22.589 "write": true, 00:17:22.589 "unmap": true, 00:17:22.589 "flush": true, 00:17:22.589 "reset": true, 00:17:22.589 "nvme_admin": false, 00:17:22.589 "nvme_io": false, 00:17:22.589 "nvme_io_md": false, 00:17:22.589 "write_zeroes": true, 00:17:22.589 "zcopy": false, 00:17:22.589 "get_zone_info": false, 00:17:22.589 "zone_management": false, 00:17:22.589 "zone_append": false, 00:17:22.589 "compare": false, 00:17:22.589 "compare_and_write": false, 00:17:22.589 "abort": false, 00:17:22.589 "seek_hole": false, 00:17:22.589 "seek_data": false, 00:17:22.589 "copy": false, 00:17:22.589 "nvme_iov_md": false 00:17:22.589 }, 00:17:22.589 "memory_domains": [ 00:17:22.589 { 00:17:22.589 "dma_device_id": "system", 00:17:22.589 "dma_device_type": 1 
00:17:22.589 }, 00:17:22.589 { 00:17:22.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.589 "dma_device_type": 2 00:17:22.589 }, 00:17:22.589 { 00:17:22.589 "dma_device_id": "system", 00:17:22.589 "dma_device_type": 1 00:17:22.589 }, 00:17:22.589 { 00:17:22.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.589 "dma_device_type": 2 00:17:22.589 }, 00:17:22.589 { 00:17:22.589 "dma_device_id": "system", 00:17:22.589 "dma_device_type": 1 00:17:22.589 }, 00:17:22.589 { 00:17:22.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.589 "dma_device_type": 2 00:17:22.589 } 00:17:22.589 ], 00:17:22.589 "driver_specific": { 00:17:22.589 "raid": { 00:17:22.589 "uuid": "6d242fbf-c068-491f-98fd-72dd92279e41", 00:17:22.589 "strip_size_kb": 64, 00:17:22.589 "state": "online", 00:17:22.589 "raid_level": "raid0", 00:17:22.589 "superblock": true, 00:17:22.589 "num_base_bdevs": 3, 00:17:22.589 "num_base_bdevs_discovered": 3, 00:17:22.589 "num_base_bdevs_operational": 3, 00:17:22.589 "base_bdevs_list": [ 00:17:22.589 { 00:17:22.589 "name": "pt1", 00:17:22.589 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:22.589 "is_configured": true, 00:17:22.589 "data_offset": 2048, 00:17:22.589 "data_size": 63488 00:17:22.589 }, 00:17:22.589 { 00:17:22.589 "name": "pt2", 00:17:22.589 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:22.589 "is_configured": true, 00:17:22.589 "data_offset": 2048, 00:17:22.589 "data_size": 63488 00:17:22.589 }, 00:17:22.589 { 00:17:22.589 "name": "pt3", 00:17:22.589 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:22.589 "is_configured": true, 00:17:22.589 "data_offset": 2048, 00:17:22.589 "data_size": 63488 00:17:22.589 } 00:17:22.589 ] 00:17:22.589 } 00:17:22.589 } 00:17:22.589 }' 00:17:22.589 02:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:22.589 02:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:22.589 pt2 00:17:22.589 pt3' 00:17:22.589 02:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:22.589 02:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:22.589 02:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:22.848 02:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:22.848 "name": "pt1", 00:17:22.848 "aliases": [ 00:17:22.848 "00000000-0000-0000-0000-000000000001" 00:17:22.848 ], 00:17:22.848 "product_name": "passthru", 00:17:22.848 "block_size": 512, 00:17:22.848 "num_blocks": 65536, 00:17:22.848 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:22.848 "assigned_rate_limits": { 00:17:22.848 "rw_ios_per_sec": 0, 00:17:22.848 "rw_mbytes_per_sec": 0, 00:17:22.848 "r_mbytes_per_sec": 0, 00:17:22.848 "w_mbytes_per_sec": 0 00:17:22.848 }, 00:17:22.848 "claimed": true, 00:17:22.848 "claim_type": "exclusive_write", 00:17:22.848 "zoned": false, 00:17:22.848 "supported_io_types": { 00:17:22.848 "read": true, 00:17:22.848 "write": true, 00:17:22.848 "unmap": true, 00:17:22.848 "flush": true, 00:17:22.848 "reset": true, 00:17:22.848 "nvme_admin": false, 00:17:22.848 "nvme_io": false, 00:17:22.848 "nvme_io_md": false, 00:17:22.848 "write_zeroes": true, 00:17:22.848 "zcopy": true, 00:17:22.848 "get_zone_info": false, 00:17:22.848 "zone_management": false, 
00:17:22.848 "zone_append": false, 00:17:22.848 "compare": false, 00:17:22.848 "compare_and_write": false, 00:17:22.848 "abort": true, 00:17:22.848 "seek_hole": false, 00:17:22.848 "seek_data": false, 00:17:22.848 "copy": true, 00:17:22.848 "nvme_iov_md": false 00:17:22.848 }, 00:17:22.848 "memory_domains": [ 00:17:22.848 { 00:17:22.848 "dma_device_id": "system", 00:17:22.848 "dma_device_type": 1 00:17:22.848 }, 00:17:22.848 { 00:17:22.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.849 "dma_device_type": 2 00:17:22.849 } 00:17:22.849 ], 00:17:22.849 "driver_specific": { 00:17:22.849 "passthru": { 00:17:22.849 "name": "pt1", 00:17:22.849 "base_bdev_name": "malloc1" 00:17:22.849 } 00:17:22.849 } 00:17:22.849 }' 00:17:22.849 02:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:22.849 02:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:22.849 02:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:22.849 02:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:23.130 02:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:23.130 02:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:23.130 02:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:23.130 02:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:23.130 02:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:23.130 02:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:23.130 02:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:23.130 02:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:23.130 02:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:23.130 02:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:23.130 02:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:23.388 02:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:23.388 "name": "pt2", 00:17:23.388 "aliases": [ 00:17:23.388 "00000000-0000-0000-0000-000000000002" 00:17:23.388 ], 00:17:23.388 "product_name": "passthru", 00:17:23.388 "block_size": 512, 00:17:23.388 "num_blocks": 65536, 00:17:23.388 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:23.388 "assigned_rate_limits": { 00:17:23.388 "rw_ios_per_sec": 0, 00:17:23.388 "rw_mbytes_per_sec": 0, 00:17:23.388 "r_mbytes_per_sec": 0, 00:17:23.388 "w_mbytes_per_sec": 0 00:17:23.388 }, 00:17:23.388 "claimed": true, 00:17:23.388 "claim_type": "exclusive_write", 00:17:23.388 "zoned": false, 00:17:23.388 "supported_io_types": { 00:17:23.388 "read": true, 00:17:23.388 "write": true, 00:17:23.388 "unmap": true, 00:17:23.388 "flush": true, 00:17:23.388 "reset": true, 00:17:23.388 "nvme_admin": false, 00:17:23.388 "nvme_io": false, 00:17:23.388 "nvme_io_md": false, 00:17:23.388 "write_zeroes": true, 00:17:23.388 "zcopy": true, 00:17:23.388 "get_zone_info": false, 00:17:23.388 "zone_management": false, 00:17:23.388 "zone_append": false, 00:17:23.388 "compare": false, 00:17:23.388 "compare_and_write": false, 00:17:23.388 "abort": true, 
00:17:23.388 "seek_hole": false, 00:17:23.388 "seek_data": false, 00:17:23.388 "copy": true, 00:17:23.388 "nvme_iov_md": false 00:17:23.388 }, 00:17:23.388 "memory_domains": [ 00:17:23.388 { 00:17:23.388 "dma_device_id": "system", 00:17:23.388 "dma_device_type": 1 00:17:23.388 }, 00:17:23.388 { 00:17:23.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.388 "dma_device_type": 2 00:17:23.388 } 00:17:23.388 ], 00:17:23.388 "driver_specific": { 00:17:23.388 "passthru": { 00:17:23.388 "name": "pt2", 00:17:23.388 "base_bdev_name": "malloc2" 00:17:23.388 } 00:17:23.388 } 00:17:23.388 }' 00:17:23.388 02:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:23.645 02:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:23.645 02:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:23.645 02:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:23.645 02:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:23.645 02:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:23.645 02:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:23.645 02:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:23.645 02:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:23.645 02:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:23.903 02:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:23.903 02:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:23.903 02:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:23.903 02:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:23.903 02:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:24.161 02:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:24.161 "name": "pt3", 00:17:24.161 "aliases": [ 00:17:24.161 "00000000-0000-0000-0000-000000000003" 00:17:24.161 ], 00:17:24.161 "product_name": "passthru", 00:17:24.161 "block_size": 512, 00:17:24.161 "num_blocks": 65536, 00:17:24.161 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:24.161 "assigned_rate_limits": { 00:17:24.161 "rw_ios_per_sec": 0, 00:17:24.161 "rw_mbytes_per_sec": 0, 00:17:24.161 "r_mbytes_per_sec": 0, 00:17:24.161 "w_mbytes_per_sec": 0 00:17:24.161 }, 00:17:24.161 "claimed": true, 00:17:24.161 "claim_type": "exclusive_write", 00:17:24.161 "zoned": false, 00:17:24.161 "supported_io_types": { 00:17:24.161 "read": true, 00:17:24.161 "write": true, 00:17:24.161 "unmap": true, 00:17:24.161 "flush": true, 00:17:24.161 "reset": true, 00:17:24.161 "nvme_admin": false, 00:17:24.161 "nvme_io": false, 00:17:24.161 "nvme_io_md": false, 00:17:24.161 "write_zeroes": true, 00:17:24.161 "zcopy": true, 00:17:24.161 "get_zone_info": false, 00:17:24.161 "zone_management": false, 00:17:24.161 "zone_append": false, 00:17:24.161 "compare": false, 00:17:24.161 "compare_and_write": false, 00:17:24.161 "abort": true, 00:17:24.161 "seek_hole": false, 00:17:24.161 "seek_data": false, 00:17:24.161 "copy": true, 00:17:24.161 "nvme_iov_md": false 
00:17:24.161 }, 00:17:24.161 "memory_domains": [ 00:17:24.161 { 00:17:24.161 "dma_device_id": "system", 00:17:24.161 "dma_device_type": 1 00:17:24.161 }, 00:17:24.161 { 00:17:24.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.161 "dma_device_type": 2 00:17:24.161 } 00:17:24.161 ], 00:17:24.161 "driver_specific": { 00:17:24.161 "passthru": { 00:17:24.161 "name": "pt3", 00:17:24.161 "base_bdev_name": "malloc3" 00:17:24.161 } 00:17:24.161 } 00:17:24.161 }' 00:17:24.161 02:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:24.161 02:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:24.161 02:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:24.161 02:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:24.161 02:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:24.420 02:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:24.420 02:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:24.420 02:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:24.420 02:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:24.420 02:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:24.420 02:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:24.420 02:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:24.420 02:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:24.420 02:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:24.679 [2024-07-11 02:23:15.004656] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:24.679 02:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=6d242fbf-c068-491f-98fd-72dd92279e41 00:17:24.679 02:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 6d242fbf-c068-491f-98fd-72dd92279e41 ']' 00:17:24.679 02:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:24.938 [2024-07-11 02:23:15.269088] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:24.938 [2024-07-11 02:23:15.269114] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:24.938 [2024-07-11 02:23:15.269168] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:24.938 [2024-07-11 02:23:15.269221] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:24.938 [2024-07-11 02:23:15.269234] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15362d0 name raid_bdev1, state offline 00:17:24.938 02:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.938 02:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:25.505 02:23:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:25.505 02:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:25.505 02:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:25.505 02:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:26.071 02:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:26.071 02:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:26.329 02:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:26.329 02:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:26.587 02:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:26.587 02:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:27.154 02:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:27.154 02:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:27.154 02:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:17:27.155 02:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:27.155 02:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:27.155 02:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:27.155 02:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:27.155 02:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:27.155 02:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:27.155 02:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:27.155 02:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:27.155 02:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:17:27.155 02:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 
malloc2 malloc3' -n raid_bdev1 00:17:27.413 [2024-07-11 02:23:17.599150] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:27.413 [2024-07-11 02:23:17.600493] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:27.413 [2024-07-11 02:23:17.600538] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:27.413 [2024-07-11 02:23:17.600584] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:27.413 [2024-07-11 02:23:17.600625] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:27.413 [2024-07-11 02:23:17.600648] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:27.413 [2024-07-11 02:23:17.600666] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:27.413 [2024-07-11 02:23:17.600676] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x153af10 name raid_bdev1, state configuring 00:17:27.413 request: 00:17:27.413 { 00:17:27.413 "name": "raid_bdev1", 00:17:27.413 "raid_level": "raid0", 00:17:27.413 "base_bdevs": [ 00:17:27.413 "malloc1", 00:17:27.413 "malloc2", 00:17:27.413 "malloc3" 00:17:27.413 ], 00:17:27.413 "strip_size_kb": 64, 00:17:27.413 "superblock": false, 00:17:27.413 "method": "bdev_raid_create", 00:17:27.413 "req_id": 1 00:17:27.413 } 00:17:27.413 Got JSON-RPC error response 00:17:27.413 response: 00:17:27.413 { 00:17:27.413 "code": -17, 00:17:27.413 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:27.413 } 00:17:27.413 02:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:17:27.413 02:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:27.413 02:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:27.413 02:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:27.413 02:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.413 02:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:27.981 02:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:27.981 02:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:17:27.981 02:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:27.981 [2024-07-11 02:23:18.369113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:27.981 [2024-07-11 02:23:18.369166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.981 [2024-07-11 02:23:18.369185] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1532cb0 00:17:27.981 [2024-07-11 02:23:18.369198] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.981 [2024-07-11 02:23:18.370827] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.981 [2024-07-11 02:23:18.370860] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:27.981 [2024-07-11 02:23:18.370932] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:27.981 [2024-07-11 02:23:18.370959] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:27.981 pt1 00:17:27.981 02:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:17:27.981 02:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:27.981 02:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:27.981 02:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:27.981 02:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:27.981 02:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:27.981 02:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:27.981 02:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:27.981 02:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:27.981 02:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:27.981 02:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.981 02:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.548 02:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:28.548 "name": "raid_bdev1", 00:17:28.548 "uuid": "6d242fbf-c068-491f-98fd-72dd92279e41", 00:17:28.548 "strip_size_kb": 64, 00:17:28.548 "state": "configuring", 00:17:28.548 "raid_level": "raid0", 00:17:28.548 "superblock": true, 00:17:28.548 "num_base_bdevs": 3, 00:17:28.548 "num_base_bdevs_discovered": 1, 00:17:28.548 "num_base_bdevs_operational": 3, 00:17:28.548 "base_bdevs_list": [ 00:17:28.548 { 00:17:28.548 "name": "pt1", 00:17:28.548 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:28.548 "is_configured": true, 00:17:28.548 "data_offset": 2048, 00:17:28.548 "data_size": 63488 00:17:28.548 }, 00:17:28.548 { 00:17:28.548 "name": null, 00:17:28.548 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:28.548 "is_configured": false, 00:17:28.548 "data_offset": 2048, 00:17:28.548 "data_size": 63488 00:17:28.548 }, 00:17:28.548 { 00:17:28.548 "name": null, 00:17:28.548 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:28.548 "is_configured": false, 00:17:28.548 "data_offset": 2048, 00:17:28.548 "data_size": 63488 00:17:28.548 } 00:17:28.548 ] 00:17:28.548 }' 00:17:28.548 02:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:28.548 02:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.485 02:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:17:29.485 02:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:29.744 [2024-07-11 02:23:20.001466] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:29.744 [2024-07-11 02:23:20.001516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.744 [2024-07-11 02:23:20.001534] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1533820 00:17:29.744 [2024-07-11 02:23:20.001546] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.744 [2024-07-11 02:23:20.001897] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.744 [2024-07-11 02:23:20.001917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:29.744 [2024-07-11 02:23:20.001979] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:29.744 [2024-07-11 02:23:20.002000] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:29.744 pt2 00:17:29.744 02:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:30.003 [2024-07-11 02:23:20.250168] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:30.003 02:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:17:30.003 02:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:30.003 02:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:30.003 02:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:30.003 02:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:30.003 02:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:30.003 02:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:30.003 02:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:30.003 02:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:30.003 02:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:30.003 02:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.003 02:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.261 02:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:30.261 "name": "raid_bdev1", 00:17:30.261 "uuid": "6d242fbf-c068-491f-98fd-72dd92279e41", 00:17:30.261 "strip_size_kb": 64, 00:17:30.261 "state": "configuring", 00:17:30.261 "raid_level": "raid0", 00:17:30.261 "superblock": true, 00:17:30.261 "num_base_bdevs": 3, 00:17:30.261 "num_base_bdevs_discovered": 1, 00:17:30.261 "num_base_bdevs_operational": 3, 00:17:30.261 "base_bdevs_list": [ 00:17:30.261 { 00:17:30.261 "name": "pt1", 00:17:30.261 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:30.261 "is_configured": true, 00:17:30.261 "data_offset": 2048, 00:17:30.261 "data_size": 63488 00:17:30.261 }, 00:17:30.261 { 00:17:30.261 "name": null, 00:17:30.261 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:30.261 "is_configured": false, 00:17:30.261 
"data_offset": 2048, 00:17:30.261 "data_size": 63488 00:17:30.262 }, 00:17:30.262 { 00:17:30.262 "name": null, 00:17:30.262 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:30.262 "is_configured": false, 00:17:30.262 "data_offset": 2048, 00:17:30.262 "data_size": 63488 00:17:30.262 } 00:17:30.262 ] 00:17:30.262 }' 00:17:30.262 02:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:30.262 02:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.829 02:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:30.829 02:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:30.829 02:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:31.087 [2024-07-11 02:23:21.349072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:31.087 [2024-07-11 02:23:21.349121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.087 [2024-07-11 02:23:21.349140] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1533c20 00:17:31.087 [2024-07-11 02:23:21.349152] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.087 [2024-07-11 02:23:21.349495] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.087 [2024-07-11 02:23:21.349513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:31.087 [2024-07-11 02:23:21.349574] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:31.087 [2024-07-11 02:23:21.349595] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:31.087 pt2 00:17:31.087 02:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:31.087 02:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:31.087 02:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:31.346 [2024-07-11 02:23:21.597716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:31.346 [2024-07-11 02:23:21.597750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.346 [2024-07-11 02:23:21.597771] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1539790 00:17:31.346 [2024-07-11 02:23:21.597783] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.346 [2024-07-11 02:23:21.598068] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.346 [2024-07-11 02:23:21.598086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:31.346 [2024-07-11 02:23:21.598137] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:31.346 [2024-07-11 02:23:21.598154] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:31.346 [2024-07-11 02:23:21.598258] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1386900 00:17:31.346 [2024-07-11 02:23:21.598275] 
bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:31.346 [2024-07-11 02:23:21.598438] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1389810 00:17:31.346 [2024-07-11 02:23:21.598562] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1386900 00:17:31.346 [2024-07-11 02:23:21.598572] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1386900 00:17:31.346 [2024-07-11 02:23:21.598664] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.346 pt3 00:17:31.346 02:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:31.346 02:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:31.346 02:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:31.346 02:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:31.346 02:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:31.346 02:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:31.346 02:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:31.346 02:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:31.346 02:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:31.346 02:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:31.346 02:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:31.346 02:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:31.346 02:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.346 02:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.605 02:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:31.605 "name": "raid_bdev1", 00:17:31.605 "uuid": "6d242fbf-c068-491f-98fd-72dd92279e41", 00:17:31.605 "strip_size_kb": 64, 00:17:31.605 "state": "online", 00:17:31.605 "raid_level": "raid0", 00:17:31.605 "superblock": true, 00:17:31.605 "num_base_bdevs": 3, 00:17:31.605 "num_base_bdevs_discovered": 3, 00:17:31.605 "num_base_bdevs_operational": 3, 00:17:31.605 "base_bdevs_list": [ 00:17:31.605 { 00:17:31.605 "name": "pt1", 00:17:31.605 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:31.605 "is_configured": true, 00:17:31.605 "data_offset": 2048, 00:17:31.605 "data_size": 63488 00:17:31.605 }, 00:17:31.605 { 00:17:31.605 "name": "pt2", 00:17:31.605 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:31.605 "is_configured": true, 00:17:31.605 "data_offset": 2048, 00:17:31.605 "data_size": 63488 00:17:31.605 }, 00:17:31.605 { 00:17:31.605 "name": "pt3", 00:17:31.605 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:31.605 "is_configured": true, 00:17:31.605 "data_offset": 2048, 00:17:31.605 "data_size": 63488 00:17:31.605 } 00:17:31.605 ] 00:17:31.605 }' 00:17:31.605 02:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:31.605 02:23:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.173 02:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:32.173 02:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:32.173 02:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:32.173 02:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:32.173 02:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:32.173 02:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:32.173 02:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:32.173 02:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:32.433 [2024-07-11 02:23:22.745063] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:32.433 02:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:32.433 "name": "raid_bdev1", 00:17:32.433 "aliases": [ 00:17:32.433 "6d242fbf-c068-491f-98fd-72dd92279e41" 00:17:32.433 ], 00:17:32.433 "product_name": "Raid Volume", 00:17:32.433 "block_size": 512, 00:17:32.433 "num_blocks": 190464, 00:17:32.433 "uuid": "6d242fbf-c068-491f-98fd-72dd92279e41", 00:17:32.433 "assigned_rate_limits": { 00:17:32.433 "rw_ios_per_sec": 0, 00:17:32.433 "rw_mbytes_per_sec": 0, 00:17:32.433 "r_mbytes_per_sec": 0, 00:17:32.433 "w_mbytes_per_sec": 0 00:17:32.433 }, 00:17:32.433 "claimed": false, 00:17:32.433 "zoned": false, 00:17:32.433 "supported_io_types": { 00:17:32.433 "read": true, 00:17:32.433 "write": true, 00:17:32.433 "unmap": true, 00:17:32.433 "flush": true, 00:17:32.433 "reset": true, 00:17:32.433 "nvme_admin": false, 00:17:32.433 "nvme_io": false, 00:17:32.433 "nvme_io_md": false, 00:17:32.433 "write_zeroes": true, 00:17:32.433 "zcopy": false, 00:17:32.433 "get_zone_info": false, 00:17:32.433 "zone_management": false, 00:17:32.433 "zone_append": false, 00:17:32.433 "compare": false, 00:17:32.433 "compare_and_write": false, 00:17:32.433 "abort": false, 00:17:32.433 "seek_hole": false, 00:17:32.433 "seek_data": false, 00:17:32.433 "copy": false, 00:17:32.433 "nvme_iov_md": false 00:17:32.433 }, 00:17:32.433 "memory_domains": [ 00:17:32.433 { 00:17:32.433 "dma_device_id": "system", 00:17:32.433 "dma_device_type": 1 00:17:32.433 }, 00:17:32.433 { 00:17:32.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.433 "dma_device_type": 2 00:17:32.433 }, 00:17:32.433 { 00:17:32.433 "dma_device_id": "system", 00:17:32.433 "dma_device_type": 1 00:17:32.433 }, 00:17:32.433 { 00:17:32.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.433 "dma_device_type": 2 00:17:32.433 }, 00:17:32.433 { 00:17:32.433 "dma_device_id": "system", 00:17:32.433 "dma_device_type": 1 00:17:32.433 }, 00:17:32.433 { 00:17:32.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.433 "dma_device_type": 2 00:17:32.433 } 00:17:32.433 ], 00:17:32.433 "driver_specific": { 00:17:32.433 "raid": { 00:17:32.433 "uuid": "6d242fbf-c068-491f-98fd-72dd92279e41", 00:17:32.433 "strip_size_kb": 64, 00:17:32.433 "state": "online", 00:17:32.433 "raid_level": "raid0", 00:17:32.433 "superblock": true, 00:17:32.433 "num_base_bdevs": 3, 00:17:32.433 "num_base_bdevs_discovered": 3, 
00:17:32.433 "num_base_bdevs_operational": 3, 00:17:32.433 "base_bdevs_list": [ 00:17:32.433 { 00:17:32.433 "name": "pt1", 00:17:32.433 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:32.433 "is_configured": true, 00:17:32.433 "data_offset": 2048, 00:17:32.433 "data_size": 63488 00:17:32.433 }, 00:17:32.433 { 00:17:32.433 "name": "pt2", 00:17:32.433 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:32.433 "is_configured": true, 00:17:32.433 "data_offset": 2048, 00:17:32.433 "data_size": 63488 00:17:32.433 }, 00:17:32.433 { 00:17:32.433 "name": "pt3", 00:17:32.433 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:32.433 "is_configured": true, 00:17:32.433 "data_offset": 2048, 00:17:32.433 "data_size": 63488 00:17:32.433 } 00:17:32.433 ] 00:17:32.433 } 00:17:32.433 } 00:17:32.433 }' 00:17:32.433 02:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:32.433 02:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:32.433 pt2 00:17:32.433 pt3' 00:17:32.433 02:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:32.433 02:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:32.433 02:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:32.692 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:32.692 "name": "pt1", 00:17:32.692 "aliases": [ 00:17:32.692 "00000000-0000-0000-0000-000000000001" 00:17:32.692 ], 00:17:32.692 "product_name": "passthru", 00:17:32.692 "block_size": 512, 00:17:32.692 "num_blocks": 65536, 00:17:32.692 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:32.692 "assigned_rate_limits": { 00:17:32.692 "rw_ios_per_sec": 0, 00:17:32.692 "rw_mbytes_per_sec": 0, 00:17:32.692 "r_mbytes_per_sec": 0, 00:17:32.692 "w_mbytes_per_sec": 0 00:17:32.692 }, 00:17:32.692 "claimed": true, 00:17:32.692 "claim_type": "exclusive_write", 00:17:32.692 "zoned": false, 00:17:32.692 "supported_io_types": { 00:17:32.692 "read": true, 00:17:32.692 "write": true, 00:17:32.692 "unmap": true, 00:17:32.692 "flush": true, 00:17:32.692 "reset": true, 00:17:32.692 "nvme_admin": false, 00:17:32.692 "nvme_io": false, 00:17:32.692 "nvme_io_md": false, 00:17:32.692 "write_zeroes": true, 00:17:32.692 "zcopy": true, 00:17:32.692 "get_zone_info": false, 00:17:32.692 "zone_management": false, 00:17:32.692 "zone_append": false, 00:17:32.692 "compare": false, 00:17:32.692 "compare_and_write": false, 00:17:32.692 "abort": true, 00:17:32.692 "seek_hole": false, 00:17:32.692 "seek_data": false, 00:17:32.692 "copy": true, 00:17:32.692 "nvme_iov_md": false 00:17:32.692 }, 00:17:32.692 "memory_domains": [ 00:17:32.692 { 00:17:32.692 "dma_device_id": "system", 00:17:32.692 "dma_device_type": 1 00:17:32.692 }, 00:17:32.692 { 00:17:32.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.692 "dma_device_type": 2 00:17:32.692 } 00:17:32.692 ], 00:17:32.692 "driver_specific": { 00:17:32.692 "passthru": { 00:17:32.692 "name": "pt1", 00:17:32.692 "base_bdev_name": "malloc1" 00:17:32.692 } 00:17:32.692 } 00:17:32.692 }' 00:17:32.692 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:32.693 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:17:32.952 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:32.952 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:32.952 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:32.952 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:32.952 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:32.952 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:32.952 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:32.952 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:32.952 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:33.211 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:33.211 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:33.211 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:33.211 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:33.470 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:33.470 "name": "pt2", 00:17:33.470 "aliases": [ 00:17:33.470 "00000000-0000-0000-0000-000000000002" 00:17:33.470 ], 00:17:33.470 "product_name": "passthru", 00:17:33.470 "block_size": 512, 00:17:33.470 "num_blocks": 65536, 00:17:33.470 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:33.470 "assigned_rate_limits": { 00:17:33.470 "rw_ios_per_sec": 0, 00:17:33.470 "rw_mbytes_per_sec": 0, 00:17:33.470 "r_mbytes_per_sec": 0, 00:17:33.470 "w_mbytes_per_sec": 0 00:17:33.470 }, 00:17:33.470 "claimed": true, 00:17:33.470 "claim_type": "exclusive_write", 00:17:33.470 "zoned": false, 00:17:33.470 "supported_io_types": { 00:17:33.470 "read": true, 00:17:33.470 "write": true, 00:17:33.470 "unmap": true, 00:17:33.470 "flush": true, 00:17:33.470 "reset": true, 00:17:33.470 "nvme_admin": false, 00:17:33.470 "nvme_io": false, 00:17:33.470 "nvme_io_md": false, 00:17:33.470 "write_zeroes": true, 00:17:33.470 "zcopy": true, 00:17:33.470 "get_zone_info": false, 00:17:33.470 "zone_management": false, 00:17:33.470 "zone_append": false, 00:17:33.470 "compare": false, 00:17:33.470 "compare_and_write": false, 00:17:33.470 "abort": true, 00:17:33.470 "seek_hole": false, 00:17:33.470 "seek_data": false, 00:17:33.470 "copy": true, 00:17:33.470 "nvme_iov_md": false 00:17:33.470 }, 00:17:33.470 "memory_domains": [ 00:17:33.470 { 00:17:33.470 "dma_device_id": "system", 00:17:33.470 "dma_device_type": 1 00:17:33.470 }, 00:17:33.470 { 00:17:33.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.470 "dma_device_type": 2 00:17:33.470 } 00:17:33.470 ], 00:17:33.470 "driver_specific": { 00:17:33.470 "passthru": { 00:17:33.470 "name": "pt2", 00:17:33.470 "base_bdev_name": "malloc2" 00:17:33.470 } 00:17:33.470 } 00:17:33.470 }' 00:17:33.470 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:33.470 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:33.470 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:33.470 02:23:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:33.470 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:33.470 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:33.470 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:33.470 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:33.470 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:33.470 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:33.729 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:33.729 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:33.729 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:33.729 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:33.729 02:23:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:33.988 02:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:33.988 "name": "pt3", 00:17:33.988 "aliases": [ 00:17:33.988 "00000000-0000-0000-0000-000000000003" 00:17:33.988 ], 00:17:33.988 "product_name": "passthru", 00:17:33.988 "block_size": 512, 00:17:33.988 "num_blocks": 65536, 00:17:33.988 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:33.988 "assigned_rate_limits": { 00:17:33.988 "rw_ios_per_sec": 0, 00:17:33.988 "rw_mbytes_per_sec": 0, 00:17:33.988 "r_mbytes_per_sec": 0, 00:17:33.988 "w_mbytes_per_sec": 0 00:17:33.988 }, 00:17:33.988 "claimed": true, 00:17:33.988 "claim_type": "exclusive_write", 00:17:33.988 "zoned": false, 00:17:33.988 "supported_io_types": { 00:17:33.988 "read": true, 00:17:33.988 "write": true, 00:17:33.988 "unmap": true, 00:17:33.988 "flush": true, 00:17:33.988 "reset": true, 00:17:33.988 "nvme_admin": false, 00:17:33.988 "nvme_io": false, 00:17:33.988 "nvme_io_md": false, 00:17:33.988 "write_zeroes": true, 00:17:33.988 "zcopy": true, 00:17:33.988 "get_zone_info": false, 00:17:33.988 "zone_management": false, 00:17:33.988 "zone_append": false, 00:17:33.988 "compare": false, 00:17:33.988 "compare_and_write": false, 00:17:33.988 "abort": true, 00:17:33.988 "seek_hole": false, 00:17:33.988 "seek_data": false, 00:17:33.988 "copy": true, 00:17:33.988 "nvme_iov_md": false 00:17:33.988 }, 00:17:33.988 "memory_domains": [ 00:17:33.988 { 00:17:33.988 "dma_device_id": "system", 00:17:33.988 "dma_device_type": 1 00:17:33.988 }, 00:17:33.988 { 00:17:33.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.988 "dma_device_type": 2 00:17:33.988 } 00:17:33.988 ], 00:17:33.988 "driver_specific": { 00:17:33.988 "passthru": { 00:17:33.988 "name": "pt3", 00:17:33.988 "base_bdev_name": "malloc3" 00:17:33.988 } 00:17:33.988 } 00:17:33.988 }' 00:17:33.988 02:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:33.988 02:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:33.988 02:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:33.988 02:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:33.988 02:23:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:33.988 02:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:33.988 02:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:34.248 02:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:34.248 02:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:34.248 02:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:34.248 02:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:34.248 02:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:34.248 02:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:34.248 02:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:34.507 [2024-07-11 02:23:24.802534] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:34.507 02:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 6d242fbf-c068-491f-98fd-72dd92279e41 '!=' 6d242fbf-c068-491f-98fd-72dd92279e41 ']' 00:17:34.507 02:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:17:34.507 02:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:34.507 02:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:34.507 02:23:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1922221 00:17:34.507 02:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 1922221 ']' 00:17:34.507 02:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 1922221 00:17:34.507 02:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:17:34.507 02:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:34.507 02:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1922221 00:17:34.507 02:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:34.507 02:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:34.507 02:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1922221' 00:17:34.507 killing process with pid 1922221 00:17:34.507 02:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 1922221 00:17:34.507 [2024-07-11 02:23:24.869835] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:34.507 [2024-07-11 02:23:24.869891] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:34.507 [2024-07-11 02:23:24.869944] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:34.507 [2024-07-11 02:23:24.869956] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1386900 name raid_bdev1, state offline 00:17:34.507 02:23:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 1922221 00:17:34.507 [2024-07-11 02:23:24.895315] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:34.767 02:23:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:17:34.767 00:17:34.767 real 0m14.973s 00:17:34.767 user 0m27.387s 00:17:34.767 sys 0m2.765s 00:17:34.767 02:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:34.767 02:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.767 ************************************ 00:17:34.767 END TEST raid_superblock_test 00:17:34.767 ************************************ 00:17:34.767 02:23:25 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:34.767 02:23:25 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:17:34.767 02:23:25 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:34.767 02:23:25 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:34.767 02:23:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:34.767 ************************************ 00:17:34.767 START TEST raid_read_error_test 00:17:34.767 ************************************ 00:17:34.767 02:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 read 00:17:34.767 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:17:34.767 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:17:34.767 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:17:34.767 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:34.767 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:34.767 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:17:34.767 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:34.767 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:34.767 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:17:34.767 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:34.767 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:34.767 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:17:34.767 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:34.767 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:34.767 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:34.767 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:34.767 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:34.767 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:34.767 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:34.767 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:34.767 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:34.767 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:17:34.767 02:23:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:34.767 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:34.767 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:34.767 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.b9zlaKhBwc 00:17:35.026 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1924631 00:17:35.026 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1924631 /var/tmp/spdk-raid.sock 00:17:35.026 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:35.026 02:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 1924631 ']' 00:17:35.026 02:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:35.026 02:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:35.026 02:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:35.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:35.026 02:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:35.026 02:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.026 [2024-07-11 02:23:25.245142] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
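Before any I/O runs, the setup that follows layers an error-injection bdev between each malloc and its passthru, so that read failures can later be injected underneath the raid0 volume. A sketch of the RPC sequence the trace below drives (command names and arguments as they appear in this run; socket path as above):

    #!/bin/bash
    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3; do
        $rpc bdev_malloc_create 32 512 -b BaseBdev${i}_malloc  # 32 MiB backing store, 512 B blocks
        $rpc bdev_error_create BaseBdev${i}_malloc             # exposes it as EE_BaseBdev${i}_malloc
        $rpc bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
    done
    # raid0 across the three passthru bdevs, 64 KiB strip, with an on-disk superblock (-s)
    $rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
    # once bdevperf traffic is flowing, start failing reads at the first base bdev
    $rpc bdev_error_inject_error EE_BaseBdev1_malloc read failure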
00:17:35.026 [2024-07-11 02:23:25.245195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1924631 ] 00:17:35.026 [2024-07-11 02:23:25.367605] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.026 [2024-07-11 02:23:25.419693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.285 [2024-07-11 02:23:25.479909] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:35.285 [2024-07-11 02:23:25.479951] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:35.285 02:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:35.285 02:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:35.285 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:35.285 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:35.544 BaseBdev1_malloc 00:17:35.544 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:35.544 true 00:17:35.544 02:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:35.803 [2024-07-11 02:23:26.123971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:35.803 [2024-07-11 02:23:26.124018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.803 [2024-07-11 02:23:26.124036] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2959330 00:17:35.803 [2024-07-11 02:23:26.124049] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.803 [2024-07-11 02:23:26.125663] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.803 [2024-07-11 02:23:26.125692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:35.803 BaseBdev1 00:17:35.803 02:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:35.803 02:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:36.062 BaseBdev2_malloc 00:17:36.062 02:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:36.062 true 00:17:36.343 02:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:36.343 [2024-07-11 02:23:26.665786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:36.343 [2024-07-11 02:23:26.665827] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.343 [2024-07-11 02:23:26.665845] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2952b40 00:17:36.343 [2024-07-11 02:23:26.665858] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.343 [2024-07-11 02:23:26.667243] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.343 [2024-07-11 02:23:26.667272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:36.343 BaseBdev2 00:17:36.343 02:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:36.343 02:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:36.680 BaseBdev3_malloc 00:17:36.680 02:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:36.680 true 00:17:36.680 02:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:36.950 [2024-07-11 02:23:27.199719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:36.950 [2024-07-11 02:23:27.199768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.950 [2024-07-11 02:23:27.199788] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x29560f0 00:17:36.950 [2024-07-11 02:23:27.199800] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.950 [2024-07-11 02:23:27.201131] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.951 [2024-07-11 02:23:27.201159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:36.951 BaseBdev3 00:17:36.951 02:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:17:36.951 [2024-07-11 02:23:27.364184] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:36.951 [2024-07-11 02:23:27.365321] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:36.951 [2024-07-11 02:23:27.365385] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:36.951 [2024-07-11 02:23:27.365576] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x27a7870 00:17:36.951 [2024-07-11 02:23:27.365588] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:36.951 [2024-07-11 02:23:27.365754] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x27a54f0 00:17:36.951 [2024-07-11 02:23:27.365900] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x27a7870 00:17:36.951 [2024-07-11 02:23:27.365910] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x27a7870 00:17:36.951 [2024-07-11 02:23:27.366005] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.209 
02:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:37.209 02:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:37.209 02:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:37.209 02:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:37.209 02:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:37.209 02:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:37.209 02:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:37.209 02:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:37.209 02:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:37.209 02:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:37.209 02:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.209 02:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.209 02:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:37.209 "name": "raid_bdev1", 00:17:37.209 "uuid": "e38594f5-3319-44b7-82a5-aa05dcded816", 00:17:37.209 "strip_size_kb": 64, 00:17:37.209 "state": "online", 00:17:37.209 "raid_level": "raid0", 00:17:37.209 "superblock": true, 00:17:37.209 "num_base_bdevs": 3, 00:17:37.209 "num_base_bdevs_discovered": 3, 00:17:37.209 "num_base_bdevs_operational": 3, 00:17:37.209 "base_bdevs_list": [ 00:17:37.209 { 00:17:37.209 "name": "BaseBdev1", 00:17:37.209 "uuid": "ee2e72fe-b832-5d9d-a1e0-8dfc454777fc", 00:17:37.209 "is_configured": true, 00:17:37.209 "data_offset": 2048, 00:17:37.209 "data_size": 63488 00:17:37.209 }, 00:17:37.209 { 00:17:37.209 "name": "BaseBdev2", 00:17:37.209 "uuid": "e7894ea5-8ec7-5141-a2b2-1de0eba4a5e2", 00:17:37.209 "is_configured": true, 00:17:37.209 "data_offset": 2048, 00:17:37.209 "data_size": 63488 00:17:37.209 }, 00:17:37.209 { 00:17:37.209 "name": "BaseBdev3", 00:17:37.209 "uuid": "58cc8b43-0f62-553b-aac9-07fe3309ff3a", 00:17:37.209 "is_configured": true, 00:17:37.209 "data_offset": 2048, 00:17:37.209 "data_size": 63488 00:17:37.209 } 00:17:37.209 ] 00:17:37.209 }' 00:17:37.209 02:23:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:37.209 02:23:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.774 02:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:37.774 02:23:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:38.032 [2024-07-11 02:23:28.270855] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x27a75a0 00:17:38.979 02:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:39.236 02:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local 
expected_num_base_bdevs 00:17:39.236 02:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:17:39.236 02:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:17:39.236 02:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:39.236 02:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:39.236 02:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:39.236 02:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:39.236 02:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:39.236 02:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:39.236 02:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:39.236 02:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:39.236 02:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:39.236 02:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:39.237 02:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.237 02:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.494 02:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:39.494 "name": "raid_bdev1", 00:17:39.494 "uuid": "e38594f5-3319-44b7-82a5-aa05dcded816", 00:17:39.494 "strip_size_kb": 64, 00:17:39.494 "state": "online", 00:17:39.494 "raid_level": "raid0", 00:17:39.494 "superblock": true, 00:17:39.494 "num_base_bdevs": 3, 00:17:39.494 "num_base_bdevs_discovered": 3, 00:17:39.494 "num_base_bdevs_operational": 3, 00:17:39.494 "base_bdevs_list": [ 00:17:39.494 { 00:17:39.494 "name": "BaseBdev1", 00:17:39.494 "uuid": "ee2e72fe-b832-5d9d-a1e0-8dfc454777fc", 00:17:39.494 "is_configured": true, 00:17:39.494 "data_offset": 2048, 00:17:39.494 "data_size": 63488 00:17:39.494 }, 00:17:39.494 { 00:17:39.494 "name": "BaseBdev2", 00:17:39.494 "uuid": "e7894ea5-8ec7-5141-a2b2-1de0eba4a5e2", 00:17:39.494 "is_configured": true, 00:17:39.494 "data_offset": 2048, 00:17:39.494 "data_size": 63488 00:17:39.494 }, 00:17:39.494 { 00:17:39.494 "name": "BaseBdev3", 00:17:39.494 "uuid": "58cc8b43-0f62-553b-aac9-07fe3309ff3a", 00:17:39.494 "is_configured": true, 00:17:39.494 "data_offset": 2048, 00:17:39.494 "data_size": 63488 00:17:39.494 } 00:17:39.494 ] 00:17:39.494 }' 00:17:39.494 02:23:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:39.494 02:23:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.061 02:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:40.320 [2024-07-11 02:23:30.512524] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:40.320 [2024-07-11 02:23:30.512561] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:40.320 [2024-07-11 
02:23:30.515722] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:40.320 [2024-07-11 02:23:30.515765] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.320 [2024-07-11 02:23:30.515798] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:40.320 [2024-07-11 02:23:30.515809] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x27a7870 name raid_bdev1, state offline 00:17:40.320 0 00:17:40.320 02:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1924631 00:17:40.320 02:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 1924631 ']' 00:17:40.320 02:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 1924631 00:17:40.320 02:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:17:40.321 02:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:40.321 02:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1924631 00:17:40.321 02:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:40.321 02:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:40.321 02:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1924631' 00:17:40.321 killing process with pid 1924631 00:17:40.321 02:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 1924631 00:17:40.321 [2024-07-11 02:23:30.586061] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:40.321 02:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 1924631 00:17:40.321 [2024-07-11 02:23:30.606681] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:40.581 02:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.b9zlaKhBwc 00:17:40.581 02:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:40.581 02:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:40.581 02:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:17:40.581 02:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:17:40.581 02:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:40.581 02:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:40.581 02:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:17:40.581 00:17:40.581 real 0m5.656s 00:17:40.581 user 0m8.985s 00:17:40.581 sys 0m1.086s 00:17:40.581 02:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:40.581 02:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.581 ************************************ 00:17:40.581 END TEST raid_read_error_test 00:17:40.581 ************************************ 00:17:40.581 02:23:30 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:40.581 02:23:30 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:17:40.581 02:23:30 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:40.581 
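Both the read test that finishes above and the write variant that starts here run through the same raid_io_error_test helper, differing only in the injected I/O type. The core of the check, paraphrased from the RPC calls in the trace (same $RPC shorthand as in the earlier sketch; illustrative only):

    # Inject failures into the first base device's error bdev; the write
    # variant passes "write failure" instead of "read failure".
    $RPC bdev_error_inject_error EE_BaseBdev1_malloc read failure
    # raid0 carries no redundancy, yet injected I/O errors must not take
    # the array down: it should still report state "online" with 3/3 members.
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

The bdevperf run in between drives random I/O at the array so the injected errors are actually exercised; the resulting per-second failure rate is read back from the bdevperf log afterwards.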
02:23:30 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:40.581 02:23:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:40.581 ************************************ 00:17:40.581 START TEST raid_write_error_test 00:17:40.581 ************************************ 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 write 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.8REBWZsdPX 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1925742 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1925742 /var/tmp/spdk-raid.sock 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r 
/var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 1925742 ']' 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:40.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:40.581 02:23:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.581 [2024-07-11 02:23:30.992755] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:17:40.581 [2024-07-11 02:23:30.992829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1925742 ] 00:17:40.840 [2024-07-11 02:23:31.113207] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.840 [2024-07-11 02:23:31.165379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.840 [2024-07-11 02:23:31.225607] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:40.840 [2024-07-11 02:23:31.225644] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.099 02:23:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:41.099 02:23:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:41.099 02:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:41.099 02:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:41.099 BaseBdev1_malloc 00:17:41.099 02:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:41.359 true 00:17:41.359 02:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:41.359 [2024-07-11 02:23:31.772059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:41.359 [2024-07-11 02:23:31.772100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.359 [2024-07-11 02:23:31.772119] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x152f330 00:17:41.359 [2024-07-11 02:23:31.772131] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.359 [2024-07-11 02:23:31.773784] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.359 [2024-07-11 02:23:31.773812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev1 00:17:41.359 BaseBdev1 00:17:41.617 02:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:41.617 02:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:41.617 BaseBdev2_malloc 00:17:41.617 02:23:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:41.876 true 00:17:41.876 02:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:42.134 [2024-07-11 02:23:32.382137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:42.134 [2024-07-11 02:23:32.382180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.134 [2024-07-11 02:23:32.382198] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1528b40 00:17:42.134 [2024-07-11 02:23:32.382216] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.134 [2024-07-11 02:23:32.383614] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.134 [2024-07-11 02:23:32.383642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:42.134 BaseBdev2 00:17:42.134 02:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:42.134 02:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:42.393 BaseBdev3_malloc 00:17:42.393 02:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:42.393 true 00:17:42.393 02:23:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:42.652 [2024-07-11 02:23:32.988197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:42.652 [2024-07-11 02:23:32.988238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.652 [2024-07-11 02:23:32.988257] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x152c0f0 00:17:42.653 [2024-07-11 02:23:32.988270] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.653 [2024-07-11 02:23:32.989621] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.653 [2024-07-11 02:23:32.989648] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:42.653 BaseBdev3 00:17:42.653 02:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:17:42.912 [2024-07-11 02:23:33.164691] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:42.912 [2024-07-11 02:23:33.165853] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:42.912 [2024-07-11 02:23:33.165920] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:42.912 [2024-07-11 02:23:33.166113] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x137d870 00:17:42.912 [2024-07-11 02:23:33.166125] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:42.912 [2024-07-11 02:23:33.166295] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x137b4f0 00:17:42.912 [2024-07-11 02:23:33.166432] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x137d870 00:17:42.912 [2024-07-11 02:23:33.166442] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x137d870 00:17:42.912 [2024-07-11 02:23:33.166537] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.913 02:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:42.913 02:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:42.913 02:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:42.913 02:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:42.913 02:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:42.913 02:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:42.913 02:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:42.913 02:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:42.913 02:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:42.913 02:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:42.913 02:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.913 02:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.172 02:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:43.172 "name": "raid_bdev1", 00:17:43.172 "uuid": "df07d084-a076-472d-a0fd-e9cf7ffaba58", 00:17:43.172 "strip_size_kb": 64, 00:17:43.172 "state": "online", 00:17:43.172 "raid_level": "raid0", 00:17:43.172 "superblock": true, 00:17:43.172 "num_base_bdevs": 3, 00:17:43.172 "num_base_bdevs_discovered": 3, 00:17:43.172 "num_base_bdevs_operational": 3, 00:17:43.172 "base_bdevs_list": [ 00:17:43.172 { 00:17:43.172 "name": "BaseBdev1", 00:17:43.172 "uuid": "d64e9e75-446c-5142-acc1-c2cc55814f72", 00:17:43.172 "is_configured": true, 00:17:43.172 "data_offset": 2048, 00:17:43.172 "data_size": 63488 00:17:43.172 }, 00:17:43.172 { 00:17:43.172 "name": "BaseBdev2", 00:17:43.172 "uuid": "2a72711b-6762-547b-9f25-9c15bded49a6", 00:17:43.172 "is_configured": true, 00:17:43.172 "data_offset": 2048, 00:17:43.172 "data_size": 63488 00:17:43.172 }, 00:17:43.172 { 00:17:43.172 "name": "BaseBdev3", 00:17:43.172 "uuid": 
"c4b4521e-30e8-57dc-a0f4-f02b0fd31342", 00:17:43.172 "is_configured": true, 00:17:43.172 "data_offset": 2048, 00:17:43.172 "data_size": 63488 00:17:43.172 } 00:17:43.172 ] 00:17:43.172 }' 00:17:43.172 02:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:43.172 02:23:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.740 02:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:43.740 02:23:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:43.740 [2024-07-11 02:23:34.087407] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x137d5a0 00:17:44.678 02:23:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:44.937 02:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:44.937 02:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:17:44.937 02:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:17:44.937 02:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:44.937 02:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:44.937 02:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:44.937 02:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:44.937 02:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:44.937 02:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:44.937 02:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:44.937 02:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:44.937 02:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:44.937 02:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:44.937 02:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.937 02:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.197 02:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:45.197 "name": "raid_bdev1", 00:17:45.197 "uuid": "df07d084-a076-472d-a0fd-e9cf7ffaba58", 00:17:45.197 "strip_size_kb": 64, 00:17:45.197 "state": "online", 00:17:45.197 "raid_level": "raid0", 00:17:45.197 "superblock": true, 00:17:45.197 "num_base_bdevs": 3, 00:17:45.197 "num_base_bdevs_discovered": 3, 00:17:45.197 "num_base_bdevs_operational": 3, 00:17:45.197 "base_bdevs_list": [ 00:17:45.197 { 00:17:45.197 "name": "BaseBdev1", 00:17:45.197 "uuid": "d64e9e75-446c-5142-acc1-c2cc55814f72", 00:17:45.197 "is_configured": true, 00:17:45.197 "data_offset": 2048, 00:17:45.197 "data_size": 63488 00:17:45.197 }, 00:17:45.197 { 
00:17:45.197 "name": "BaseBdev2", 00:17:45.197 "uuid": "2a72711b-6762-547b-9f25-9c15bded49a6", 00:17:45.197 "is_configured": true, 00:17:45.197 "data_offset": 2048, 00:17:45.197 "data_size": 63488 00:17:45.197 }, 00:17:45.197 { 00:17:45.197 "name": "BaseBdev3", 00:17:45.197 "uuid": "c4b4521e-30e8-57dc-a0f4-f02b0fd31342", 00:17:45.197 "is_configured": true, 00:17:45.197 "data_offset": 2048, 00:17:45.197 "data_size": 63488 00:17:45.197 } 00:17:45.197 ] 00:17:45.197 }' 00:17:45.197 02:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:45.197 02:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.765 02:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:46.024 [2024-07-11 02:23:36.191617] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:46.024 [2024-07-11 02:23:36.191652] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:46.024 [2024-07-11 02:23:36.194826] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:46.024 [2024-07-11 02:23:36.194860] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.024 [2024-07-11 02:23:36.194892] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:46.024 [2024-07-11 02:23:36.194903] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x137d870 name raid_bdev1, state offline 00:17:46.024 0 00:17:46.024 02:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1925742 00:17:46.024 02:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 1925742 ']' 00:17:46.024 02:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 1925742 00:17:46.024 02:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:17:46.024 02:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:46.024 02:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1925742 00:17:46.024 02:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:46.024 02:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:46.024 02:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1925742' 00:17:46.024 killing process with pid 1925742 00:17:46.024 02:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 1925742 00:17:46.024 [2024-07-11 02:23:36.277620] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:46.024 02:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 1925742 00:17:46.024 [2024-07-11 02:23:36.299230] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:46.284 02:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.8REBWZsdPX 00:17:46.284 02:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:46.284 02:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:46.284 02:23:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:17:46.284 02:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:17:46.284 02:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:46.284 02:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:46.284 02:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:17:46.284 00:17:46.284 real 0m5.603s 00:17:46.284 user 0m8.859s 00:17:46.284 sys 0m1.135s 00:17:46.284 02:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:46.284 02:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.284 ************************************ 00:17:46.284 END TEST raid_write_error_test 00:17:46.284 ************************************ 00:17:46.284 02:23:36 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:46.284 02:23:36 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:17:46.284 02:23:36 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:17:46.284 02:23:36 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:46.284 02:23:36 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:46.284 02:23:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:46.284 ************************************ 00:17:46.284 START TEST raid_state_function_test 00:17:46.284 ************************************ 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 false 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 
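The pass criterion at the end of each error test is extracted from the bdevperf log rather than from an RPC. A sketch of the logic visible above (tmp.8REBWZsdPX is the per-run log file that mktemp produced; the variable name mirrors the script's):

    # Column 6 of bdevperf's raid_bdev1 summary line is the failures/s rate.
    fail_per_s=$(grep -v Job /raidtest/tmp.8REBWZsdPX | grep raid_bdev1 | awk '{print $6}')
    # raid0 has no redundancy, so injected errors are expected to surface:
    # the test fails if the observed failure rate is exactly 0.00.
    [[ $fail_per_s != "0.00" ]]

A redundant level would take the other branch of has_redundancy (which returns 1 for raid0 here) and invert the expectation, since such an array should absorb the injected faults.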
00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1926542 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1926542' 00:17:46.284 Process raid pid: 1926542 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1926542 /var/tmp/spdk-raid.sock 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 1926542 ']' 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:46.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.284 02:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.284 [2024-07-11 02:23:36.677735] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:17:46.284 [2024-07-11 02:23:36.677806] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.567 [2024-07-11 02:23:36.817628] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.567 [2024-07-11 02:23:36.870718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.567 [2024-07-11 02:23:36.939162] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:46.567 [2024-07-11 02:23:36.939214] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:47.505 02:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.505 02:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:17:47.505 02:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:47.505 [2024-07-11 02:23:37.839575] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:47.505 [2024-07-11 02:23:37.839618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:47.505 [2024-07-11 02:23:37.839629] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:47.505 [2024-07-11 02:23:37.839641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:47.505 [2024-07-11 02:23:37.839650] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:47.505 [2024-07-11 02:23:37.839661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:47.505 02:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:47.505 02:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:47.505 02:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:47.505 02:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:47.505 02:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:47.505 02:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:47.505 02:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:47.505 02:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:47.505 02:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:47.505 02:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:47.505 02:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.505 02:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.765 02:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
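The state-function test works the other way around from the error tests: it registers the array first, while none of its members exist yet. A compressed sketch of the step shown in the trace above (same $RPC shorthand as earlier; illustrative only):

    # None of BaseBdev1..3 exist yet; the raid is accepted anyway and
    # parks in state "configuring" with 0 of 3 members discovered.
    $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    $RPC bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # -> configuring

Each subsequent bdev_malloc_create of a BaseBdevN is immediately claimed by the waiting array, which is why the discovered count climbs in the JSON dumps that follow while the state stays "configuring" until all three members have arrived.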
-- # raid_bdev_info='{ 00:17:47.765 "name": "Existed_Raid", 00:17:47.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.765 "strip_size_kb": 64, 00:17:47.765 "state": "configuring", 00:17:47.765 "raid_level": "concat", 00:17:47.765 "superblock": false, 00:17:47.765 "num_base_bdevs": 3, 00:17:47.765 "num_base_bdevs_discovered": 0, 00:17:47.765 "num_base_bdevs_operational": 3, 00:17:47.765 "base_bdevs_list": [ 00:17:47.765 { 00:17:47.765 "name": "BaseBdev1", 00:17:47.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.765 "is_configured": false, 00:17:47.765 "data_offset": 0, 00:17:47.765 "data_size": 0 00:17:47.765 }, 00:17:47.765 { 00:17:47.765 "name": "BaseBdev2", 00:17:47.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.765 "is_configured": false, 00:17:47.765 "data_offset": 0, 00:17:47.765 "data_size": 0 00:17:47.765 }, 00:17:47.765 { 00:17:47.765 "name": "BaseBdev3", 00:17:47.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.765 "is_configured": false, 00:17:47.765 "data_offset": 0, 00:17:47.765 "data_size": 0 00:17:47.765 } 00:17:47.765 ] 00:17:47.765 }' 00:17:47.765 02:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:47.765 02:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.333 02:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:48.591 [2024-07-11 02:23:38.809993] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:48.591 [2024-07-11 02:23:38.810024] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10db5a0 name Existed_Raid, state configuring 00:17:48.591 02:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:48.851 [2024-07-11 02:23:39.058671] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:48.851 [2024-07-11 02:23:39.058697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:48.851 [2024-07-11 02:23:39.058707] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:48.851 [2024-07-11 02:23:39.058718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:48.851 [2024-07-11 02:23:39.058727] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:48.851 [2024-07-11 02:23:39.058738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:48.851 02:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:49.110 [2024-07-11 02:23:39.317064] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:49.110 BaseBdev1 00:17:49.110 02:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:49.110 02:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:49.110 02:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 
00:17:49.110 02:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:49.110 02:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:49.110 02:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:49.110 02:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:49.110 02:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:49.368 [ 00:17:49.368 { 00:17:49.368 "name": "BaseBdev1", 00:17:49.368 "aliases": [ 00:17:49.368 "d79aefeb-f340-4577-858b-f2ff7fb2f761" 00:17:49.368 ], 00:17:49.368 "product_name": "Malloc disk", 00:17:49.368 "block_size": 512, 00:17:49.368 "num_blocks": 65536, 00:17:49.368 "uuid": "d79aefeb-f340-4577-858b-f2ff7fb2f761", 00:17:49.368 "assigned_rate_limits": { 00:17:49.368 "rw_ios_per_sec": 0, 00:17:49.368 "rw_mbytes_per_sec": 0, 00:17:49.368 "r_mbytes_per_sec": 0, 00:17:49.368 "w_mbytes_per_sec": 0 00:17:49.368 }, 00:17:49.368 "claimed": true, 00:17:49.368 "claim_type": "exclusive_write", 00:17:49.368 "zoned": false, 00:17:49.368 "supported_io_types": { 00:17:49.368 "read": true, 00:17:49.368 "write": true, 00:17:49.369 "unmap": true, 00:17:49.369 "flush": true, 00:17:49.369 "reset": true, 00:17:49.369 "nvme_admin": false, 00:17:49.369 "nvme_io": false, 00:17:49.369 "nvme_io_md": false, 00:17:49.369 "write_zeroes": true, 00:17:49.369 "zcopy": true, 00:17:49.369 "get_zone_info": false, 00:17:49.369 "zone_management": false, 00:17:49.369 "zone_append": false, 00:17:49.369 "compare": false, 00:17:49.369 "compare_and_write": false, 00:17:49.369 "abort": true, 00:17:49.369 "seek_hole": false, 00:17:49.369 "seek_data": false, 00:17:49.369 "copy": true, 00:17:49.369 "nvme_iov_md": false 00:17:49.369 }, 00:17:49.369 "memory_domains": [ 00:17:49.369 { 00:17:49.369 "dma_device_id": "system", 00:17:49.369 "dma_device_type": 1 00:17:49.369 }, 00:17:49.369 { 00:17:49.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.369 "dma_device_type": 2 00:17:49.369 } 00:17:49.369 ], 00:17:49.369 "driver_specific": {} 00:17:49.369 } 00:17:49.369 ] 00:17:49.369 02:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:49.369 02:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:49.369 02:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:49.369 02:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:49.369 02:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:49.369 02:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:49.369 02:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:49.369 02:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:49.369 02:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:49.369 02:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:17:49.369 02:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:49.369 02:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.369 02:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.628 02:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:49.628 "name": "Existed_Raid", 00:17:49.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.628 "strip_size_kb": 64, 00:17:49.628 "state": "configuring", 00:17:49.628 "raid_level": "concat", 00:17:49.628 "superblock": false, 00:17:49.628 "num_base_bdevs": 3, 00:17:49.628 "num_base_bdevs_discovered": 1, 00:17:49.628 "num_base_bdevs_operational": 3, 00:17:49.628 "base_bdevs_list": [ 00:17:49.628 { 00:17:49.628 "name": "BaseBdev1", 00:17:49.628 "uuid": "d79aefeb-f340-4577-858b-f2ff7fb2f761", 00:17:49.628 "is_configured": true, 00:17:49.628 "data_offset": 0, 00:17:49.628 "data_size": 65536 00:17:49.628 }, 00:17:49.628 { 00:17:49.628 "name": "BaseBdev2", 00:17:49.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.628 "is_configured": false, 00:17:49.628 "data_offset": 0, 00:17:49.628 "data_size": 0 00:17:49.628 }, 00:17:49.628 { 00:17:49.628 "name": "BaseBdev3", 00:17:49.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.628 "is_configured": false, 00:17:49.628 "data_offset": 0, 00:17:49.628 "data_size": 0 00:17:49.628 } 00:17:49.628 ] 00:17:49.628 }' 00:17:49.628 02:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:49.628 02:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.195 02:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:50.454 [2024-07-11 02:23:40.688701] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:50.454 [2024-07-11 02:23:40.688739] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10daed0 name Existed_Raid, state configuring 00:17:50.454 02:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:50.713 [2024-07-11 02:23:40.949424] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:50.713 [2024-07-11 02:23:40.950831] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:50.713 [2024-07-11 02:23:40.950864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:50.713 [2024-07-11 02:23:40.950874] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:50.713 [2024-07-11 02:23:40.950886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:50.713 02:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:50.713 02:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:50.713 02:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:50.713 02:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:50.713 02:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:50.713 02:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:50.713 02:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:50.713 02:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:50.714 02:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:50.714 02:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:50.714 02:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:50.714 02:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:50.714 02:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.714 02:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.972 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:50.972 "name": "Existed_Raid", 00:17:50.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.972 "strip_size_kb": 64, 00:17:50.972 "state": "configuring", 00:17:50.972 "raid_level": "concat", 00:17:50.972 "superblock": false, 00:17:50.972 "num_base_bdevs": 3, 00:17:50.972 "num_base_bdevs_discovered": 1, 00:17:50.972 "num_base_bdevs_operational": 3, 00:17:50.972 "base_bdevs_list": [ 00:17:50.972 { 00:17:50.972 "name": "BaseBdev1", 00:17:50.972 "uuid": "d79aefeb-f340-4577-858b-f2ff7fb2f761", 00:17:50.972 "is_configured": true, 00:17:50.972 "data_offset": 0, 00:17:50.972 "data_size": 65536 00:17:50.972 }, 00:17:50.972 { 00:17:50.972 "name": "BaseBdev2", 00:17:50.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.972 "is_configured": false, 00:17:50.972 "data_offset": 0, 00:17:50.972 "data_size": 0 00:17:50.972 }, 00:17:50.972 { 00:17:50.972 "name": "BaseBdev3", 00:17:50.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.972 "is_configured": false, 00:17:50.972 "data_offset": 0, 00:17:50.972 "data_size": 0 00:17:50.972 } 00:17:50.973 ] 00:17:50.973 }' 00:17:50.973 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:50.973 02:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.540 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:51.798 [2024-07-11 02:23:42.071754] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:51.798 BaseBdev2 00:17:51.798 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:51.798 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:51.798 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:51.798 02:23:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:51.798 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:51.798 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:51.799 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:52.057 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:52.316 [ 00:17:52.316 { 00:17:52.316 "name": "BaseBdev2", 00:17:52.316 "aliases": [ 00:17:52.316 "cf6c9c30-feb3-475c-b5a6-c58b38454347" 00:17:52.316 ], 00:17:52.316 "product_name": "Malloc disk", 00:17:52.316 "block_size": 512, 00:17:52.316 "num_blocks": 65536, 00:17:52.316 "uuid": "cf6c9c30-feb3-475c-b5a6-c58b38454347", 00:17:52.316 "assigned_rate_limits": { 00:17:52.316 "rw_ios_per_sec": 0, 00:17:52.316 "rw_mbytes_per_sec": 0, 00:17:52.316 "r_mbytes_per_sec": 0, 00:17:52.316 "w_mbytes_per_sec": 0 00:17:52.316 }, 00:17:52.316 "claimed": true, 00:17:52.316 "claim_type": "exclusive_write", 00:17:52.316 "zoned": false, 00:17:52.316 "supported_io_types": { 00:17:52.316 "read": true, 00:17:52.316 "write": true, 00:17:52.316 "unmap": true, 00:17:52.316 "flush": true, 00:17:52.316 "reset": true, 00:17:52.316 "nvme_admin": false, 00:17:52.316 "nvme_io": false, 00:17:52.316 "nvme_io_md": false, 00:17:52.316 "write_zeroes": true, 00:17:52.316 "zcopy": true, 00:17:52.316 "get_zone_info": false, 00:17:52.316 "zone_management": false, 00:17:52.316 "zone_append": false, 00:17:52.316 "compare": false, 00:17:52.316 "compare_and_write": false, 00:17:52.316 "abort": true, 00:17:52.316 "seek_hole": false, 00:17:52.316 "seek_data": false, 00:17:52.316 "copy": true, 00:17:52.316 "nvme_iov_md": false 00:17:52.316 }, 00:17:52.316 "memory_domains": [ 00:17:52.316 { 00:17:52.316 "dma_device_id": "system", 00:17:52.316 "dma_device_type": 1 00:17:52.316 }, 00:17:52.316 { 00:17:52.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.316 "dma_device_type": 2 00:17:52.316 } 00:17:52.316 ], 00:17:52.316 "driver_specific": {} 00:17:52.316 } 00:17:52.316 ] 00:17:52.316 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:52.316 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:52.316 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:52.316 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:52.316 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:52.316 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:52.316 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:52.316 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:52.316 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:52.316 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:52.316 
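At this point two of the three members exist, and the verification that follows re-reads the array info to confirm the intermediate state, as the JSON below shows. The equivalent one-liner, mirroring the jq filter in the trace (illustrative):

    $RPC bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
    # expected here: configuring 2/3; once BaseBdev3 registers: online 3/3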
02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:52.316 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:52.316 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:52.316 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.316 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.575 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:52.575 "name": "Existed_Raid", 00:17:52.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.575 "strip_size_kb": 64, 00:17:52.575 "state": "configuring", 00:17:52.575 "raid_level": "concat", 00:17:52.575 "superblock": false, 00:17:52.575 "num_base_bdevs": 3, 00:17:52.575 "num_base_bdevs_discovered": 2, 00:17:52.575 "num_base_bdevs_operational": 3, 00:17:52.575 "base_bdevs_list": [ 00:17:52.575 { 00:17:52.575 "name": "BaseBdev1", 00:17:52.575 "uuid": "d79aefeb-f340-4577-858b-f2ff7fb2f761", 00:17:52.575 "is_configured": true, 00:17:52.575 "data_offset": 0, 00:17:52.575 "data_size": 65536 00:17:52.575 }, 00:17:52.575 { 00:17:52.575 "name": "BaseBdev2", 00:17:52.575 "uuid": "cf6c9c30-feb3-475c-b5a6-c58b38454347", 00:17:52.575 "is_configured": true, 00:17:52.575 "data_offset": 0, 00:17:52.575 "data_size": 65536 00:17:52.575 }, 00:17:52.575 { 00:17:52.575 "name": "BaseBdev3", 00:17:52.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.575 "is_configured": false, 00:17:52.575 "data_offset": 0, 00:17:52.575 "data_size": 0 00:17:52.575 } 00:17:52.575 ] 00:17:52.575 }' 00:17:52.575 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:52.575 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.142 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:53.401 [2024-07-11 02:23:43.639220] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:53.401 [2024-07-11 02:23:43.639255] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x128dcf0 00:17:53.401 [2024-07-11 02:23:43.639269] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:53.401 [2024-07-11 02:23:43.639514] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10e2250 00:17:53.401 [2024-07-11 02:23:43.639629] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x128dcf0 00:17:53.401 [2024-07-11 02:23:43.639638] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x128dcf0 00:17:53.401 [2024-07-11 02:23:43.639804] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.401 BaseBdev3 00:17:53.401 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:53.401 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:53.401 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:53.401 02:23:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:53.401 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:53.401 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:53.401 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:53.658 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:53.917 [ 00:17:53.917 { 00:17:53.917 "name": "BaseBdev3", 00:17:53.917 "aliases": [ 00:17:53.917 "6d2bc72e-aac2-4972-b3df-a0b3c4f6a2f7" 00:17:53.917 ], 00:17:53.917 "product_name": "Malloc disk", 00:17:53.917 "block_size": 512, 00:17:53.917 "num_blocks": 65536, 00:17:53.917 "uuid": "6d2bc72e-aac2-4972-b3df-a0b3c4f6a2f7", 00:17:53.917 "assigned_rate_limits": { 00:17:53.917 "rw_ios_per_sec": 0, 00:17:53.917 "rw_mbytes_per_sec": 0, 00:17:53.917 "r_mbytes_per_sec": 0, 00:17:53.917 "w_mbytes_per_sec": 0 00:17:53.917 }, 00:17:53.917 "claimed": true, 00:17:53.917 "claim_type": "exclusive_write", 00:17:53.917 "zoned": false, 00:17:53.917 "supported_io_types": { 00:17:53.917 "read": true, 00:17:53.917 "write": true, 00:17:53.917 "unmap": true, 00:17:53.917 "flush": true, 00:17:53.917 "reset": true, 00:17:53.917 "nvme_admin": false, 00:17:53.917 "nvme_io": false, 00:17:53.917 "nvme_io_md": false, 00:17:53.917 "write_zeroes": true, 00:17:53.917 "zcopy": true, 00:17:53.917 "get_zone_info": false, 00:17:53.917 "zone_management": false, 00:17:53.917 "zone_append": false, 00:17:53.917 "compare": false, 00:17:53.917 "compare_and_write": false, 00:17:53.917 "abort": true, 00:17:53.917 "seek_hole": false, 00:17:53.917 "seek_data": false, 00:17:53.917 "copy": true, 00:17:53.917 "nvme_iov_md": false 00:17:53.917 }, 00:17:53.917 "memory_domains": [ 00:17:53.917 { 00:17:53.917 "dma_device_id": "system", 00:17:53.917 "dma_device_type": 1 00:17:53.917 }, 00:17:53.917 { 00:17:53.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.917 "dma_device_type": 2 00:17:53.917 } 00:17:53.917 ], 00:17:53.917 "driver_specific": {} 00:17:53.917 } 00:17:53.917 ] 00:17:53.917 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:53.917 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:53.917 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:53.917 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:53.917 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:53.917 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:53.917 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:53.917 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:53.917 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:53.917 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:53.917 02:23:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:53.917 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:53.917 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:53.917 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.917 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.176 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:54.176 "name": "Existed_Raid", 00:17:54.176 "uuid": "178cf7f3-1a0c-46bd-8187-7250f9d2d823", 00:17:54.176 "strip_size_kb": 64, 00:17:54.176 "state": "online", 00:17:54.176 "raid_level": "concat", 00:17:54.176 "superblock": false, 00:17:54.176 "num_base_bdevs": 3, 00:17:54.176 "num_base_bdevs_discovered": 3, 00:17:54.176 "num_base_bdevs_operational": 3, 00:17:54.176 "base_bdevs_list": [ 00:17:54.176 { 00:17:54.176 "name": "BaseBdev1", 00:17:54.176 "uuid": "d79aefeb-f340-4577-858b-f2ff7fb2f761", 00:17:54.176 "is_configured": true, 00:17:54.176 "data_offset": 0, 00:17:54.176 "data_size": 65536 00:17:54.176 }, 00:17:54.176 { 00:17:54.176 "name": "BaseBdev2", 00:17:54.176 "uuid": "cf6c9c30-feb3-475c-b5a6-c58b38454347", 00:17:54.176 "is_configured": true, 00:17:54.176 "data_offset": 0, 00:17:54.176 "data_size": 65536 00:17:54.176 }, 00:17:54.176 { 00:17:54.176 "name": "BaseBdev3", 00:17:54.176 "uuid": "6d2bc72e-aac2-4972-b3df-a0b3c4f6a2f7", 00:17:54.176 "is_configured": true, 00:17:54.176 "data_offset": 0, 00:17:54.176 "data_size": 65536 00:17:54.176 } 00:17:54.176 ] 00:17:54.176 }' 00:17:54.176 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:54.176 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.745 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:54.745 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:54.745 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:54.745 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:54.745 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:54.745 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:54.745 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:54.745 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:55.004 [2024-07-11 02:23:45.195654] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.004 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:55.004 "name": "Existed_Raid", 00:17:55.004 "aliases": [ 00:17:55.004 "178cf7f3-1a0c-46bd-8187-7250f9d2d823" 00:17:55.004 ], 00:17:55.004 "product_name": "Raid Volume", 00:17:55.004 "block_size": 512, 00:17:55.004 "num_blocks": 196608, 00:17:55.004 "uuid": "178cf7f3-1a0c-46bd-8187-7250f9d2d823", 
00:17:55.004 "assigned_rate_limits": { 00:17:55.004 "rw_ios_per_sec": 0, 00:17:55.004 "rw_mbytes_per_sec": 0, 00:17:55.004 "r_mbytes_per_sec": 0, 00:17:55.004 "w_mbytes_per_sec": 0 00:17:55.004 }, 00:17:55.004 "claimed": false, 00:17:55.004 "zoned": false, 00:17:55.004 "supported_io_types": { 00:17:55.004 "read": true, 00:17:55.004 "write": true, 00:17:55.004 "unmap": true, 00:17:55.004 "flush": true, 00:17:55.004 "reset": true, 00:17:55.004 "nvme_admin": false, 00:17:55.004 "nvme_io": false, 00:17:55.004 "nvme_io_md": false, 00:17:55.004 "write_zeroes": true, 00:17:55.004 "zcopy": false, 00:17:55.004 "get_zone_info": false, 00:17:55.004 "zone_management": false, 00:17:55.004 "zone_append": false, 00:17:55.004 "compare": false, 00:17:55.004 "compare_and_write": false, 00:17:55.004 "abort": false, 00:17:55.004 "seek_hole": false, 00:17:55.004 "seek_data": false, 00:17:55.004 "copy": false, 00:17:55.004 "nvme_iov_md": false 00:17:55.004 }, 00:17:55.004 "memory_domains": [ 00:17:55.004 { 00:17:55.004 "dma_device_id": "system", 00:17:55.004 "dma_device_type": 1 00:17:55.004 }, 00:17:55.004 { 00:17:55.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.004 "dma_device_type": 2 00:17:55.004 }, 00:17:55.004 { 00:17:55.004 "dma_device_id": "system", 00:17:55.004 "dma_device_type": 1 00:17:55.004 }, 00:17:55.004 { 00:17:55.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.004 "dma_device_type": 2 00:17:55.004 }, 00:17:55.004 { 00:17:55.004 "dma_device_id": "system", 00:17:55.004 "dma_device_type": 1 00:17:55.004 }, 00:17:55.004 { 00:17:55.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.004 "dma_device_type": 2 00:17:55.004 } 00:17:55.004 ], 00:17:55.004 "driver_specific": { 00:17:55.004 "raid": { 00:17:55.004 "uuid": "178cf7f3-1a0c-46bd-8187-7250f9d2d823", 00:17:55.004 "strip_size_kb": 64, 00:17:55.004 "state": "online", 00:17:55.004 "raid_level": "concat", 00:17:55.004 "superblock": false, 00:17:55.004 "num_base_bdevs": 3, 00:17:55.004 "num_base_bdevs_discovered": 3, 00:17:55.005 "num_base_bdevs_operational": 3, 00:17:55.005 "base_bdevs_list": [ 00:17:55.005 { 00:17:55.005 "name": "BaseBdev1", 00:17:55.005 "uuid": "d79aefeb-f340-4577-858b-f2ff7fb2f761", 00:17:55.005 "is_configured": true, 00:17:55.005 "data_offset": 0, 00:17:55.005 "data_size": 65536 00:17:55.005 }, 00:17:55.005 { 00:17:55.005 "name": "BaseBdev2", 00:17:55.005 "uuid": "cf6c9c30-feb3-475c-b5a6-c58b38454347", 00:17:55.005 "is_configured": true, 00:17:55.005 "data_offset": 0, 00:17:55.005 "data_size": 65536 00:17:55.005 }, 00:17:55.005 { 00:17:55.005 "name": "BaseBdev3", 00:17:55.005 "uuid": "6d2bc72e-aac2-4972-b3df-a0b3c4f6a2f7", 00:17:55.005 "is_configured": true, 00:17:55.005 "data_offset": 0, 00:17:55.005 "data_size": 65536 00:17:55.005 } 00:17:55.005 ] 00:17:55.005 } 00:17:55.005 } 00:17:55.005 }' 00:17:55.005 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:55.005 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:55.005 BaseBdev2 00:17:55.005 BaseBdev3' 00:17:55.005 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:55.005 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:55.005 02:23:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:55.264 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:55.264 "name": "BaseBdev1", 00:17:55.264 "aliases": [ 00:17:55.264 "d79aefeb-f340-4577-858b-f2ff7fb2f761" 00:17:55.264 ], 00:17:55.264 "product_name": "Malloc disk", 00:17:55.264 "block_size": 512, 00:17:55.264 "num_blocks": 65536, 00:17:55.264 "uuid": "d79aefeb-f340-4577-858b-f2ff7fb2f761", 00:17:55.264 "assigned_rate_limits": { 00:17:55.264 "rw_ios_per_sec": 0, 00:17:55.264 "rw_mbytes_per_sec": 0, 00:17:55.264 "r_mbytes_per_sec": 0, 00:17:55.264 "w_mbytes_per_sec": 0 00:17:55.264 }, 00:17:55.264 "claimed": true, 00:17:55.264 "claim_type": "exclusive_write", 00:17:55.264 "zoned": false, 00:17:55.264 "supported_io_types": { 00:17:55.264 "read": true, 00:17:55.264 "write": true, 00:17:55.264 "unmap": true, 00:17:55.264 "flush": true, 00:17:55.264 "reset": true, 00:17:55.264 "nvme_admin": false, 00:17:55.264 "nvme_io": false, 00:17:55.264 "nvme_io_md": false, 00:17:55.264 "write_zeroes": true, 00:17:55.264 "zcopy": true, 00:17:55.264 "get_zone_info": false, 00:17:55.264 "zone_management": false, 00:17:55.264 "zone_append": false, 00:17:55.264 "compare": false, 00:17:55.264 "compare_and_write": false, 00:17:55.264 "abort": true, 00:17:55.264 "seek_hole": false, 00:17:55.264 "seek_data": false, 00:17:55.264 "copy": true, 00:17:55.264 "nvme_iov_md": false 00:17:55.264 }, 00:17:55.264 "memory_domains": [ 00:17:55.264 { 00:17:55.264 "dma_device_id": "system", 00:17:55.264 "dma_device_type": 1 00:17:55.264 }, 00:17:55.264 { 00:17:55.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.264 "dma_device_type": 2 00:17:55.264 } 00:17:55.264 ], 00:17:55.264 "driver_specific": {} 00:17:55.264 }' 00:17:55.264 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:55.264 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:55.264 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:55.264 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:55.522 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:55.522 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:55.522 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:55.522 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:55.522 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:55.522 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:55.781 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:55.781 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:55.781 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:55.781 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:55.781 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:56.348 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:56.348 "name": "BaseBdev2", 
00:17:56.348 "aliases": [ 00:17:56.348 "cf6c9c30-feb3-475c-b5a6-c58b38454347" 00:17:56.348 ], 00:17:56.348 "product_name": "Malloc disk", 00:17:56.348 "block_size": 512, 00:17:56.348 "num_blocks": 65536, 00:17:56.348 "uuid": "cf6c9c30-feb3-475c-b5a6-c58b38454347", 00:17:56.348 "assigned_rate_limits": { 00:17:56.348 "rw_ios_per_sec": 0, 00:17:56.348 "rw_mbytes_per_sec": 0, 00:17:56.348 "r_mbytes_per_sec": 0, 00:17:56.348 "w_mbytes_per_sec": 0 00:17:56.348 }, 00:17:56.348 "claimed": true, 00:17:56.348 "claim_type": "exclusive_write", 00:17:56.348 "zoned": false, 00:17:56.348 "supported_io_types": { 00:17:56.348 "read": true, 00:17:56.348 "write": true, 00:17:56.348 "unmap": true, 00:17:56.348 "flush": true, 00:17:56.348 "reset": true, 00:17:56.348 "nvme_admin": false, 00:17:56.348 "nvme_io": false, 00:17:56.348 "nvme_io_md": false, 00:17:56.348 "write_zeroes": true, 00:17:56.348 "zcopy": true, 00:17:56.348 "get_zone_info": false, 00:17:56.348 "zone_management": false, 00:17:56.348 "zone_append": false, 00:17:56.349 "compare": false, 00:17:56.349 "compare_and_write": false, 00:17:56.349 "abort": true, 00:17:56.349 "seek_hole": false, 00:17:56.349 "seek_data": false, 00:17:56.349 "copy": true, 00:17:56.349 "nvme_iov_md": false 00:17:56.349 }, 00:17:56.349 "memory_domains": [ 00:17:56.349 { 00:17:56.349 "dma_device_id": "system", 00:17:56.349 "dma_device_type": 1 00:17:56.349 }, 00:17:56.349 { 00:17:56.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.349 "dma_device_type": 2 00:17:56.349 } 00:17:56.349 ], 00:17:56.349 "driver_specific": {} 00:17:56.349 }' 00:17:56.349 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:56.349 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:56.349 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:56.349 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:56.349 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:56.349 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:56.349 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:56.607 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:56.607 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:56.607 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:56.607 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:56.607 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:56.607 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:56.607 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:56.607 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:56.866 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:56.866 "name": "BaseBdev3", 00:17:56.866 "aliases": [ 00:17:56.866 "6d2bc72e-aac2-4972-b3df-a0b3c4f6a2f7" 00:17:56.866 ], 00:17:56.866 "product_name": "Malloc disk", 00:17:56.866 "block_size": 512, 
00:17:56.866 "num_blocks": 65536, 00:17:56.866 "uuid": "6d2bc72e-aac2-4972-b3df-a0b3c4f6a2f7", 00:17:56.866 "assigned_rate_limits": { 00:17:56.866 "rw_ios_per_sec": 0, 00:17:56.866 "rw_mbytes_per_sec": 0, 00:17:56.866 "r_mbytes_per_sec": 0, 00:17:56.866 "w_mbytes_per_sec": 0 00:17:56.866 }, 00:17:56.866 "claimed": true, 00:17:56.866 "claim_type": "exclusive_write", 00:17:56.866 "zoned": false, 00:17:56.866 "supported_io_types": { 00:17:56.867 "read": true, 00:17:56.867 "write": true, 00:17:56.867 "unmap": true, 00:17:56.867 "flush": true, 00:17:56.867 "reset": true, 00:17:56.867 "nvme_admin": false, 00:17:56.867 "nvme_io": false, 00:17:56.867 "nvme_io_md": false, 00:17:56.867 "write_zeroes": true, 00:17:56.867 "zcopy": true, 00:17:56.867 "get_zone_info": false, 00:17:56.867 "zone_management": false, 00:17:56.867 "zone_append": false, 00:17:56.867 "compare": false, 00:17:56.867 "compare_and_write": false, 00:17:56.867 "abort": true, 00:17:56.867 "seek_hole": false, 00:17:56.867 "seek_data": false, 00:17:56.867 "copy": true, 00:17:56.867 "nvme_iov_md": false 00:17:56.867 }, 00:17:56.867 "memory_domains": [ 00:17:56.867 { 00:17:56.867 "dma_device_id": "system", 00:17:56.867 "dma_device_type": 1 00:17:56.867 }, 00:17:56.867 { 00:17:56.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.867 "dma_device_type": 2 00:17:56.867 } 00:17:56.867 ], 00:17:56.867 "driver_specific": {} 00:17:56.867 }' 00:17:56.867 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:57.125 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:57.125 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:57.125 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:57.125 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:57.125 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:57.125 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:57.125 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:57.384 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:57.384 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:57.384 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:57.384 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:57.384 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:57.643 [2024-07-11 02:23:47.918643] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:57.643 [2024-07-11 02:23:47.918670] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:57.643 [2024-07-11 02:23:47.918711] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:57.643 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:57.643 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:17:57.643 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:57.643 02:23:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:57.643 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:57.643 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:17:57.643 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:57.643 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:57.643 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:57.643 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:57.643 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:57.643 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:57.643 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:57.643 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:57.643 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:57.643 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.644 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.902 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:57.902 "name": "Existed_Raid", 00:17:57.902 "uuid": "178cf7f3-1a0c-46bd-8187-7250f9d2d823", 00:17:57.902 "strip_size_kb": 64, 00:17:57.902 "state": "offline", 00:17:57.902 "raid_level": "concat", 00:17:57.902 "superblock": false, 00:17:57.902 "num_base_bdevs": 3, 00:17:57.902 "num_base_bdevs_discovered": 2, 00:17:57.902 "num_base_bdevs_operational": 2, 00:17:57.902 "base_bdevs_list": [ 00:17:57.902 { 00:17:57.902 "name": null, 00:17:57.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.902 "is_configured": false, 00:17:57.902 "data_offset": 0, 00:17:57.902 "data_size": 65536 00:17:57.902 }, 00:17:57.902 { 00:17:57.902 "name": "BaseBdev2", 00:17:57.902 "uuid": "cf6c9c30-feb3-475c-b5a6-c58b38454347", 00:17:57.902 "is_configured": true, 00:17:57.902 "data_offset": 0, 00:17:57.902 "data_size": 65536 00:17:57.902 }, 00:17:57.902 { 00:17:57.902 "name": "BaseBdev3", 00:17:57.902 "uuid": "6d2bc72e-aac2-4972-b3df-a0b3c4f6a2f7", 00:17:57.902 "is_configured": true, 00:17:57.902 "data_offset": 0, 00:17:57.902 "data_size": 65536 00:17:57.902 } 00:17:57.902 ] 00:17:57.902 }' 00:17:57.902 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:57.902 02:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.933 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:58.933 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:58.934 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.934 02:23:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:59.192 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:59.192 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:59.192 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:59.192 [2024-07-11 02:23:49.592899] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:59.450 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:59.450 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:59.450 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.450 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:59.709 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:59.709 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:59.709 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:59.967 [2024-07-11 02:23:50.150522] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:59.967 [2024-07-11 02:23:50.150564] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x128dcf0 name Existed_Raid, state offline 00:17:59.967 02:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:59.967 02:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:59.967 02:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.967 02:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:00.226 02:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:00.226 02:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:00.226 02:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:18:00.226 02:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:00.226 02:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:00.226 02:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:00.485 BaseBdev2 00:18:00.485 02:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:00.485 02:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:00.485 02:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:00.485 02:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 
00:18:00.485 02:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:00.485 02:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:00.485 02:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:00.744 02:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:00.744 [ 00:18:00.744 { 00:18:00.744 "name": "BaseBdev2", 00:18:00.744 "aliases": [ 00:18:00.744 "0fd2f3ed-3e66-4b51-aebd-160e318f384a" 00:18:00.744 ], 00:18:00.744 "product_name": "Malloc disk", 00:18:00.744 "block_size": 512, 00:18:00.744 "num_blocks": 65536, 00:18:00.744 "uuid": "0fd2f3ed-3e66-4b51-aebd-160e318f384a", 00:18:00.744 "assigned_rate_limits": { 00:18:00.744 "rw_ios_per_sec": 0, 00:18:00.744 "rw_mbytes_per_sec": 0, 00:18:00.744 "r_mbytes_per_sec": 0, 00:18:00.744 "w_mbytes_per_sec": 0 00:18:00.744 }, 00:18:00.744 "claimed": false, 00:18:00.744 "zoned": false, 00:18:00.744 "supported_io_types": { 00:18:00.744 "read": true, 00:18:00.744 "write": true, 00:18:00.744 "unmap": true, 00:18:00.744 "flush": true, 00:18:00.744 "reset": true, 00:18:00.744 "nvme_admin": false, 00:18:00.744 "nvme_io": false, 00:18:00.744 "nvme_io_md": false, 00:18:00.744 "write_zeroes": true, 00:18:00.744 "zcopy": true, 00:18:00.744 "get_zone_info": false, 00:18:00.744 "zone_management": false, 00:18:00.744 "zone_append": false, 00:18:00.744 "compare": false, 00:18:00.744 "compare_and_write": false, 00:18:00.744 "abort": true, 00:18:00.744 "seek_hole": false, 00:18:00.744 "seek_data": false, 00:18:00.744 "copy": true, 00:18:00.744 "nvme_iov_md": false 00:18:00.744 }, 00:18:00.744 "memory_domains": [ 00:18:00.744 { 00:18:00.744 "dma_device_id": "system", 00:18:00.744 "dma_device_type": 1 00:18:00.744 }, 00:18:00.744 { 00:18:00.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.744 "dma_device_type": 2 00:18:00.744 } 00:18:00.744 ], 00:18:00.744 "driver_specific": {} 00:18:00.744 } 00:18:00.744 ] 00:18:00.744 02:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:00.744 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:00.744 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:00.744 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:01.003 BaseBdev3 00:18:01.003 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:01.003 02:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:01.003 02:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:01.003 02:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:01.003 02:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:01.003 02:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:01.003 02:23:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:01.262 02:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:01.521 [ 00:18:01.521 { 00:18:01.521 "name": "BaseBdev3", 00:18:01.521 "aliases": [ 00:18:01.521 "31e4ee89-70ec-48f4-b62f-ceab737c0d94" 00:18:01.521 ], 00:18:01.521 "product_name": "Malloc disk", 00:18:01.521 "block_size": 512, 00:18:01.521 "num_blocks": 65536, 00:18:01.521 "uuid": "31e4ee89-70ec-48f4-b62f-ceab737c0d94", 00:18:01.521 "assigned_rate_limits": { 00:18:01.521 "rw_ios_per_sec": 0, 00:18:01.521 "rw_mbytes_per_sec": 0, 00:18:01.521 "r_mbytes_per_sec": 0, 00:18:01.521 "w_mbytes_per_sec": 0 00:18:01.521 }, 00:18:01.521 "claimed": false, 00:18:01.521 "zoned": false, 00:18:01.521 "supported_io_types": { 00:18:01.521 "read": true, 00:18:01.521 "write": true, 00:18:01.521 "unmap": true, 00:18:01.521 "flush": true, 00:18:01.521 "reset": true, 00:18:01.521 "nvme_admin": false, 00:18:01.521 "nvme_io": false, 00:18:01.521 "nvme_io_md": false, 00:18:01.521 "write_zeroes": true, 00:18:01.521 "zcopy": true, 00:18:01.521 "get_zone_info": false, 00:18:01.521 "zone_management": false, 00:18:01.521 "zone_append": false, 00:18:01.521 "compare": false, 00:18:01.521 "compare_and_write": false, 00:18:01.521 "abort": true, 00:18:01.521 "seek_hole": false, 00:18:01.521 "seek_data": false, 00:18:01.521 "copy": true, 00:18:01.521 "nvme_iov_md": false 00:18:01.521 }, 00:18:01.521 "memory_domains": [ 00:18:01.521 { 00:18:01.521 "dma_device_id": "system", 00:18:01.521 "dma_device_type": 1 00:18:01.521 }, 00:18:01.521 { 00:18:01.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.521 "dma_device_type": 2 00:18:01.521 } 00:18:01.521 ], 00:18:01.521 "driver_specific": {} 00:18:01.521 } 00:18:01.521 ] 00:18:01.521 02:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:01.521 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:01.521 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:01.521 02:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:01.779 [2024-07-11 02:23:52.119900] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:01.779 [2024-07-11 02:23:52.119949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:01.779 [2024-07-11 02:23:52.119968] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:01.779 [2024-07-11 02:23:52.121264] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:01.779 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:01.779 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:01.779 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:01.779 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=concat 00:18:01.779 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:01.779 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:01.779 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:01.779 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:01.779 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:01.779 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:01.779 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.779 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.038 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:02.038 "name": "Existed_Raid", 00:18:02.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.038 "strip_size_kb": 64, 00:18:02.038 "state": "configuring", 00:18:02.038 "raid_level": "concat", 00:18:02.038 "superblock": false, 00:18:02.038 "num_base_bdevs": 3, 00:18:02.038 "num_base_bdevs_discovered": 2, 00:18:02.038 "num_base_bdevs_operational": 3, 00:18:02.038 "base_bdevs_list": [ 00:18:02.038 { 00:18:02.038 "name": "BaseBdev1", 00:18:02.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.038 "is_configured": false, 00:18:02.038 "data_offset": 0, 00:18:02.038 "data_size": 0 00:18:02.038 }, 00:18:02.038 { 00:18:02.038 "name": "BaseBdev2", 00:18:02.038 "uuid": "0fd2f3ed-3e66-4b51-aebd-160e318f384a", 00:18:02.038 "is_configured": true, 00:18:02.038 "data_offset": 0, 00:18:02.038 "data_size": 65536 00:18:02.038 }, 00:18:02.038 { 00:18:02.038 "name": "BaseBdev3", 00:18:02.038 "uuid": "31e4ee89-70ec-48f4-b62f-ceab737c0d94", 00:18:02.038 "is_configured": true, 00:18:02.038 "data_offset": 0, 00:18:02.038 "data_size": 65536 00:18:02.038 } 00:18:02.038 ] 00:18:02.038 }' 00:18:02.038 02:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:02.038 02:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.976 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:03.235 [2024-07-11 02:23:53.479497] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:03.235 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:03.235 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:03.235 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:03.235 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:03.235 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:03.235 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:03.235 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 
-- # local raid_bdev_info 00:18:03.235 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:03.235 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:03.235 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:03.235 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.235 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.495 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:03.495 "name": "Existed_Raid", 00:18:03.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.495 "strip_size_kb": 64, 00:18:03.495 "state": "configuring", 00:18:03.495 "raid_level": "concat", 00:18:03.495 "superblock": false, 00:18:03.495 "num_base_bdevs": 3, 00:18:03.495 "num_base_bdevs_discovered": 1, 00:18:03.495 "num_base_bdevs_operational": 3, 00:18:03.495 "base_bdevs_list": [ 00:18:03.495 { 00:18:03.495 "name": "BaseBdev1", 00:18:03.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.495 "is_configured": false, 00:18:03.495 "data_offset": 0, 00:18:03.495 "data_size": 0 00:18:03.495 }, 00:18:03.495 { 00:18:03.495 "name": null, 00:18:03.495 "uuid": "0fd2f3ed-3e66-4b51-aebd-160e318f384a", 00:18:03.495 "is_configured": false, 00:18:03.495 "data_offset": 0, 00:18:03.495 "data_size": 65536 00:18:03.495 }, 00:18:03.495 { 00:18:03.495 "name": "BaseBdev3", 00:18:03.495 "uuid": "31e4ee89-70ec-48f4-b62f-ceab737c0d94", 00:18:03.495 "is_configured": true, 00:18:03.495 "data_offset": 0, 00:18:03.495 "data_size": 65536 00:18:03.495 } 00:18:03.495 ] 00:18:03.495 }' 00:18:03.495 02:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:03.495 02:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.064 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.064 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:04.324 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:04.324 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:04.583 [2024-07-11 02:23:54.835489] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:04.583 BaseBdev1 00:18:04.583 02:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:04.583 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:04.583 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:04.583 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:04.583 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:04.583 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:18:04.583 02:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:04.842 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:05.102 [ 00:18:05.102 { 00:18:05.102 "name": "BaseBdev1", 00:18:05.102 "aliases": [ 00:18:05.102 "064af4de-9ee5-4114-a0d9-a4037c925467" 00:18:05.102 ], 00:18:05.102 "product_name": "Malloc disk", 00:18:05.102 "block_size": 512, 00:18:05.102 "num_blocks": 65536, 00:18:05.103 "uuid": "064af4de-9ee5-4114-a0d9-a4037c925467", 00:18:05.103 "assigned_rate_limits": { 00:18:05.103 "rw_ios_per_sec": 0, 00:18:05.103 "rw_mbytes_per_sec": 0, 00:18:05.103 "r_mbytes_per_sec": 0, 00:18:05.103 "w_mbytes_per_sec": 0 00:18:05.103 }, 00:18:05.103 "claimed": true, 00:18:05.103 "claim_type": "exclusive_write", 00:18:05.103 "zoned": false, 00:18:05.103 "supported_io_types": { 00:18:05.103 "read": true, 00:18:05.103 "write": true, 00:18:05.103 "unmap": true, 00:18:05.103 "flush": true, 00:18:05.103 "reset": true, 00:18:05.103 "nvme_admin": false, 00:18:05.103 "nvme_io": false, 00:18:05.103 "nvme_io_md": false, 00:18:05.103 "write_zeroes": true, 00:18:05.103 "zcopy": true, 00:18:05.103 "get_zone_info": false, 00:18:05.103 "zone_management": false, 00:18:05.103 "zone_append": false, 00:18:05.103 "compare": false, 00:18:05.103 "compare_and_write": false, 00:18:05.103 "abort": true, 00:18:05.103 "seek_hole": false, 00:18:05.103 "seek_data": false, 00:18:05.103 "copy": true, 00:18:05.103 "nvme_iov_md": false 00:18:05.103 }, 00:18:05.103 "memory_domains": [ 00:18:05.103 { 00:18:05.103 "dma_device_id": "system", 00:18:05.103 "dma_device_type": 1 00:18:05.103 }, 00:18:05.103 { 00:18:05.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.103 "dma_device_type": 2 00:18:05.103 } 00:18:05.103 ], 00:18:05.103 "driver_specific": {} 00:18:05.103 } 00:18:05.103 ] 00:18:05.103 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:05.103 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:05.103 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:05.103 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:05.103 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:05.103 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:05.103 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:05.103 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:05.103 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:05.103 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:05.103 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:05.103 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.103 02:23:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.362 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:05.362 "name": "Existed_Raid", 00:18:05.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.362 "strip_size_kb": 64, 00:18:05.362 "state": "configuring", 00:18:05.362 "raid_level": "concat", 00:18:05.362 "superblock": false, 00:18:05.362 "num_base_bdevs": 3, 00:18:05.362 "num_base_bdevs_discovered": 2, 00:18:05.362 "num_base_bdevs_operational": 3, 00:18:05.362 "base_bdevs_list": [ 00:18:05.362 { 00:18:05.362 "name": "BaseBdev1", 00:18:05.362 "uuid": "064af4de-9ee5-4114-a0d9-a4037c925467", 00:18:05.362 "is_configured": true, 00:18:05.362 "data_offset": 0, 00:18:05.362 "data_size": 65536 00:18:05.362 }, 00:18:05.362 { 00:18:05.362 "name": null, 00:18:05.362 "uuid": "0fd2f3ed-3e66-4b51-aebd-160e318f384a", 00:18:05.362 "is_configured": false, 00:18:05.362 "data_offset": 0, 00:18:05.362 "data_size": 65536 00:18:05.362 }, 00:18:05.362 { 00:18:05.362 "name": "BaseBdev3", 00:18:05.362 "uuid": "31e4ee89-70ec-48f4-b62f-ceab737c0d94", 00:18:05.362 "is_configured": true, 00:18:05.362 "data_offset": 0, 00:18:05.362 "data_size": 65536 00:18:05.362 } 00:18:05.362 ] 00:18:05.362 }' 00:18:05.362 02:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:05.362 02:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.929 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:05.929 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.188 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:06.188 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:06.448 [2024-07-11 02:23:56.656328] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:06.448 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:06.448 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:06.448 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:06.448 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:06.448 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:06.448 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:06.448 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:06.448 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:06.448 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:06.448 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:06.448 02:23:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.448 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.707 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:06.707 "name": "Existed_Raid", 00:18:06.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.707 "strip_size_kb": 64, 00:18:06.707 "state": "configuring", 00:18:06.707 "raid_level": "concat", 00:18:06.707 "superblock": false, 00:18:06.707 "num_base_bdevs": 3, 00:18:06.707 "num_base_bdevs_discovered": 1, 00:18:06.707 "num_base_bdevs_operational": 3, 00:18:06.707 "base_bdevs_list": [ 00:18:06.707 { 00:18:06.707 "name": "BaseBdev1", 00:18:06.707 "uuid": "064af4de-9ee5-4114-a0d9-a4037c925467", 00:18:06.707 "is_configured": true, 00:18:06.707 "data_offset": 0, 00:18:06.707 "data_size": 65536 00:18:06.707 }, 00:18:06.707 { 00:18:06.707 "name": null, 00:18:06.707 "uuid": "0fd2f3ed-3e66-4b51-aebd-160e318f384a", 00:18:06.707 "is_configured": false, 00:18:06.707 "data_offset": 0, 00:18:06.707 "data_size": 65536 00:18:06.707 }, 00:18:06.707 { 00:18:06.707 "name": null, 00:18:06.707 "uuid": "31e4ee89-70ec-48f4-b62f-ceab737c0d94", 00:18:06.707 "is_configured": false, 00:18:06.707 "data_offset": 0, 00:18:06.707 "data_size": 65536 00:18:06.707 } 00:18:06.707 ] 00:18:06.707 }' 00:18:06.707 02:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:06.707 02:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.274 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.274 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:07.533 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:07.533 02:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:07.792 [2024-07-11 02:23:58.036001] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:07.792 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:07.792 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:07.792 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:07.792 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:07.792 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:07.792 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:07.792 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:07.792 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:07.792 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:18:07.792 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:07.792 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.792 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.050 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:08.050 "name": "Existed_Raid", 00:18:08.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.050 "strip_size_kb": 64, 00:18:08.050 "state": "configuring", 00:18:08.050 "raid_level": "concat", 00:18:08.050 "superblock": false, 00:18:08.050 "num_base_bdevs": 3, 00:18:08.050 "num_base_bdevs_discovered": 2, 00:18:08.050 "num_base_bdevs_operational": 3, 00:18:08.050 "base_bdevs_list": [ 00:18:08.050 { 00:18:08.050 "name": "BaseBdev1", 00:18:08.050 "uuid": "064af4de-9ee5-4114-a0d9-a4037c925467", 00:18:08.050 "is_configured": true, 00:18:08.050 "data_offset": 0, 00:18:08.050 "data_size": 65536 00:18:08.050 }, 00:18:08.050 { 00:18:08.050 "name": null, 00:18:08.050 "uuid": "0fd2f3ed-3e66-4b51-aebd-160e318f384a", 00:18:08.050 "is_configured": false, 00:18:08.050 "data_offset": 0, 00:18:08.050 "data_size": 65536 00:18:08.050 }, 00:18:08.050 { 00:18:08.050 "name": "BaseBdev3", 00:18:08.050 "uuid": "31e4ee89-70ec-48f4-b62f-ceab737c0d94", 00:18:08.050 "is_configured": true, 00:18:08.050 "data_offset": 0, 00:18:08.050 "data_size": 65536 00:18:08.050 } 00:18:08.050 ] 00:18:08.050 }' 00:18:08.050 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:08.050 02:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.616 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:08.616 02:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.874 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:08.874 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:09.132 [2024-07-11 02:23:59.379608] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:09.132 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:09.132 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:09.132 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:09.132 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:09.132 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:09.132 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:09.132 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:09.132 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:18:09.132 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:09.132 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:09.132 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.132 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.390 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:09.390 "name": "Existed_Raid", 00:18:09.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.390 "strip_size_kb": 64, 00:18:09.390 "state": "configuring", 00:18:09.390 "raid_level": "concat", 00:18:09.390 "superblock": false, 00:18:09.390 "num_base_bdevs": 3, 00:18:09.390 "num_base_bdevs_discovered": 1, 00:18:09.390 "num_base_bdevs_operational": 3, 00:18:09.390 "base_bdevs_list": [ 00:18:09.390 { 00:18:09.390 "name": null, 00:18:09.390 "uuid": "064af4de-9ee5-4114-a0d9-a4037c925467", 00:18:09.390 "is_configured": false, 00:18:09.390 "data_offset": 0, 00:18:09.390 "data_size": 65536 00:18:09.390 }, 00:18:09.390 { 00:18:09.390 "name": null, 00:18:09.390 "uuid": "0fd2f3ed-3e66-4b51-aebd-160e318f384a", 00:18:09.390 "is_configured": false, 00:18:09.390 "data_offset": 0, 00:18:09.390 "data_size": 65536 00:18:09.390 }, 00:18:09.390 { 00:18:09.390 "name": "BaseBdev3", 00:18:09.390 "uuid": "31e4ee89-70ec-48f4-b62f-ceab737c0d94", 00:18:09.390 "is_configured": true, 00:18:09.390 "data_offset": 0, 00:18:09.390 "data_size": 65536 00:18:09.390 } 00:18:09.390 ] 00:18:09.390 }' 00:18:09.390 02:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:09.390 02:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.958 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.958 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:10.524 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:10.524 02:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:10.782 [2024-07-11 02:24:01.048156] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:10.782 02:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:10.782 02:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:10.782 02:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:10.782 02:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:10.782 02:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:10.782 02:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:10.782 02:24:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:10.782 02:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:10.782 02:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:10.782 02:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:10.782 02:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.782 02:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.041 02:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:11.041 "name": "Existed_Raid", 00:18:11.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.041 "strip_size_kb": 64, 00:18:11.041 "state": "configuring", 00:18:11.041 "raid_level": "concat", 00:18:11.041 "superblock": false, 00:18:11.041 "num_base_bdevs": 3, 00:18:11.041 "num_base_bdevs_discovered": 2, 00:18:11.041 "num_base_bdevs_operational": 3, 00:18:11.041 "base_bdevs_list": [ 00:18:11.041 { 00:18:11.041 "name": null, 00:18:11.041 "uuid": "064af4de-9ee5-4114-a0d9-a4037c925467", 00:18:11.041 "is_configured": false, 00:18:11.041 "data_offset": 0, 00:18:11.041 "data_size": 65536 00:18:11.041 }, 00:18:11.041 { 00:18:11.041 "name": "BaseBdev2", 00:18:11.041 "uuid": "0fd2f3ed-3e66-4b51-aebd-160e318f384a", 00:18:11.041 "is_configured": true, 00:18:11.041 "data_offset": 0, 00:18:11.041 "data_size": 65536 00:18:11.041 }, 00:18:11.041 { 00:18:11.041 "name": "BaseBdev3", 00:18:11.041 "uuid": "31e4ee89-70ec-48f4-b62f-ceab737c0d94", 00:18:11.041 "is_configured": true, 00:18:11.041 "data_offset": 0, 00:18:11.041 "data_size": 65536 00:18:11.041 } 00:18:11.041 ] 00:18:11.041 }' 00:18:11.041 02:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:11.041 02:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.002 02:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.002 02:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:12.260 02:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:12.260 02:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.260 02:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:12.518 02:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 064af4de-9ee5-4114-a0d9-a4037c925467 00:18:12.777 [2024-07-11 02:24:02.968689] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:12.777 [2024-07-11 02:24:02.968726] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x10ddcf0 00:18:12.777 [2024-07-11 02:24:02.968734] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:12.777 [2024-07-11 02:24:02.968929] 
bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10e31e0 00:18:12.777 [2024-07-11 02:24:02.969040] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10ddcf0 00:18:12.777 [2024-07-11 02:24:02.969049] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x10ddcf0 00:18:12.777 [2024-07-11 02:24:02.969204] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.777 NewBaseBdev 00:18:12.777 02:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:12.777 02:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:18:12.777 02:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:12.777 02:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:12.777 02:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:12.777 02:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:12.777 02:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:13.344 02:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:13.603 [ 00:18:13.603 { 00:18:13.603 "name": "NewBaseBdev", 00:18:13.603 "aliases": [ 00:18:13.603 "064af4de-9ee5-4114-a0d9-a4037c925467" 00:18:13.603 ], 00:18:13.603 "product_name": "Malloc disk", 00:18:13.603 "block_size": 512, 00:18:13.603 "num_blocks": 65536, 00:18:13.603 "uuid": "064af4de-9ee5-4114-a0d9-a4037c925467", 00:18:13.603 "assigned_rate_limits": { 00:18:13.603 "rw_ios_per_sec": 0, 00:18:13.603 "rw_mbytes_per_sec": 0, 00:18:13.603 "r_mbytes_per_sec": 0, 00:18:13.603 "w_mbytes_per_sec": 0 00:18:13.603 }, 00:18:13.603 "claimed": true, 00:18:13.603 "claim_type": "exclusive_write", 00:18:13.603 "zoned": false, 00:18:13.603 "supported_io_types": { 00:18:13.603 "read": true, 00:18:13.603 "write": true, 00:18:13.603 "unmap": true, 00:18:13.603 "flush": true, 00:18:13.603 "reset": true, 00:18:13.603 "nvme_admin": false, 00:18:13.603 "nvme_io": false, 00:18:13.603 "nvme_io_md": false, 00:18:13.603 "write_zeroes": true, 00:18:13.603 "zcopy": true, 00:18:13.603 "get_zone_info": false, 00:18:13.603 "zone_management": false, 00:18:13.603 "zone_append": false, 00:18:13.603 "compare": false, 00:18:13.603 "compare_and_write": false, 00:18:13.603 "abort": true, 00:18:13.603 "seek_hole": false, 00:18:13.603 "seek_data": false, 00:18:13.603 "copy": true, 00:18:13.603 "nvme_iov_md": false 00:18:13.603 }, 00:18:13.603 "memory_domains": [ 00:18:13.603 { 00:18:13.603 "dma_device_id": "system", 00:18:13.603 "dma_device_type": 1 00:18:13.603 }, 00:18:13.603 { 00:18:13.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.603 "dma_device_type": 2 00:18:13.603 } 00:18:13.603 ], 00:18:13.603 "driver_specific": {} 00:18:13.603 } 00:18:13.603 ] 00:18:13.603 02:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:13.603 02:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:18:13.603 02:24:03 
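verify_raid_bdev_state, invoked just above now that the reassembled array is online, is the workhorse assertion of this test; its traced expansion follows. Boiled down, it fetches every raid bdev over RPC, isolates the one under test with the jq select seen in the trace, and compares the captured fields against the expected values. A condensed sketch using the same socket path and filter as this run (a simplification, not the bdev_raid.sh source):

    rpc() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    verify_raid_bdev_state() {
        local name=$1 expected_state=$2 raid_level=$3 strip_size=$4 num_operational=$5
        local info
        # fetch all raid bdevs, keep only the one under test
        info=$(rpc bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
        [[ $(jq -r .state <<< "$info") == "$expected_state" ]] &&
            [[ $(jq -r .raid_level <<< "$info") == "$raid_level" ]] &&
            [[ $(jq -r .strip_size_kb <<< "$info") == "$strip_size" ]] &&
            [[ $(jq -r .num_base_bdevs_operational <<< "$info") == "$num_operational" ]]
    }
    verify_raid_bdev_state Existed_Raid online concat 64 3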
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:13.603 02:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:13.603 02:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:13.603 02:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:13.603 02:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:13.603 02:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:13.603 02:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:13.603 02:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:13.603 02:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:13.603 02:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.603 02:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.861 02:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:13.861 "name": "Existed_Raid", 00:18:13.861 "uuid": "fd94ef7b-3849-45b3-a501-f9a3412da598", 00:18:13.861 "strip_size_kb": 64, 00:18:13.861 "state": "online", 00:18:13.861 "raid_level": "concat", 00:18:13.861 "superblock": false, 00:18:13.861 "num_base_bdevs": 3, 00:18:13.861 "num_base_bdevs_discovered": 3, 00:18:13.861 "num_base_bdevs_operational": 3, 00:18:13.861 "base_bdevs_list": [ 00:18:13.861 { 00:18:13.861 "name": "NewBaseBdev", 00:18:13.861 "uuid": "064af4de-9ee5-4114-a0d9-a4037c925467", 00:18:13.861 "is_configured": true, 00:18:13.861 "data_offset": 0, 00:18:13.861 "data_size": 65536 00:18:13.861 }, 00:18:13.861 { 00:18:13.861 "name": "BaseBdev2", 00:18:13.861 "uuid": "0fd2f3ed-3e66-4b51-aebd-160e318f384a", 00:18:13.861 "is_configured": true, 00:18:13.861 "data_offset": 0, 00:18:13.861 "data_size": 65536 00:18:13.861 }, 00:18:13.861 { 00:18:13.861 "name": "BaseBdev3", 00:18:13.861 "uuid": "31e4ee89-70ec-48f4-b62f-ceab737c0d94", 00:18:13.861 "is_configured": true, 00:18:13.861 "data_offset": 0, 00:18:13.861 "data_size": 65536 00:18:13.861 } 00:18:13.861 ] 00:18:13.861 }' 00:18:13.861 02:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:13.861 02:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.424 02:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:14.425 02:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:14.425 02:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:14.425 02:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:14.425 02:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:14.425 02:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:14.425 02:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:14.425 02:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:14.683 [2024-07-11 02:24:04.890139] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:14.683 02:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:14.683 "name": "Existed_Raid", 00:18:14.683 "aliases": [ 00:18:14.683 "fd94ef7b-3849-45b3-a501-f9a3412da598" 00:18:14.683 ], 00:18:14.683 "product_name": "Raid Volume", 00:18:14.683 "block_size": 512, 00:18:14.683 "num_blocks": 196608, 00:18:14.683 "uuid": "fd94ef7b-3849-45b3-a501-f9a3412da598", 00:18:14.683 "assigned_rate_limits": { 00:18:14.683 "rw_ios_per_sec": 0, 00:18:14.683 "rw_mbytes_per_sec": 0, 00:18:14.683 "r_mbytes_per_sec": 0, 00:18:14.683 "w_mbytes_per_sec": 0 00:18:14.683 }, 00:18:14.683 "claimed": false, 00:18:14.683 "zoned": false, 00:18:14.683 "supported_io_types": { 00:18:14.683 "read": true, 00:18:14.683 "write": true, 00:18:14.683 "unmap": true, 00:18:14.683 "flush": true, 00:18:14.683 "reset": true, 00:18:14.683 "nvme_admin": false, 00:18:14.683 "nvme_io": false, 00:18:14.683 "nvme_io_md": false, 00:18:14.683 "write_zeroes": true, 00:18:14.683 "zcopy": false, 00:18:14.683 "get_zone_info": false, 00:18:14.683 "zone_management": false, 00:18:14.683 "zone_append": false, 00:18:14.683 "compare": false, 00:18:14.683 "compare_and_write": false, 00:18:14.683 "abort": false, 00:18:14.683 "seek_hole": false, 00:18:14.683 "seek_data": false, 00:18:14.683 "copy": false, 00:18:14.683 "nvme_iov_md": false 00:18:14.683 }, 00:18:14.683 "memory_domains": [ 00:18:14.683 { 00:18:14.683 "dma_device_id": "system", 00:18:14.683 "dma_device_type": 1 00:18:14.683 }, 00:18:14.683 { 00:18:14.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.683 "dma_device_type": 2 00:18:14.683 }, 00:18:14.683 { 00:18:14.683 "dma_device_id": "system", 00:18:14.683 "dma_device_type": 1 00:18:14.683 }, 00:18:14.683 { 00:18:14.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.683 "dma_device_type": 2 00:18:14.683 }, 00:18:14.683 { 00:18:14.683 "dma_device_id": "system", 00:18:14.683 "dma_device_type": 1 00:18:14.683 }, 00:18:14.683 { 00:18:14.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.683 "dma_device_type": 2 00:18:14.683 } 00:18:14.683 ], 00:18:14.683 "driver_specific": { 00:18:14.683 "raid": { 00:18:14.683 "uuid": "fd94ef7b-3849-45b3-a501-f9a3412da598", 00:18:14.683 "strip_size_kb": 64, 00:18:14.683 "state": "online", 00:18:14.683 "raid_level": "concat", 00:18:14.683 "superblock": false, 00:18:14.683 "num_base_bdevs": 3, 00:18:14.683 "num_base_bdevs_discovered": 3, 00:18:14.683 "num_base_bdevs_operational": 3, 00:18:14.683 "base_bdevs_list": [ 00:18:14.683 { 00:18:14.683 "name": "NewBaseBdev", 00:18:14.683 "uuid": "064af4de-9ee5-4114-a0d9-a4037c925467", 00:18:14.683 "is_configured": true, 00:18:14.683 "data_offset": 0, 00:18:14.683 "data_size": 65536 00:18:14.683 }, 00:18:14.683 { 00:18:14.683 "name": "BaseBdev2", 00:18:14.684 "uuid": "0fd2f3ed-3e66-4b51-aebd-160e318f384a", 00:18:14.684 "is_configured": true, 00:18:14.684 "data_offset": 0, 00:18:14.684 "data_size": 65536 00:18:14.684 }, 00:18:14.684 { 00:18:14.684 "name": "BaseBdev3", 00:18:14.684 "uuid": "31e4ee89-70ec-48f4-b62f-ceab737c0d94", 00:18:14.684 "is_configured": true, 00:18:14.684 "data_offset": 0, 00:18:14.684 "data_size": 65536 00:18:14.684 } 00:18:14.684 ] 00:18:14.684 } 00:18:14.684 } 00:18:14.684 }' 00:18:14.684 02:24:04 
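The Raid Volume dump above carries one system/SPDK_ACCEL_DMA_DEVICE memory-domain pair per base bdev. verify_raid_bdev_properties next extracts the names of the configured base bdevs so each can be inspected in turn; the same filter can be replayed by hand (a sketch reusing the exact filter from the trace):

    rpc() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    rpc bdev_get_bdevs -b Existed_Raid |
        jq -r '.[] | .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
    # prints NewBaseBdev, BaseBdev2 and BaseBdev3, one per line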
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:14.684 02:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:14.684 BaseBdev2 00:18:14.684 BaseBdev3' 00:18:14.684 02:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:14.684 02:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:14.684 02:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:14.943 02:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:14.943 "name": "NewBaseBdev", 00:18:14.943 "aliases": [ 00:18:14.943 "064af4de-9ee5-4114-a0d9-a4037c925467" 00:18:14.943 ], 00:18:14.943 "product_name": "Malloc disk", 00:18:14.943 "block_size": 512, 00:18:14.943 "num_blocks": 65536, 00:18:14.943 "uuid": "064af4de-9ee5-4114-a0d9-a4037c925467", 00:18:14.943 "assigned_rate_limits": { 00:18:14.943 "rw_ios_per_sec": 0, 00:18:14.943 "rw_mbytes_per_sec": 0, 00:18:14.943 "r_mbytes_per_sec": 0, 00:18:14.943 "w_mbytes_per_sec": 0 00:18:14.943 }, 00:18:14.943 "claimed": true, 00:18:14.943 "claim_type": "exclusive_write", 00:18:14.943 "zoned": false, 00:18:14.943 "supported_io_types": { 00:18:14.943 "read": true, 00:18:14.943 "write": true, 00:18:14.943 "unmap": true, 00:18:14.943 "flush": true, 00:18:14.943 "reset": true, 00:18:14.943 "nvme_admin": false, 00:18:14.943 "nvme_io": false, 00:18:14.943 "nvme_io_md": false, 00:18:14.943 "write_zeroes": true, 00:18:14.943 "zcopy": true, 00:18:14.943 "get_zone_info": false, 00:18:14.943 "zone_management": false, 00:18:14.943 "zone_append": false, 00:18:14.943 "compare": false, 00:18:14.943 "compare_and_write": false, 00:18:14.943 "abort": true, 00:18:14.943 "seek_hole": false, 00:18:14.943 "seek_data": false, 00:18:14.943 "copy": true, 00:18:14.943 "nvme_iov_md": false 00:18:14.943 }, 00:18:14.943 "memory_domains": [ 00:18:14.943 { 00:18:14.943 "dma_device_id": "system", 00:18:14.943 "dma_device_type": 1 00:18:14.943 }, 00:18:14.943 { 00:18:14.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.943 "dma_device_type": 2 00:18:14.943 } 00:18:14.943 ], 00:18:14.943 "driver_specific": {} 00:18:14.943 }' 00:18:14.943 02:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:14.943 02:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:14.943 02:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:14.943 02:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:14.943 02:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:15.202 02:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:15.202 02:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:15.202 02:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:15.202 02:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:15.202 02:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:15.202 02:24:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:15.202 02:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:15.202 02:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:15.202 02:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:15.202 02:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:15.461 02:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:15.461 "name": "BaseBdev2", 00:18:15.461 "aliases": [ 00:18:15.461 "0fd2f3ed-3e66-4b51-aebd-160e318f384a" 00:18:15.461 ], 00:18:15.461 "product_name": "Malloc disk", 00:18:15.461 "block_size": 512, 00:18:15.461 "num_blocks": 65536, 00:18:15.461 "uuid": "0fd2f3ed-3e66-4b51-aebd-160e318f384a", 00:18:15.461 "assigned_rate_limits": { 00:18:15.461 "rw_ios_per_sec": 0, 00:18:15.461 "rw_mbytes_per_sec": 0, 00:18:15.461 "r_mbytes_per_sec": 0, 00:18:15.461 "w_mbytes_per_sec": 0 00:18:15.461 }, 00:18:15.461 "claimed": true, 00:18:15.461 "claim_type": "exclusive_write", 00:18:15.461 "zoned": false, 00:18:15.461 "supported_io_types": { 00:18:15.461 "read": true, 00:18:15.461 "write": true, 00:18:15.461 "unmap": true, 00:18:15.461 "flush": true, 00:18:15.461 "reset": true, 00:18:15.461 "nvme_admin": false, 00:18:15.461 "nvme_io": false, 00:18:15.461 "nvme_io_md": false, 00:18:15.461 "write_zeroes": true, 00:18:15.461 "zcopy": true, 00:18:15.461 "get_zone_info": false, 00:18:15.461 "zone_management": false, 00:18:15.461 "zone_append": false, 00:18:15.461 "compare": false, 00:18:15.461 "compare_and_write": false, 00:18:15.461 "abort": true, 00:18:15.461 "seek_hole": false, 00:18:15.461 "seek_data": false, 00:18:15.461 "copy": true, 00:18:15.461 "nvme_iov_md": false 00:18:15.461 }, 00:18:15.461 "memory_domains": [ 00:18:15.461 { 00:18:15.461 "dma_device_id": "system", 00:18:15.461 "dma_device_type": 1 00:18:15.461 }, 00:18:15.461 { 00:18:15.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.461 "dma_device_type": 2 00:18:15.461 } 00:18:15.461 ], 00:18:15.461 "driver_specific": {} 00:18:15.461 }' 00:18:15.461 02:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:15.461 02:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:15.720 02:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:15.720 02:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:15.720 02:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:15.720 02:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:15.720 02:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:15.720 02:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:15.720 02:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:15.720 02:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:15.720 02:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:15.979 02:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:15.979 02:24:06 
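BaseBdev2 passes the same four property probes as NewBaseBdev, and the loop now repeats once more for BaseBdev3. Each iteration condenses to the comparisons below (a sketch; the script actually diffs each base bdev's values against the raid volume's own rather than hard-coding 512):

    rpc() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    for name in NewBaseBdev BaseBdev2 BaseBdev3; do
        info=$(rpc bdev_get_bdevs -b "$name" | jq '.[]')
        [[ $(jq .block_size <<< "$info") == 512 ]]       # same block size as the raid volume
        [[ $(jq .md_size <<< "$info") == null ]]         # no separate metadata
        [[ $(jq .md_interleave <<< "$info") == null ]]   # no interleaved metadata
        [[ $(jq .dif_type <<< "$info") == null ]]        # no DIF protection
    done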
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:15.979 02:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:15.979 02:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:16.238 02:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:16.238 "name": "BaseBdev3", 00:18:16.238 "aliases": [ 00:18:16.238 "31e4ee89-70ec-48f4-b62f-ceab737c0d94" 00:18:16.238 ], 00:18:16.238 "product_name": "Malloc disk", 00:18:16.238 "block_size": 512, 00:18:16.238 "num_blocks": 65536, 00:18:16.238 "uuid": "31e4ee89-70ec-48f4-b62f-ceab737c0d94", 00:18:16.238 "assigned_rate_limits": { 00:18:16.238 "rw_ios_per_sec": 0, 00:18:16.238 "rw_mbytes_per_sec": 0, 00:18:16.238 "r_mbytes_per_sec": 0, 00:18:16.238 "w_mbytes_per_sec": 0 00:18:16.238 }, 00:18:16.238 "claimed": true, 00:18:16.238 "claim_type": "exclusive_write", 00:18:16.238 "zoned": false, 00:18:16.238 "supported_io_types": { 00:18:16.238 "read": true, 00:18:16.238 "write": true, 00:18:16.238 "unmap": true, 00:18:16.238 "flush": true, 00:18:16.238 "reset": true, 00:18:16.238 "nvme_admin": false, 00:18:16.238 "nvme_io": false, 00:18:16.238 "nvme_io_md": false, 00:18:16.238 "write_zeroes": true, 00:18:16.238 "zcopy": true, 00:18:16.238 "get_zone_info": false, 00:18:16.238 "zone_management": false, 00:18:16.238 "zone_append": false, 00:18:16.238 "compare": false, 00:18:16.238 "compare_and_write": false, 00:18:16.238 "abort": true, 00:18:16.238 "seek_hole": false, 00:18:16.238 "seek_data": false, 00:18:16.238 "copy": true, 00:18:16.238 "nvme_iov_md": false 00:18:16.238 }, 00:18:16.238 "memory_domains": [ 00:18:16.238 { 00:18:16.238 "dma_device_id": "system", 00:18:16.238 "dma_device_type": 1 00:18:16.238 }, 00:18:16.238 { 00:18:16.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.238 "dma_device_type": 2 00:18:16.238 } 00:18:16.238 ], 00:18:16.238 "driver_specific": {} 00:18:16.238 }' 00:18:16.238 02:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:16.238 02:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:16.238 02:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:16.238 02:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:16.238 02:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:16.238 02:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:16.238 02:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:16.238 02:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:16.497 02:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:16.497 02:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:16.497 02:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:16.497 02:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:16.497 02:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete 
Existed_Raid 00:18:16.757 [2024-07-11 02:24:06.987409] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:16.757 [2024-07-11 02:24:06.987435] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:16.757 [2024-07-11 02:24:06.987495] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:16.757 [2024-07-11 02:24:06.987544] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:16.757 [2024-07-11 02:24:06.987557] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10ddcf0 name Existed_Raid, state offline 00:18:16.757 02:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1926542 00:18:16.757 02:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 1926542 ']' 00:18:16.757 02:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 1926542 00:18:16.757 02:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:18:16.757 02:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:16.757 02:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1926542 00:18:16.757 02:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:16.757 02:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:16.757 02:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1926542' 00:18:16.757 killing process with pid 1926542 00:18:16.757 02:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 1926542 00:18:16.757 [2024-07-11 02:24:07.055277] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:16.757 02:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 1926542 00:18:16.757 [2024-07-11 02:24:07.085471] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:17.023 02:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:18:17.023 00:18:17.023 real 0m30.683s 00:18:17.023 user 0m56.307s 00:18:17.023 sys 0m5.508s 00:18:17.023 02:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:17.023 02:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.024 ************************************ 00:18:17.024 END TEST raid_state_function_test 00:18:17.024 ************************************ 00:18:17.024 02:24:07 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:17.024 02:24:07 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:18:17.024 02:24:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:17.024 02:24:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:17.024 02:24:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:17.024 ************************************ 00:18:17.024 START TEST raid_state_function_test_sb 00:18:17.024 ************************************ 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 true 00:18:17.024 02:24:07 
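raid_state_function_test_sb is the same state-machine test rerun with superblock metadata: run_test hands it concat 3 true, the trailing true lands in local superblock=true, and that flips superblock_create_arg to -s on every bdev_raid_create that follows (visible further down). The argument plumbing, condensed (a sketch, not the bdev_raid.sh source):

    rpc() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    # concat 3 true  ->  raid_level=concat  num_base_bdevs=3  superblock=true
    superblock=true
    superblock_create_arg=''
    [ "$superblock" = true ] && superblock_create_arg=-s
    rpc bdev_raid_create -z 64 $superblock_create_arg -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid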
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1931529 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1931529' 00:18:17.024 Process raid pid: 1931529 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1931529 /var/tmp/spdk-raid.sock 00:18:17.024 02:24:07 
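waitforlisten, whose traced expansion follows, gates the test on the freshly forked bdev_svc app (pid 1931529) actually serving RPCs on /var/tmp/spdk-raid.sock. A minimal equivalent poll, assuming rpc_get_methods as the liveness probe (the real autotest_common.sh helper retries more defensively):

    waitforlisten() {
        local pid=$1 sock=$2
        while kill -0 "$pid" 2> /dev/null; do
            # any cheap RPC succeeds once the socket is accepting connections
            /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py \
                -s "$sock" rpc_get_methods > /dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1   # the app died before it ever listened
    }
    waitforlisten 1931529 /var/tmp/spdk-raid.sock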
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1931529 ']' 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:17.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:17.024 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.288 [2024-07-11 02:24:07.444551] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:18:17.288 [2024-07-11 02:24:07.444616] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.288 [2024-07-11 02:24:07.576350] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.288 [2024-07-11 02:24:07.628216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.288 [2024-07-11 02:24:07.702492] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:17.288 [2024-07-11 02:24:07.702524] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:17.547 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:17.547 02:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:18:17.547 02:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:17.806 [2024-07-11 02:24:08.025485] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:17.806 [2024-07-11 02:24:08.025525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:17.806 [2024-07-11 02:24:08.025536] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:17.806 [2024-07-11 02:24:08.025548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:17.806 [2024-07-11 02:24:08.025557] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:17.806 [2024-07-11 02:24:08.025568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:17.806 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:17.806 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:17.806 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:17.806 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:17.806 02:24:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:17.806 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:17.806 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:17.806 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:17.806 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:17.806 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:17.806 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.806 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.065 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:18.065 "name": "Existed_Raid", 00:18:18.065 "uuid": "b4838c8c-2916-4beb-aa1e-21c5066dd4b3", 00:18:18.065 "strip_size_kb": 64, 00:18:18.065 "state": "configuring", 00:18:18.065 "raid_level": "concat", 00:18:18.065 "superblock": true, 00:18:18.065 "num_base_bdevs": 3, 00:18:18.065 "num_base_bdevs_discovered": 0, 00:18:18.065 "num_base_bdevs_operational": 3, 00:18:18.065 "base_bdevs_list": [ 00:18:18.066 { 00:18:18.066 "name": "BaseBdev1", 00:18:18.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.066 "is_configured": false, 00:18:18.066 "data_offset": 0, 00:18:18.066 "data_size": 0 00:18:18.066 }, 00:18:18.066 { 00:18:18.066 "name": "BaseBdev2", 00:18:18.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.066 "is_configured": false, 00:18:18.066 "data_offset": 0, 00:18:18.066 "data_size": 0 00:18:18.066 }, 00:18:18.066 { 00:18:18.066 "name": "BaseBdev3", 00:18:18.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.066 "is_configured": false, 00:18:18.066 "data_offset": 0, 00:18:18.066 "data_size": 0 00:18:18.066 } 00:18:18.066 ] 00:18:18.066 }' 00:18:18.066 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:18.066 02:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.633 02:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:18.892 [2024-07-11 02:24:09.120226] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:18.892 [2024-07-11 02:24:09.120255] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f885a0 name Existed_Raid, state configuring 00:18:18.892 02:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:19.151 [2024-07-11 02:24:09.369052] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:19.151 [2024-07-11 02:24:09.369077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:19.151 [2024-07-11 02:24:09.369087] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:19.151 [2024-07-11 02:24:09.369099] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:19.151 [2024-07-11 02:24:09.369108] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:19.151 [2024-07-11 02:24:09.369119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:19.151 02:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:19.410 [2024-07-11 02:24:09.627492] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:19.410 BaseBdev1 00:18:19.410 02:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:19.410 02:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:19.410 02:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:19.410 02:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:19.410 02:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:19.410 02:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:19.410 02:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:19.709 02:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:20.029 [ 00:18:20.029 { 00:18:20.029 "name": "BaseBdev1", 00:18:20.029 "aliases": [ 00:18:20.029 "5b08ba2c-b6ef-4bca-bcf9-058baf013439" 00:18:20.029 ], 00:18:20.029 "product_name": "Malloc disk", 00:18:20.029 "block_size": 512, 00:18:20.029 "num_blocks": 65536, 00:18:20.029 "uuid": "5b08ba2c-b6ef-4bca-bcf9-058baf013439", 00:18:20.029 "assigned_rate_limits": { 00:18:20.029 "rw_ios_per_sec": 0, 00:18:20.029 "rw_mbytes_per_sec": 0, 00:18:20.029 "r_mbytes_per_sec": 0, 00:18:20.029 "w_mbytes_per_sec": 0 00:18:20.029 }, 00:18:20.029 "claimed": true, 00:18:20.029 "claim_type": "exclusive_write", 00:18:20.029 "zoned": false, 00:18:20.029 "supported_io_types": { 00:18:20.029 "read": true, 00:18:20.029 "write": true, 00:18:20.029 "unmap": true, 00:18:20.029 "flush": true, 00:18:20.029 "reset": true, 00:18:20.029 "nvme_admin": false, 00:18:20.029 "nvme_io": false, 00:18:20.029 "nvme_io_md": false, 00:18:20.029 "write_zeroes": true, 00:18:20.029 "zcopy": true, 00:18:20.029 "get_zone_info": false, 00:18:20.029 "zone_management": false, 00:18:20.029 "zone_append": false, 00:18:20.029 "compare": false, 00:18:20.029 "compare_and_write": false, 00:18:20.029 "abort": true, 00:18:20.029 "seek_hole": false, 00:18:20.029 "seek_data": false, 00:18:20.029 "copy": true, 00:18:20.029 "nvme_iov_md": false 00:18:20.029 }, 00:18:20.029 "memory_domains": [ 00:18:20.029 { 00:18:20.029 "dma_device_id": "system", 00:18:20.029 "dma_device_type": 1 00:18:20.029 }, 00:18:20.029 { 00:18:20.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.029 "dma_device_type": 2 00:18:20.029 } 00:18:20.029 ], 00:18:20.029 "driver_specific": {} 00:18:20.029 } 00:18:20.029 ] 00:18:20.029 02:24:10 
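BaseBdev1's Malloc dump above still reports the full 65536 blocks with no offset; the superblock accounting shows up only in the raid-level view just below, where this bdev is listed with data_offset 2048 and data_size 63488. The arithmetic behind those numbers, at the 512-byte block size shown above:

    # the superblock occupies the first 2048 blocks (1 MiB) of each base bdev:
    #   data_size       = 65536 - 2048  = 63488 blocks per base bdev
    #   concat capacity = 3 * 63488     = 190464 blocks
    # which matches the "blockcnt 190464, blocklen 512" logged when the
    # array finally goes online further down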
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:20.029 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:20.029 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:20.029 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:20.029 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:20.029 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:20.029 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:20.029 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:20.029 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:20.029 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:20.029 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:20.029 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.029 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.029 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:20.029 "name": "Existed_Raid", 00:18:20.029 "uuid": "9ad62e5f-b16f-40df-91d3-46b3cb5887b6", 00:18:20.029 "strip_size_kb": 64, 00:18:20.029 "state": "configuring", 00:18:20.029 "raid_level": "concat", 00:18:20.029 "superblock": true, 00:18:20.029 "num_base_bdevs": 3, 00:18:20.029 "num_base_bdevs_discovered": 1, 00:18:20.029 "num_base_bdevs_operational": 3, 00:18:20.029 "base_bdevs_list": [ 00:18:20.029 { 00:18:20.029 "name": "BaseBdev1", 00:18:20.029 "uuid": "5b08ba2c-b6ef-4bca-bcf9-058baf013439", 00:18:20.029 "is_configured": true, 00:18:20.029 "data_offset": 2048, 00:18:20.029 "data_size": 63488 00:18:20.029 }, 00:18:20.029 { 00:18:20.029 "name": "BaseBdev2", 00:18:20.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.029 "is_configured": false, 00:18:20.029 "data_offset": 0, 00:18:20.029 "data_size": 0 00:18:20.029 }, 00:18:20.029 { 00:18:20.029 "name": "BaseBdev3", 00:18:20.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.029 "is_configured": false, 00:18:20.029 "data_offset": 0, 00:18:20.029 "data_size": 0 00:18:20.029 } 00:18:20.029 ] 00:18:20.029 }' 00:18:20.029 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:20.029 02:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.617 02:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:20.876 [2024-07-11 02:24:11.051405] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:20.876 [2024-07-11 02:24:11.051443] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f87ed0 name Existed_Raid, state configuring 00:18:20.876 02:24:11 
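Deleting the still-configuring array releases its claim on BaseBdev1 without destroying the Malloc bdev itself, so the re-create below picks BaseBdev1 back up immediately while BaseBdev2 and BaseBdev3 remain pending. Replayed by hand (same RPCs as the trace):

    rpc() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    rpc bdev_raid_delete Existed_Raid
    rpc bdev_raid_create -z 64 -s -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid    # re-claims BaseBdev1 on the spot
    rpc bdev_raid_get_bdevs all | jq '.[0].num_base_bdevs_discovered'    # -> 1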
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:21.135 [2024-07-11 02:24:11.304109] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:21.135 [2024-07-11 02:24:11.305505] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:21.135 [2024-07-11 02:24:11.305537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:21.135 [2024-07-11 02:24:11.305548] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:21.135 [2024-07-11 02:24:11.305560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:21.135 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:21.135 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:21.135 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:21.135 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:21.135 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:21.135 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:21.135 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:21.135 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:21.135 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:21.135 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:21.135 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:21.135 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:21.135 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.135 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.395 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:21.395 "name": "Existed_Raid", 00:18:21.395 "uuid": "1c227e85-fdf8-409a-8535-1c8101434a1b", 00:18:21.395 "strip_size_kb": 64, 00:18:21.395 "state": "configuring", 00:18:21.395 "raid_level": "concat", 00:18:21.395 "superblock": true, 00:18:21.395 "num_base_bdevs": 3, 00:18:21.395 "num_base_bdevs_discovered": 1, 00:18:21.395 "num_base_bdevs_operational": 3, 00:18:21.395 "base_bdevs_list": [ 00:18:21.395 { 00:18:21.395 "name": "BaseBdev1", 00:18:21.395 "uuid": "5b08ba2c-b6ef-4bca-bcf9-058baf013439", 00:18:21.395 "is_configured": true, 00:18:21.395 "data_offset": 2048, 00:18:21.395 "data_size": 63488 00:18:21.395 }, 00:18:21.395 { 00:18:21.395 "name": "BaseBdev2", 00:18:21.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.395 "is_configured": false, 00:18:21.395 "data_offset": 0, 
00:18:21.395 "data_size": 0 00:18:21.395 }, 00:18:21.395 { 00:18:21.395 "name": "BaseBdev3", 00:18:21.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.395 "is_configured": false, 00:18:21.395 "data_offset": 0, 00:18:21.395 "data_size": 0 00:18:21.395 } 00:18:21.395 ] 00:18:21.395 }' 00:18:21.395 02:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:21.395 02:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.964 02:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:22.244 [2024-07-11 02:24:12.407493] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:22.244 BaseBdev2 00:18:22.244 02:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:22.244 02:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:22.244 02:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:22.244 02:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:22.244 02:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:22.244 02:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:22.244 02:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:22.503 02:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:22.763 [ 00:18:22.763 { 00:18:22.763 "name": "BaseBdev2", 00:18:22.763 "aliases": [ 00:18:22.763 "488af4af-f98b-40fc-b624-b9b11985bd96" 00:18:22.763 ], 00:18:22.763 "product_name": "Malloc disk", 00:18:22.763 "block_size": 512, 00:18:22.763 "num_blocks": 65536, 00:18:22.763 "uuid": "488af4af-f98b-40fc-b624-b9b11985bd96", 00:18:22.763 "assigned_rate_limits": { 00:18:22.763 "rw_ios_per_sec": 0, 00:18:22.763 "rw_mbytes_per_sec": 0, 00:18:22.763 "r_mbytes_per_sec": 0, 00:18:22.763 "w_mbytes_per_sec": 0 00:18:22.763 }, 00:18:22.763 "claimed": true, 00:18:22.763 "claim_type": "exclusive_write", 00:18:22.763 "zoned": false, 00:18:22.763 "supported_io_types": { 00:18:22.763 "read": true, 00:18:22.763 "write": true, 00:18:22.763 "unmap": true, 00:18:22.763 "flush": true, 00:18:22.763 "reset": true, 00:18:22.763 "nvme_admin": false, 00:18:22.763 "nvme_io": false, 00:18:22.763 "nvme_io_md": false, 00:18:22.763 "write_zeroes": true, 00:18:22.763 "zcopy": true, 00:18:22.763 "get_zone_info": false, 00:18:22.763 "zone_management": false, 00:18:22.763 "zone_append": false, 00:18:22.763 "compare": false, 00:18:22.763 "compare_and_write": false, 00:18:22.763 "abort": true, 00:18:22.763 "seek_hole": false, 00:18:22.763 "seek_data": false, 00:18:22.763 "copy": true, 00:18:22.763 "nvme_iov_md": false 00:18:22.763 }, 00:18:22.763 "memory_domains": [ 00:18:22.763 { 00:18:22.763 "dma_device_id": "system", 00:18:22.763 "dma_device_type": 1 00:18:22.763 }, 00:18:22.763 { 00:18:22.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.763 "dma_device_type": 
2 00:18:22.763 } 00:18:22.763 ], 00:18:22.763 "driver_specific": {} 00:18:22.763 } 00:18:22.763 ] 00:18:23.022 02:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:23.022 02:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:23.022 02:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:23.022 02:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:23.023 02:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:23.023 02:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:23.023 02:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:23.023 02:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:23.023 02:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:23.023 02:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:23.023 02:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:23.023 02:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:23.023 02:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:23.023 02:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.023 02:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.282 02:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:23.282 "name": "Existed_Raid", 00:18:23.282 "uuid": "1c227e85-fdf8-409a-8535-1c8101434a1b", 00:18:23.282 "strip_size_kb": 64, 00:18:23.282 "state": "configuring", 00:18:23.282 "raid_level": "concat", 00:18:23.282 "superblock": true, 00:18:23.282 "num_base_bdevs": 3, 00:18:23.282 "num_base_bdevs_discovered": 2, 00:18:23.282 "num_base_bdevs_operational": 3, 00:18:23.282 "base_bdevs_list": [ 00:18:23.282 { 00:18:23.282 "name": "BaseBdev1", 00:18:23.282 "uuid": "5b08ba2c-b6ef-4bca-bcf9-058baf013439", 00:18:23.282 "is_configured": true, 00:18:23.282 "data_offset": 2048, 00:18:23.282 "data_size": 63488 00:18:23.282 }, 00:18:23.282 { 00:18:23.282 "name": "BaseBdev2", 00:18:23.282 "uuid": "488af4af-f98b-40fc-b624-b9b11985bd96", 00:18:23.282 "is_configured": true, 00:18:23.282 "data_offset": 2048, 00:18:23.282 "data_size": 63488 00:18:23.282 }, 00:18:23.282 { 00:18:23.282 "name": "BaseBdev3", 00:18:23.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.282 "is_configured": false, 00:18:23.282 "data_offset": 0, 00:18:23.282 "data_size": 0 00:18:23.282 } 00:18:23.282 ] 00:18:23.282 }' 00:18:23.282 02:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:23.282 02:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.849 02:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev3 00:18:24.417 [2024-07-11 02:24:14.545400] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:24.417 [2024-07-11 02:24:14.545559] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x213acf0 00:18:24.417 [2024-07-11 02:24:14.545573] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:24.417 [2024-07-11 02:24:14.545744] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f8b640 00:18:24.417 [2024-07-11 02:24:14.545874] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x213acf0 00:18:24.417 [2024-07-11 02:24:14.545884] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x213acf0 00:18:24.417 [2024-07-11 02:24:14.545975] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:24.417 BaseBdev3 00:18:24.417 02:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:18:24.417 02:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:24.417 02:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:24.417 02:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:24.417 02:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:24.417 02:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:24.417 02:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:24.676 02:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:25.243 [ 00:18:25.243 { 00:18:25.243 "name": "BaseBdev3", 00:18:25.243 "aliases": [ 00:18:25.243 "93a22744-48f1-4ffc-a459-34987161d145" 00:18:25.243 ], 00:18:25.243 "product_name": "Malloc disk", 00:18:25.243 "block_size": 512, 00:18:25.243 "num_blocks": 65536, 00:18:25.243 "uuid": "93a22744-48f1-4ffc-a459-34987161d145", 00:18:25.243 "assigned_rate_limits": { 00:18:25.243 "rw_ios_per_sec": 0, 00:18:25.243 "rw_mbytes_per_sec": 0, 00:18:25.243 "r_mbytes_per_sec": 0, 00:18:25.243 "w_mbytes_per_sec": 0 00:18:25.243 }, 00:18:25.243 "claimed": true, 00:18:25.243 "claim_type": "exclusive_write", 00:18:25.243 "zoned": false, 00:18:25.243 "supported_io_types": { 00:18:25.243 "read": true, 00:18:25.243 "write": true, 00:18:25.243 "unmap": true, 00:18:25.243 "flush": true, 00:18:25.243 "reset": true, 00:18:25.243 "nvme_admin": false, 00:18:25.243 "nvme_io": false, 00:18:25.243 "nvme_io_md": false, 00:18:25.243 "write_zeroes": true, 00:18:25.243 "zcopy": true, 00:18:25.243 "get_zone_info": false, 00:18:25.243 "zone_management": false, 00:18:25.243 "zone_append": false, 00:18:25.243 "compare": false, 00:18:25.243 "compare_and_write": false, 00:18:25.243 "abort": true, 00:18:25.243 "seek_hole": false, 00:18:25.243 "seek_data": false, 00:18:25.243 "copy": true, 00:18:25.243 "nvme_iov_md": false 00:18:25.243 }, 00:18:25.243 "memory_domains": [ 00:18:25.243 { 00:18:25.243 "dma_device_id": "system", 00:18:25.243 "dma_device_type": 1 00:18:25.243 }, 00:18:25.243 { 00:18:25.243 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.243 "dma_device_type": 2 00:18:25.243 } 00:18:25.243 ], 00:18:25.243 "driver_specific": {} 00:18:25.243 } 00:18:25.243 ] 00:18:25.243 02:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:25.243 02:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:25.243 02:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:25.243 02:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:18:25.243 02:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:25.243 02:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:25.243 02:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:25.243 02:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:25.243 02:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:25.243 02:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:25.243 02:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:25.243 02:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:25.243 02:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:25.243 02:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.243 02:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.502 02:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:25.502 "name": "Existed_Raid", 00:18:25.502 "uuid": "1c227e85-fdf8-409a-8535-1c8101434a1b", 00:18:25.502 "strip_size_kb": 64, 00:18:25.502 "state": "online", 00:18:25.502 "raid_level": "concat", 00:18:25.502 "superblock": true, 00:18:25.502 "num_base_bdevs": 3, 00:18:25.502 "num_base_bdevs_discovered": 3, 00:18:25.502 "num_base_bdevs_operational": 3, 00:18:25.502 "base_bdevs_list": [ 00:18:25.502 { 00:18:25.502 "name": "BaseBdev1", 00:18:25.502 "uuid": "5b08ba2c-b6ef-4bca-bcf9-058baf013439", 00:18:25.502 "is_configured": true, 00:18:25.502 "data_offset": 2048, 00:18:25.502 "data_size": 63488 00:18:25.502 }, 00:18:25.502 { 00:18:25.502 "name": "BaseBdev2", 00:18:25.502 "uuid": "488af4af-f98b-40fc-b624-b9b11985bd96", 00:18:25.502 "is_configured": true, 00:18:25.502 "data_offset": 2048, 00:18:25.502 "data_size": 63488 00:18:25.502 }, 00:18:25.502 { 00:18:25.502 "name": "BaseBdev3", 00:18:25.502 "uuid": "93a22744-48f1-4ffc-a459-34987161d145", 00:18:25.502 "is_configured": true, 00:18:25.502 "data_offset": 2048, 00:18:25.502 "data_size": 63488 00:18:25.502 } 00:18:25.502 ] 00:18:25.502 }' 00:18:25.502 02:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:25.502 02:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.438 02:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
verify_raid_bdev_properties Existed_Raid 00:18:26.438 02:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:26.438 02:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:26.438 02:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:26.438 02:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:26.438 02:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:26.438 02:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:26.438 02:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:26.438 [2024-07-11 02:24:16.860022] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:26.698 02:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:26.698 "name": "Existed_Raid", 00:18:26.698 "aliases": [ 00:18:26.698 "1c227e85-fdf8-409a-8535-1c8101434a1b" 00:18:26.698 ], 00:18:26.698 "product_name": "Raid Volume", 00:18:26.698 "block_size": 512, 00:18:26.698 "num_blocks": 190464, 00:18:26.698 "uuid": "1c227e85-fdf8-409a-8535-1c8101434a1b", 00:18:26.698 "assigned_rate_limits": { 00:18:26.698 "rw_ios_per_sec": 0, 00:18:26.698 "rw_mbytes_per_sec": 0, 00:18:26.698 "r_mbytes_per_sec": 0, 00:18:26.698 "w_mbytes_per_sec": 0 00:18:26.698 }, 00:18:26.698 "claimed": false, 00:18:26.698 "zoned": false, 00:18:26.698 "supported_io_types": { 00:18:26.698 "read": true, 00:18:26.698 "write": true, 00:18:26.698 "unmap": true, 00:18:26.698 "flush": true, 00:18:26.698 "reset": true, 00:18:26.698 "nvme_admin": false, 00:18:26.698 "nvme_io": false, 00:18:26.698 "nvme_io_md": false, 00:18:26.698 "write_zeroes": true, 00:18:26.698 "zcopy": false, 00:18:26.698 "get_zone_info": false, 00:18:26.698 "zone_management": false, 00:18:26.698 "zone_append": false, 00:18:26.698 "compare": false, 00:18:26.698 "compare_and_write": false, 00:18:26.698 "abort": false, 00:18:26.698 "seek_hole": false, 00:18:26.698 "seek_data": false, 00:18:26.698 "copy": false, 00:18:26.698 "nvme_iov_md": false 00:18:26.698 }, 00:18:26.698 "memory_domains": [ 00:18:26.698 { 00:18:26.698 "dma_device_id": "system", 00:18:26.698 "dma_device_type": 1 00:18:26.698 }, 00:18:26.698 { 00:18:26.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.698 "dma_device_type": 2 00:18:26.698 }, 00:18:26.698 { 00:18:26.698 "dma_device_id": "system", 00:18:26.698 "dma_device_type": 1 00:18:26.698 }, 00:18:26.698 { 00:18:26.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.698 "dma_device_type": 2 00:18:26.698 }, 00:18:26.698 { 00:18:26.698 "dma_device_id": "system", 00:18:26.698 "dma_device_type": 1 00:18:26.698 }, 00:18:26.698 { 00:18:26.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.698 "dma_device_type": 2 00:18:26.698 } 00:18:26.698 ], 00:18:26.698 "driver_specific": { 00:18:26.698 "raid": { 00:18:26.698 "uuid": "1c227e85-fdf8-409a-8535-1c8101434a1b", 00:18:26.698 "strip_size_kb": 64, 00:18:26.698 "state": "online", 00:18:26.698 "raid_level": "concat", 00:18:26.698 "superblock": true, 00:18:26.698 "num_base_bdevs": 3, 00:18:26.698 "num_base_bdevs_discovered": 3, 00:18:26.698 "num_base_bdevs_operational": 3, 00:18:26.698 "base_bdevs_list": [ 00:18:26.698 { 
00:18:26.698 "name": "BaseBdev1", 00:18:26.698 "uuid": "5b08ba2c-b6ef-4bca-bcf9-058baf013439", 00:18:26.698 "is_configured": true, 00:18:26.698 "data_offset": 2048, 00:18:26.698 "data_size": 63488 00:18:26.698 }, 00:18:26.698 { 00:18:26.698 "name": "BaseBdev2", 00:18:26.698 "uuid": "488af4af-f98b-40fc-b624-b9b11985bd96", 00:18:26.698 "is_configured": true, 00:18:26.698 "data_offset": 2048, 00:18:26.698 "data_size": 63488 00:18:26.698 }, 00:18:26.698 { 00:18:26.698 "name": "BaseBdev3", 00:18:26.698 "uuid": "93a22744-48f1-4ffc-a459-34987161d145", 00:18:26.698 "is_configured": true, 00:18:26.698 "data_offset": 2048, 00:18:26.698 "data_size": 63488 00:18:26.698 } 00:18:26.698 ] 00:18:26.698 } 00:18:26.699 } 00:18:26.699 }' 00:18:26.699 02:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:26.699 02:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:26.699 BaseBdev2 00:18:26.699 BaseBdev3' 00:18:26.699 02:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:26.699 02:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:26.699 02:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:26.958 02:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:26.958 "name": "BaseBdev1", 00:18:26.958 "aliases": [ 00:18:26.958 "5b08ba2c-b6ef-4bca-bcf9-058baf013439" 00:18:26.958 ], 00:18:26.958 "product_name": "Malloc disk", 00:18:26.958 "block_size": 512, 00:18:26.958 "num_blocks": 65536, 00:18:26.958 "uuid": "5b08ba2c-b6ef-4bca-bcf9-058baf013439", 00:18:26.958 "assigned_rate_limits": { 00:18:26.958 "rw_ios_per_sec": 0, 00:18:26.958 "rw_mbytes_per_sec": 0, 00:18:26.958 "r_mbytes_per_sec": 0, 00:18:26.958 "w_mbytes_per_sec": 0 00:18:26.958 }, 00:18:26.958 "claimed": true, 00:18:26.958 "claim_type": "exclusive_write", 00:18:26.958 "zoned": false, 00:18:26.958 "supported_io_types": { 00:18:26.958 "read": true, 00:18:26.958 "write": true, 00:18:26.958 "unmap": true, 00:18:26.958 "flush": true, 00:18:26.958 "reset": true, 00:18:26.958 "nvme_admin": false, 00:18:26.958 "nvme_io": false, 00:18:26.958 "nvme_io_md": false, 00:18:26.958 "write_zeroes": true, 00:18:26.958 "zcopy": true, 00:18:26.958 "get_zone_info": false, 00:18:26.958 "zone_management": false, 00:18:26.958 "zone_append": false, 00:18:26.958 "compare": false, 00:18:26.958 "compare_and_write": false, 00:18:26.958 "abort": true, 00:18:26.958 "seek_hole": false, 00:18:26.958 "seek_data": false, 00:18:26.958 "copy": true, 00:18:26.958 "nvme_iov_md": false 00:18:26.958 }, 00:18:26.958 "memory_domains": [ 00:18:26.958 { 00:18:26.958 "dma_device_id": "system", 00:18:26.958 "dma_device_type": 1 00:18:26.958 }, 00:18:26.958 { 00:18:26.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.958 "dma_device_type": 2 00:18:26.958 } 00:18:26.958 ], 00:18:26.958 "driver_specific": {} 00:18:26.958 }' 00:18:26.958 02:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:26.958 02:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:26.958 02:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:26.958 
02:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:26.958 02:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:26.958 02:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:26.958 02:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:27.217 02:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:27.217 02:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:27.217 02:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:27.217 02:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:27.217 02:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:27.217 02:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:27.217 02:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:27.217 02:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:27.475 02:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:27.475 "name": "BaseBdev2", 00:18:27.475 "aliases": [ 00:18:27.475 "488af4af-f98b-40fc-b624-b9b11985bd96" 00:18:27.475 ], 00:18:27.475 "product_name": "Malloc disk", 00:18:27.475 "block_size": 512, 00:18:27.475 "num_blocks": 65536, 00:18:27.475 "uuid": "488af4af-f98b-40fc-b624-b9b11985bd96", 00:18:27.475 "assigned_rate_limits": { 00:18:27.475 "rw_ios_per_sec": 0, 00:18:27.475 "rw_mbytes_per_sec": 0, 00:18:27.475 "r_mbytes_per_sec": 0, 00:18:27.475 "w_mbytes_per_sec": 0 00:18:27.475 }, 00:18:27.475 "claimed": true, 00:18:27.475 "claim_type": "exclusive_write", 00:18:27.475 "zoned": false, 00:18:27.475 "supported_io_types": { 00:18:27.475 "read": true, 00:18:27.475 "write": true, 00:18:27.475 "unmap": true, 00:18:27.475 "flush": true, 00:18:27.475 "reset": true, 00:18:27.475 "nvme_admin": false, 00:18:27.475 "nvme_io": false, 00:18:27.475 "nvme_io_md": false, 00:18:27.475 "write_zeroes": true, 00:18:27.475 "zcopy": true, 00:18:27.475 "get_zone_info": false, 00:18:27.475 "zone_management": false, 00:18:27.475 "zone_append": false, 00:18:27.475 "compare": false, 00:18:27.475 "compare_and_write": false, 00:18:27.475 "abort": true, 00:18:27.475 "seek_hole": false, 00:18:27.475 "seek_data": false, 00:18:27.475 "copy": true, 00:18:27.475 "nvme_iov_md": false 00:18:27.475 }, 00:18:27.475 "memory_domains": [ 00:18:27.475 { 00:18:27.475 "dma_device_id": "system", 00:18:27.475 "dma_device_type": 1 00:18:27.475 }, 00:18:27.475 { 00:18:27.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.475 "dma_device_type": 2 00:18:27.475 } 00:18:27.475 ], 00:18:27.475 "driver_specific": {} 00:18:27.475 }' 00:18:27.475 02:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:27.475 02:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:27.475 02:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:27.734 02:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:27.734 02:24:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:27.734 02:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:27.734 02:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:27.734 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:27.734 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:27.734 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:27.734 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:27.993 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:27.993 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:27.993 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:27.993 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:27.993 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:27.993 "name": "BaseBdev3", 00:18:27.993 "aliases": [ 00:18:27.993 "93a22744-48f1-4ffc-a459-34987161d145" 00:18:27.993 ], 00:18:27.993 "product_name": "Malloc disk", 00:18:27.993 "block_size": 512, 00:18:27.993 "num_blocks": 65536, 00:18:27.993 "uuid": "93a22744-48f1-4ffc-a459-34987161d145", 00:18:27.993 "assigned_rate_limits": { 00:18:27.993 "rw_ios_per_sec": 0, 00:18:27.993 "rw_mbytes_per_sec": 0, 00:18:27.993 "r_mbytes_per_sec": 0, 00:18:27.993 "w_mbytes_per_sec": 0 00:18:27.993 }, 00:18:27.993 "claimed": true, 00:18:27.993 "claim_type": "exclusive_write", 00:18:27.993 "zoned": false, 00:18:27.993 "supported_io_types": { 00:18:27.993 "read": true, 00:18:27.993 "write": true, 00:18:27.993 "unmap": true, 00:18:27.993 "flush": true, 00:18:27.993 "reset": true, 00:18:27.993 "nvme_admin": false, 00:18:27.993 "nvme_io": false, 00:18:27.993 "nvme_io_md": false, 00:18:27.993 "write_zeroes": true, 00:18:27.993 "zcopy": true, 00:18:27.993 "get_zone_info": false, 00:18:27.993 "zone_management": false, 00:18:27.993 "zone_append": false, 00:18:27.993 "compare": false, 00:18:27.993 "compare_and_write": false, 00:18:27.993 "abort": true, 00:18:27.993 "seek_hole": false, 00:18:27.993 "seek_data": false, 00:18:27.993 "copy": true, 00:18:27.993 "nvme_iov_md": false 00:18:27.993 }, 00:18:27.993 "memory_domains": [ 00:18:27.993 { 00:18:27.993 "dma_device_id": "system", 00:18:27.993 "dma_device_type": 1 00:18:27.993 }, 00:18:27.993 { 00:18:27.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.993 "dma_device_type": 2 00:18:27.993 } 00:18:27.993 ], 00:18:27.993 "driver_specific": {} 00:18:27.993 }' 00:18:27.993 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:27.993 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:28.252 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:28.252 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:28.252 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:28.252 02:24:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:28.252 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:28.252 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:28.252 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:28.252 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:28.252 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:28.511 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:28.511 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:28.511 [2024-07-11 02:24:18.913228] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:28.511 [2024-07-11 02:24:18.913255] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:28.511 [2024-07-11 02:24:18.913296] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:28.770 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:28.770 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:18:28.771 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:28.771 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:18:28.771 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:28.771 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:18:28.771 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:28.771 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:18:28.771 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:28.771 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:28.771 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:28.771 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:28.771 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:28.771 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:28.771 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:28.771 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.771 02:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.771 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:28.771 "name": "Existed_Raid", 00:18:28.771 "uuid": "1c227e85-fdf8-409a-8535-1c8101434a1b", 
00:18:28.771 "strip_size_kb": 64, 00:18:28.771 "state": "offline", 00:18:28.771 "raid_level": "concat", 00:18:28.771 "superblock": true, 00:18:28.771 "num_base_bdevs": 3, 00:18:28.771 "num_base_bdevs_discovered": 2, 00:18:28.771 "num_base_bdevs_operational": 2, 00:18:28.771 "base_bdevs_list": [ 00:18:28.771 { 00:18:28.771 "name": null, 00:18:28.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.771 "is_configured": false, 00:18:28.771 "data_offset": 2048, 00:18:28.771 "data_size": 63488 00:18:28.771 }, 00:18:28.771 { 00:18:28.771 "name": "BaseBdev2", 00:18:28.771 "uuid": "488af4af-f98b-40fc-b624-b9b11985bd96", 00:18:28.771 "is_configured": true, 00:18:28.771 "data_offset": 2048, 00:18:28.771 "data_size": 63488 00:18:28.771 }, 00:18:28.771 { 00:18:28.771 "name": "BaseBdev3", 00:18:28.771 "uuid": "93a22744-48f1-4ffc-a459-34987161d145", 00:18:28.771 "is_configured": true, 00:18:28.771 "data_offset": 2048, 00:18:28.771 "data_size": 63488 00:18:28.771 } 00:18:28.771 ] 00:18:28.771 }' 00:18:28.771 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:28.771 02:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.705 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:29.705 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:29.705 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.705 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:29.705 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:29.705 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:29.705 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:29.963 [2024-07-11 02:24:20.334871] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:29.963 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:29.963 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:29.963 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.963 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:30.220 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:30.220 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:30.220 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:30.479 [2024-07-11 02:24:20.840418] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:30.479 [2024-07-11 02:24:20.840461] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x213acf0 name Existed_Raid, state offline 00:18:30.479 
02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:30.479 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:30.479 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.479 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:30.737 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:30.737 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:30.737 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:18:30.737 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:30.737 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:30.737 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:30.996 BaseBdev2 00:18:30.996 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:30.996 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:30.996 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:30.996 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:30.996 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:30.996 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:30.996 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:31.255 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:31.514 [ 00:18:31.514 { 00:18:31.514 "name": "BaseBdev2", 00:18:31.514 "aliases": [ 00:18:31.514 "6cd1e0cc-d26c-438f-8659-371d23d91bd2" 00:18:31.514 ], 00:18:31.514 "product_name": "Malloc disk", 00:18:31.514 "block_size": 512, 00:18:31.514 "num_blocks": 65536, 00:18:31.514 "uuid": "6cd1e0cc-d26c-438f-8659-371d23d91bd2", 00:18:31.514 "assigned_rate_limits": { 00:18:31.514 "rw_ios_per_sec": 0, 00:18:31.514 "rw_mbytes_per_sec": 0, 00:18:31.514 "r_mbytes_per_sec": 0, 00:18:31.514 "w_mbytes_per_sec": 0 00:18:31.514 }, 00:18:31.514 "claimed": false, 00:18:31.514 "zoned": false, 00:18:31.514 "supported_io_types": { 00:18:31.514 "read": true, 00:18:31.514 "write": true, 00:18:31.514 "unmap": true, 00:18:31.514 "flush": true, 00:18:31.514 "reset": true, 00:18:31.514 "nvme_admin": false, 00:18:31.514 "nvme_io": false, 00:18:31.514 "nvme_io_md": false, 00:18:31.514 "write_zeroes": true, 00:18:31.514 "zcopy": true, 00:18:31.514 "get_zone_info": false, 00:18:31.514 "zone_management": false, 00:18:31.514 "zone_append": false, 00:18:31.514 "compare": false, 00:18:31.514 "compare_and_write": false, 00:18:31.514 "abort": true, 
00:18:31.514 "seek_hole": false, 00:18:31.514 "seek_data": false, 00:18:31.514 "copy": true, 00:18:31.514 "nvme_iov_md": false 00:18:31.514 }, 00:18:31.514 "memory_domains": [ 00:18:31.514 { 00:18:31.514 "dma_device_id": "system", 00:18:31.514 "dma_device_type": 1 00:18:31.514 }, 00:18:31.514 { 00:18:31.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.514 "dma_device_type": 2 00:18:31.514 } 00:18:31.514 ], 00:18:31.514 "driver_specific": {} 00:18:31.514 } 00:18:31.514 ] 00:18:31.515 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:31.515 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:31.515 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:31.515 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:31.774 BaseBdev3 00:18:31.774 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:31.774 02:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:31.774 02:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:31.774 02:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:31.774 02:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:31.774 02:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:31.774 02:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:32.033 02:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:32.291 [ 00:18:32.291 { 00:18:32.291 "name": "BaseBdev3", 00:18:32.291 "aliases": [ 00:18:32.291 "2f263156-1a6e-4cac-8def-35c9f5728de4" 00:18:32.291 ], 00:18:32.291 "product_name": "Malloc disk", 00:18:32.291 "block_size": 512, 00:18:32.291 "num_blocks": 65536, 00:18:32.291 "uuid": "2f263156-1a6e-4cac-8def-35c9f5728de4", 00:18:32.291 "assigned_rate_limits": { 00:18:32.291 "rw_ios_per_sec": 0, 00:18:32.291 "rw_mbytes_per_sec": 0, 00:18:32.291 "r_mbytes_per_sec": 0, 00:18:32.291 "w_mbytes_per_sec": 0 00:18:32.291 }, 00:18:32.291 "claimed": false, 00:18:32.291 "zoned": false, 00:18:32.291 "supported_io_types": { 00:18:32.291 "read": true, 00:18:32.291 "write": true, 00:18:32.291 "unmap": true, 00:18:32.291 "flush": true, 00:18:32.291 "reset": true, 00:18:32.291 "nvme_admin": false, 00:18:32.291 "nvme_io": false, 00:18:32.291 "nvme_io_md": false, 00:18:32.291 "write_zeroes": true, 00:18:32.291 "zcopy": true, 00:18:32.291 "get_zone_info": false, 00:18:32.291 "zone_management": false, 00:18:32.291 "zone_append": false, 00:18:32.291 "compare": false, 00:18:32.291 "compare_and_write": false, 00:18:32.291 "abort": true, 00:18:32.291 "seek_hole": false, 00:18:32.291 "seek_data": false, 00:18:32.291 "copy": true, 00:18:32.291 "nvme_iov_md": false 00:18:32.291 }, 00:18:32.291 "memory_domains": [ 00:18:32.291 { 00:18:32.291 "dma_device_id": "system", 00:18:32.291 
"dma_device_type": 1 00:18:32.291 }, 00:18:32.291 { 00:18:32.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.291 "dma_device_type": 2 00:18:32.291 } 00:18:32.291 ], 00:18:32.291 "driver_specific": {} 00:18:32.291 } 00:18:32.291 ] 00:18:32.291 02:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:32.291 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:32.291 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:32.291 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:32.550 [2024-07-11 02:24:22.777911] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:32.550 [2024-07-11 02:24:22.777952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:32.550 [2024-07-11 02:24:22.777973] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:32.550 [2024-07-11 02:24:22.779253] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:32.550 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:32.550 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:32.550 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:32.550 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:32.550 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:32.550 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:32.550 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:32.550 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:32.550 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:32.550 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:32.550 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.550 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.809 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:32.809 "name": "Existed_Raid", 00:18:32.809 "uuid": "35b6b138-5e5e-4923-bff4-945ca5ce1981", 00:18:32.809 "strip_size_kb": 64, 00:18:32.809 "state": "configuring", 00:18:32.809 "raid_level": "concat", 00:18:32.809 "superblock": true, 00:18:32.809 "num_base_bdevs": 3, 00:18:32.809 "num_base_bdevs_discovered": 2, 00:18:32.809 "num_base_bdevs_operational": 3, 00:18:32.809 "base_bdevs_list": [ 00:18:32.809 { 00:18:32.809 "name": "BaseBdev1", 00:18:32.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.809 "is_configured": false, 00:18:32.809 "data_offset": 0, 
00:18:32.809 "data_size": 0 00:18:32.809 }, 00:18:32.809 { 00:18:32.809 "name": "BaseBdev2", 00:18:32.809 "uuid": "6cd1e0cc-d26c-438f-8659-371d23d91bd2", 00:18:32.809 "is_configured": true, 00:18:32.809 "data_offset": 2048, 00:18:32.809 "data_size": 63488 00:18:32.809 }, 00:18:32.809 { 00:18:32.809 "name": "BaseBdev3", 00:18:32.809 "uuid": "2f263156-1a6e-4cac-8def-35c9f5728de4", 00:18:32.809 "is_configured": true, 00:18:32.809 "data_offset": 2048, 00:18:32.809 "data_size": 63488 00:18:32.809 } 00:18:32.809 ] 00:18:32.809 }' 00:18:32.809 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:32.809 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.389 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:33.657 [2024-07-11 02:24:23.856731] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:33.657 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:33.657 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:33.657 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:33.657 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:33.657 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:33.657 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:33.657 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:33.657 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:33.657 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:33.657 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:33.657 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.657 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.916 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:33.916 "name": "Existed_Raid", 00:18:33.916 "uuid": "35b6b138-5e5e-4923-bff4-945ca5ce1981", 00:18:33.916 "strip_size_kb": 64, 00:18:33.916 "state": "configuring", 00:18:33.916 "raid_level": "concat", 00:18:33.916 "superblock": true, 00:18:33.916 "num_base_bdevs": 3, 00:18:33.916 "num_base_bdevs_discovered": 1, 00:18:33.916 "num_base_bdevs_operational": 3, 00:18:33.916 "base_bdevs_list": [ 00:18:33.916 { 00:18:33.916 "name": "BaseBdev1", 00:18:33.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.916 "is_configured": false, 00:18:33.916 "data_offset": 0, 00:18:33.916 "data_size": 0 00:18:33.916 }, 00:18:33.916 { 00:18:33.916 "name": null, 00:18:33.916 "uuid": "6cd1e0cc-d26c-438f-8659-371d23d91bd2", 00:18:33.916 "is_configured": false, 00:18:33.916 "data_offset": 2048, 00:18:33.916 "data_size": 63488 00:18:33.916 }, 
00:18:33.916 { 00:18:33.916 "name": "BaseBdev3", 00:18:33.916 "uuid": "2f263156-1a6e-4cac-8def-35c9f5728de4", 00:18:33.916 "is_configured": true, 00:18:33.916 "data_offset": 2048, 00:18:33.916 "data_size": 63488 00:18:33.916 } 00:18:33.916 ] 00:18:33.916 }' 00:18:33.916 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:33.916 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.483 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.483 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:34.742 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:34.742 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:35.001 [2024-07-11 02:24:25.216703] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:35.001 BaseBdev1 00:18:35.001 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:35.001 02:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:35.001 02:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:35.001 02:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:35.001 02:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:35.001 02:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:35.001 02:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:35.260 02:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:35.519 [ 00:18:35.519 { 00:18:35.519 "name": "BaseBdev1", 00:18:35.519 "aliases": [ 00:18:35.519 "1f0d97a2-33ee-4eee-bcac-5c637cf3d0be" 00:18:35.519 ], 00:18:35.519 "product_name": "Malloc disk", 00:18:35.519 "block_size": 512, 00:18:35.519 "num_blocks": 65536, 00:18:35.519 "uuid": "1f0d97a2-33ee-4eee-bcac-5c637cf3d0be", 00:18:35.519 "assigned_rate_limits": { 00:18:35.519 "rw_ios_per_sec": 0, 00:18:35.519 "rw_mbytes_per_sec": 0, 00:18:35.519 "r_mbytes_per_sec": 0, 00:18:35.519 "w_mbytes_per_sec": 0 00:18:35.519 }, 00:18:35.519 "claimed": true, 00:18:35.519 "claim_type": "exclusive_write", 00:18:35.519 "zoned": false, 00:18:35.519 "supported_io_types": { 00:18:35.519 "read": true, 00:18:35.519 "write": true, 00:18:35.519 "unmap": true, 00:18:35.519 "flush": true, 00:18:35.519 "reset": true, 00:18:35.519 "nvme_admin": false, 00:18:35.519 "nvme_io": false, 00:18:35.519 "nvme_io_md": false, 00:18:35.519 "write_zeroes": true, 00:18:35.519 "zcopy": true, 00:18:35.519 "get_zone_info": false, 00:18:35.519 "zone_management": false, 00:18:35.519 "zone_append": false, 00:18:35.519 "compare": false, 00:18:35.519 "compare_and_write": 
false, 00:18:35.519 "abort": true, 00:18:35.519 "seek_hole": false, 00:18:35.519 "seek_data": false, 00:18:35.519 "copy": true, 00:18:35.519 "nvme_iov_md": false 00:18:35.519 }, 00:18:35.519 "memory_domains": [ 00:18:35.519 { 00:18:35.519 "dma_device_id": "system", 00:18:35.519 "dma_device_type": 1 00:18:35.519 }, 00:18:35.519 { 00:18:35.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.519 "dma_device_type": 2 00:18:35.519 } 00:18:35.519 ], 00:18:35.519 "driver_specific": {} 00:18:35.519 } 00:18:35.519 ] 00:18:35.519 02:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:35.519 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:35.519 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:35.519 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:35.519 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:35.519 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:35.519 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:35.519 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:35.519 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:35.519 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:35.519 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:35.519 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.519 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.778 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:35.778 "name": "Existed_Raid", 00:18:35.778 "uuid": "35b6b138-5e5e-4923-bff4-945ca5ce1981", 00:18:35.778 "strip_size_kb": 64, 00:18:35.778 "state": "configuring", 00:18:35.778 "raid_level": "concat", 00:18:35.778 "superblock": true, 00:18:35.778 "num_base_bdevs": 3, 00:18:35.778 "num_base_bdevs_discovered": 2, 00:18:35.778 "num_base_bdevs_operational": 3, 00:18:35.778 "base_bdevs_list": [ 00:18:35.778 { 00:18:35.778 "name": "BaseBdev1", 00:18:35.778 "uuid": "1f0d97a2-33ee-4eee-bcac-5c637cf3d0be", 00:18:35.778 "is_configured": true, 00:18:35.778 "data_offset": 2048, 00:18:35.778 "data_size": 63488 00:18:35.778 }, 00:18:35.778 { 00:18:35.778 "name": null, 00:18:35.778 "uuid": "6cd1e0cc-d26c-438f-8659-371d23d91bd2", 00:18:35.778 "is_configured": false, 00:18:35.778 "data_offset": 2048, 00:18:35.778 "data_size": 63488 00:18:35.778 }, 00:18:35.778 { 00:18:35.778 "name": "BaseBdev3", 00:18:35.778 "uuid": "2f263156-1a6e-4cac-8def-35c9f5728de4", 00:18:35.778 "is_configured": true, 00:18:35.778 "data_offset": 2048, 00:18:35.778 "data_size": 63488 00:18:35.778 } 00:18:35.778 ] 00:18:35.778 }' 00:18:35.778 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:35.778 02:24:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:36.345 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:36.345 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.604 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:36.604 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:36.864 [2024-07-11 02:24:27.069667] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:36.864 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:36.864 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:36.864 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:36.864 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:36.864 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:36.864 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:36.864 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:36.864 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:36.864 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:36.864 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:36.864 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.864 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:37.123 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:37.123 "name": "Existed_Raid", 00:18:37.123 "uuid": "35b6b138-5e5e-4923-bff4-945ca5ce1981", 00:18:37.123 "strip_size_kb": 64, 00:18:37.123 "state": "configuring", 00:18:37.123 "raid_level": "concat", 00:18:37.123 "superblock": true, 00:18:37.123 "num_base_bdevs": 3, 00:18:37.123 "num_base_bdevs_discovered": 1, 00:18:37.123 "num_base_bdevs_operational": 3, 00:18:37.123 "base_bdevs_list": [ 00:18:37.123 { 00:18:37.123 "name": "BaseBdev1", 00:18:37.123 "uuid": "1f0d97a2-33ee-4eee-bcac-5c637cf3d0be", 00:18:37.123 "is_configured": true, 00:18:37.123 "data_offset": 2048, 00:18:37.123 "data_size": 63488 00:18:37.123 }, 00:18:37.123 { 00:18:37.123 "name": null, 00:18:37.123 "uuid": "6cd1e0cc-d26c-438f-8659-371d23d91bd2", 00:18:37.123 "is_configured": false, 00:18:37.123 "data_offset": 2048, 00:18:37.123 "data_size": 63488 00:18:37.123 }, 00:18:37.123 { 00:18:37.123 "name": null, 00:18:37.123 "uuid": "2f263156-1a6e-4cac-8def-35c9f5728de4", 00:18:37.123 "is_configured": false, 00:18:37.123 "data_offset": 2048, 00:18:37.123 "data_size": 63488 00:18:37.123 } 00:18:37.123 ] 00:18:37.123 }' 
00:18:37.123 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:37.123 02:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.692 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.692 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:37.952 02:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:37.952 02:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:38.211 [2024-07-11 02:24:28.401229] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:38.211 02:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:38.211 02:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:38.211 02:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:38.211 02:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:38.211 02:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:38.211 02:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:38.211 02:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:38.211 02:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:38.211 02:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:38.211 02:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:38.211 02:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.211 02:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:38.470 02:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:38.470 "name": "Existed_Raid", 00:18:38.470 "uuid": "35b6b138-5e5e-4923-bff4-945ca5ce1981", 00:18:38.470 "strip_size_kb": 64, 00:18:38.470 "state": "configuring", 00:18:38.470 "raid_level": "concat", 00:18:38.470 "superblock": true, 00:18:38.470 "num_base_bdevs": 3, 00:18:38.470 "num_base_bdevs_discovered": 2, 00:18:38.470 "num_base_bdevs_operational": 3, 00:18:38.470 "base_bdevs_list": [ 00:18:38.470 { 00:18:38.470 "name": "BaseBdev1", 00:18:38.470 "uuid": "1f0d97a2-33ee-4eee-bcac-5c637cf3d0be", 00:18:38.470 "is_configured": true, 00:18:38.470 "data_offset": 2048, 00:18:38.470 "data_size": 63488 00:18:38.470 }, 00:18:38.470 { 00:18:38.470 "name": null, 00:18:38.470 "uuid": "6cd1e0cc-d26c-438f-8659-371d23d91bd2", 00:18:38.470 "is_configured": false, 00:18:38.470 "data_offset": 2048, 00:18:38.470 "data_size": 63488 00:18:38.470 }, 00:18:38.470 { 00:18:38.470 "name": "BaseBdev3", 
00:18:38.470 "uuid": "2f263156-1a6e-4cac-8def-35c9f5728de4", 00:18:38.470 "is_configured": true, 00:18:38.470 "data_offset": 2048, 00:18:38.470 "data_size": 63488 00:18:38.470 } 00:18:38.470 ] 00:18:38.470 }' 00:18:38.470 02:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:38.470 02:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.038 02:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:39.038 02:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.297 02:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:39.297 02:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:39.556 [2024-07-11 02:24:29.764887] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:39.556 02:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:39.556 02:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:39.556 02:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:39.556 02:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:39.556 02:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:39.556 02:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:39.556 02:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:39.556 02:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:39.556 02:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:39.556 02:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:39.556 02:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.556 02:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.816 02:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:39.816 "name": "Existed_Raid", 00:18:39.816 "uuid": "35b6b138-5e5e-4923-bff4-945ca5ce1981", 00:18:39.816 "strip_size_kb": 64, 00:18:39.816 "state": "configuring", 00:18:39.816 "raid_level": "concat", 00:18:39.816 "superblock": true, 00:18:39.816 "num_base_bdevs": 3, 00:18:39.816 "num_base_bdevs_discovered": 1, 00:18:39.816 "num_base_bdevs_operational": 3, 00:18:39.816 "base_bdevs_list": [ 00:18:39.816 { 00:18:39.816 "name": null, 00:18:39.816 "uuid": "1f0d97a2-33ee-4eee-bcac-5c637cf3d0be", 00:18:39.816 "is_configured": false, 00:18:39.816 "data_offset": 2048, 00:18:39.816 "data_size": 63488 00:18:39.816 }, 00:18:39.816 { 00:18:39.816 "name": null, 00:18:39.816 "uuid": "6cd1e0cc-d26c-438f-8659-371d23d91bd2", 
00:18:39.816 "is_configured": false, 00:18:39.816 "data_offset": 2048, 00:18:39.816 "data_size": 63488 00:18:39.816 }, 00:18:39.816 { 00:18:39.816 "name": "BaseBdev3", 00:18:39.816 "uuid": "2f263156-1a6e-4cac-8def-35c9f5728de4", 00:18:39.816 "is_configured": true, 00:18:39.816 "data_offset": 2048, 00:18:39.816 "data_size": 63488 00:18:39.816 } 00:18:39.816 ] 00:18:39.816 }' 00:18:39.816 02:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:39.816 02:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.385 02:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.385 02:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:40.644 02:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:40.644 02:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:40.903 [2024-07-11 02:24:31.120795] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:40.903 02:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:40.903 02:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:40.903 02:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:40.903 02:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:40.903 02:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:40.903 02:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:40.903 02:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:40.903 02:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:40.903 02:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:40.903 02:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:40.903 02:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.903 02:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.162 02:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:41.162 "name": "Existed_Raid", 00:18:41.162 "uuid": "35b6b138-5e5e-4923-bff4-945ca5ce1981", 00:18:41.162 "strip_size_kb": 64, 00:18:41.162 "state": "configuring", 00:18:41.162 "raid_level": "concat", 00:18:41.162 "superblock": true, 00:18:41.162 "num_base_bdevs": 3, 00:18:41.162 "num_base_bdevs_discovered": 2, 00:18:41.162 "num_base_bdevs_operational": 3, 00:18:41.162 "base_bdevs_list": [ 00:18:41.162 { 00:18:41.162 "name": null, 00:18:41.162 "uuid": "1f0d97a2-33ee-4eee-bcac-5c637cf3d0be", 00:18:41.162 
"is_configured": false, 00:18:41.162 "data_offset": 2048, 00:18:41.162 "data_size": 63488 00:18:41.162 }, 00:18:41.162 { 00:18:41.162 "name": "BaseBdev2", 00:18:41.162 "uuid": "6cd1e0cc-d26c-438f-8659-371d23d91bd2", 00:18:41.162 "is_configured": true, 00:18:41.162 "data_offset": 2048, 00:18:41.162 "data_size": 63488 00:18:41.162 }, 00:18:41.162 { 00:18:41.162 "name": "BaseBdev3", 00:18:41.162 "uuid": "2f263156-1a6e-4cac-8def-35c9f5728de4", 00:18:41.162 "is_configured": true, 00:18:41.162 "data_offset": 2048, 00:18:41.162 "data_size": 63488 00:18:41.162 } 00:18:41.162 ] 00:18:41.162 }' 00:18:41.162 02:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:41.162 02:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.746 02:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.746 02:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:42.016 02:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:42.016 02:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.016 02:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:42.275 02:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 1f0d97a2-33ee-4eee-bcac-5c637cf3d0be 00:18:42.534 [2024-07-11 02:24:32.724298] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:42.534 [2024-07-11 02:24:32.724439] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f8ba70 00:18:42.534 [2024-07-11 02:24:32.724452] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:42.534 [2024-07-11 02:24:32.724615] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f87850 00:18:42.534 [2024-07-11 02:24:32.724732] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f8ba70 00:18:42.534 [2024-07-11 02:24:32.724743] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1f8ba70 00:18:42.534 [2024-07-11 02:24:32.724864] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.534 NewBaseBdev 00:18:42.534 02:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:42.534 02:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:18:42.534 02:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:42.534 02:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:42.534 02:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:42.534 02:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:42.534 02:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:42.794 02:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:42.794 [ 00:18:42.794 { 00:18:42.794 "name": "NewBaseBdev", 00:18:42.794 "aliases": [ 00:18:42.794 "1f0d97a2-33ee-4eee-bcac-5c637cf3d0be" 00:18:42.794 ], 00:18:42.794 "product_name": "Malloc disk", 00:18:42.794 "block_size": 512, 00:18:42.794 "num_blocks": 65536, 00:18:42.794 "uuid": "1f0d97a2-33ee-4eee-bcac-5c637cf3d0be", 00:18:42.794 "assigned_rate_limits": { 00:18:42.794 "rw_ios_per_sec": 0, 00:18:42.794 "rw_mbytes_per_sec": 0, 00:18:42.794 "r_mbytes_per_sec": 0, 00:18:42.794 "w_mbytes_per_sec": 0 00:18:42.794 }, 00:18:42.794 "claimed": true, 00:18:42.794 "claim_type": "exclusive_write", 00:18:42.794 "zoned": false, 00:18:42.794 "supported_io_types": { 00:18:42.794 "read": true, 00:18:42.794 "write": true, 00:18:42.794 "unmap": true, 00:18:42.794 "flush": true, 00:18:42.794 "reset": true, 00:18:42.794 "nvme_admin": false, 00:18:42.794 "nvme_io": false, 00:18:42.794 "nvme_io_md": false, 00:18:42.794 "write_zeroes": true, 00:18:42.794 "zcopy": true, 00:18:42.794 "get_zone_info": false, 00:18:42.794 "zone_management": false, 00:18:42.794 "zone_append": false, 00:18:42.794 "compare": false, 00:18:42.794 "compare_and_write": false, 00:18:42.794 "abort": true, 00:18:42.794 "seek_hole": false, 00:18:42.794 "seek_data": false, 00:18:42.794 "copy": true, 00:18:42.794 "nvme_iov_md": false 00:18:42.794 }, 00:18:42.794 "memory_domains": [ 00:18:42.794 { 00:18:42.794 "dma_device_id": "system", 00:18:42.794 "dma_device_type": 1 00:18:42.794 }, 00:18:42.794 { 00:18:42.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.794 "dma_device_type": 2 00:18:42.794 } 00:18:42.794 ], 00:18:42.794 "driver_specific": {} 00:18:42.794 } 00:18:42.794 ] 00:18:43.053 02:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:43.053 02:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:18:43.053 02:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:43.053 02:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:43.053 02:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:43.053 02:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:43.053 02:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:43.053 02:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:43.053 02:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:43.053 02:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:43.053 02:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:43.053 02:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.053 02:24:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.053 02:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:43.053 "name": "Existed_Raid", 00:18:43.053 "uuid": "35b6b138-5e5e-4923-bff4-945ca5ce1981", 00:18:43.053 "strip_size_kb": 64, 00:18:43.053 "state": "online", 00:18:43.053 "raid_level": "concat", 00:18:43.053 "superblock": true, 00:18:43.053 "num_base_bdevs": 3, 00:18:43.053 "num_base_bdevs_discovered": 3, 00:18:43.053 "num_base_bdevs_operational": 3, 00:18:43.053 "base_bdevs_list": [ 00:18:43.053 { 00:18:43.053 "name": "NewBaseBdev", 00:18:43.053 "uuid": "1f0d97a2-33ee-4eee-bcac-5c637cf3d0be", 00:18:43.053 "is_configured": true, 00:18:43.053 "data_offset": 2048, 00:18:43.053 "data_size": 63488 00:18:43.053 }, 00:18:43.053 { 00:18:43.053 "name": "BaseBdev2", 00:18:43.053 "uuid": "6cd1e0cc-d26c-438f-8659-371d23d91bd2", 00:18:43.053 "is_configured": true, 00:18:43.053 "data_offset": 2048, 00:18:43.053 "data_size": 63488 00:18:43.053 }, 00:18:43.053 { 00:18:43.053 "name": "BaseBdev3", 00:18:43.053 "uuid": "2f263156-1a6e-4cac-8def-35c9f5728de4", 00:18:43.053 "is_configured": true, 00:18:43.053 "data_offset": 2048, 00:18:43.053 "data_size": 63488 00:18:43.053 } 00:18:43.053 ] 00:18:43.053 }' 00:18:43.053 02:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:43.053 02:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.994 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:43.994 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:43.994 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:43.994 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:43.994 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:43.994 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:43.994 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:43.994 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:43.994 [2024-07-11 02:24:34.284755] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:43.994 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:43.994 "name": "Existed_Raid", 00:18:43.994 "aliases": [ 00:18:43.994 "35b6b138-5e5e-4923-bff4-945ca5ce1981" 00:18:43.994 ], 00:18:43.994 "product_name": "Raid Volume", 00:18:43.994 "block_size": 512, 00:18:43.994 "num_blocks": 190464, 00:18:43.994 "uuid": "35b6b138-5e5e-4923-bff4-945ca5ce1981", 00:18:43.994 "assigned_rate_limits": { 00:18:43.994 "rw_ios_per_sec": 0, 00:18:43.994 "rw_mbytes_per_sec": 0, 00:18:43.994 "r_mbytes_per_sec": 0, 00:18:43.994 "w_mbytes_per_sec": 0 00:18:43.994 }, 00:18:43.994 "claimed": false, 00:18:43.994 "zoned": false, 00:18:43.994 "supported_io_types": { 00:18:43.994 "read": true, 00:18:43.994 "write": true, 00:18:43.994 "unmap": true, 00:18:43.994 "flush": true, 00:18:43.994 "reset": true, 00:18:43.994 "nvme_admin": false, 00:18:43.994 "nvme_io": false, 00:18:43.994 
"nvme_io_md": false, 00:18:43.994 "write_zeroes": true, 00:18:43.994 "zcopy": false, 00:18:43.994 "get_zone_info": false, 00:18:43.994 "zone_management": false, 00:18:43.994 "zone_append": false, 00:18:43.994 "compare": false, 00:18:43.994 "compare_and_write": false, 00:18:43.994 "abort": false, 00:18:43.994 "seek_hole": false, 00:18:43.994 "seek_data": false, 00:18:43.994 "copy": false, 00:18:43.994 "nvme_iov_md": false 00:18:43.994 }, 00:18:43.994 "memory_domains": [ 00:18:43.994 { 00:18:43.994 "dma_device_id": "system", 00:18:43.994 "dma_device_type": 1 00:18:43.994 }, 00:18:43.994 { 00:18:43.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.994 "dma_device_type": 2 00:18:43.994 }, 00:18:43.994 { 00:18:43.994 "dma_device_id": "system", 00:18:43.994 "dma_device_type": 1 00:18:43.994 }, 00:18:43.994 { 00:18:43.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.994 "dma_device_type": 2 00:18:43.994 }, 00:18:43.994 { 00:18:43.994 "dma_device_id": "system", 00:18:43.994 "dma_device_type": 1 00:18:43.994 }, 00:18:43.994 { 00:18:43.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.994 "dma_device_type": 2 00:18:43.994 } 00:18:43.994 ], 00:18:43.994 "driver_specific": { 00:18:43.994 "raid": { 00:18:43.994 "uuid": "35b6b138-5e5e-4923-bff4-945ca5ce1981", 00:18:43.994 "strip_size_kb": 64, 00:18:43.994 "state": "online", 00:18:43.994 "raid_level": "concat", 00:18:43.994 "superblock": true, 00:18:43.994 "num_base_bdevs": 3, 00:18:43.994 "num_base_bdevs_discovered": 3, 00:18:43.994 "num_base_bdevs_operational": 3, 00:18:43.994 "base_bdevs_list": [ 00:18:43.994 { 00:18:43.994 "name": "NewBaseBdev", 00:18:43.994 "uuid": "1f0d97a2-33ee-4eee-bcac-5c637cf3d0be", 00:18:43.994 "is_configured": true, 00:18:43.994 "data_offset": 2048, 00:18:43.994 "data_size": 63488 00:18:43.994 }, 00:18:43.994 { 00:18:43.994 "name": "BaseBdev2", 00:18:43.994 "uuid": "6cd1e0cc-d26c-438f-8659-371d23d91bd2", 00:18:43.994 "is_configured": true, 00:18:43.994 "data_offset": 2048, 00:18:43.994 "data_size": 63488 00:18:43.994 }, 00:18:43.994 { 00:18:43.994 "name": "BaseBdev3", 00:18:43.994 "uuid": "2f263156-1a6e-4cac-8def-35c9f5728de4", 00:18:43.994 "is_configured": true, 00:18:43.994 "data_offset": 2048, 00:18:43.994 "data_size": 63488 00:18:43.994 } 00:18:43.994 ] 00:18:43.994 } 00:18:43.994 } 00:18:43.994 }' 00:18:43.994 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:43.994 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:43.994 BaseBdev2 00:18:43.994 BaseBdev3' 00:18:43.994 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:43.994 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:43.994 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:44.254 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:44.254 "name": "NewBaseBdev", 00:18:44.254 "aliases": [ 00:18:44.254 "1f0d97a2-33ee-4eee-bcac-5c637cf3d0be" 00:18:44.254 ], 00:18:44.254 "product_name": "Malloc disk", 00:18:44.254 "block_size": 512, 00:18:44.254 "num_blocks": 65536, 00:18:44.254 "uuid": "1f0d97a2-33ee-4eee-bcac-5c637cf3d0be", 00:18:44.254 "assigned_rate_limits": { 00:18:44.254 
"rw_ios_per_sec": 0, 00:18:44.254 "rw_mbytes_per_sec": 0, 00:18:44.254 "r_mbytes_per_sec": 0, 00:18:44.254 "w_mbytes_per_sec": 0 00:18:44.254 }, 00:18:44.254 "claimed": true, 00:18:44.254 "claim_type": "exclusive_write", 00:18:44.254 "zoned": false, 00:18:44.254 "supported_io_types": { 00:18:44.254 "read": true, 00:18:44.254 "write": true, 00:18:44.254 "unmap": true, 00:18:44.254 "flush": true, 00:18:44.254 "reset": true, 00:18:44.254 "nvme_admin": false, 00:18:44.254 "nvme_io": false, 00:18:44.254 "nvme_io_md": false, 00:18:44.254 "write_zeroes": true, 00:18:44.254 "zcopy": true, 00:18:44.254 "get_zone_info": false, 00:18:44.254 "zone_management": false, 00:18:44.254 "zone_append": false, 00:18:44.254 "compare": false, 00:18:44.254 "compare_and_write": false, 00:18:44.254 "abort": true, 00:18:44.254 "seek_hole": false, 00:18:44.254 "seek_data": false, 00:18:44.254 "copy": true, 00:18:44.254 "nvme_iov_md": false 00:18:44.254 }, 00:18:44.254 "memory_domains": [ 00:18:44.254 { 00:18:44.254 "dma_device_id": "system", 00:18:44.254 "dma_device_type": 1 00:18:44.254 }, 00:18:44.254 { 00:18:44.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.254 "dma_device_type": 2 00:18:44.254 } 00:18:44.254 ], 00:18:44.254 "driver_specific": {} 00:18:44.254 }' 00:18:44.254 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:44.254 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:44.514 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:44.514 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:44.514 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:44.514 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:44.514 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:44.514 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:44.514 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:44.514 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:44.514 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:44.773 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:44.773 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:44.774 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:44.774 02:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:44.774 02:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:44.774 "name": "BaseBdev2", 00:18:44.774 "aliases": [ 00:18:44.774 "6cd1e0cc-d26c-438f-8659-371d23d91bd2" 00:18:44.774 ], 00:18:44.774 "product_name": "Malloc disk", 00:18:44.774 "block_size": 512, 00:18:44.774 "num_blocks": 65536, 00:18:44.774 "uuid": "6cd1e0cc-d26c-438f-8659-371d23d91bd2", 00:18:44.774 "assigned_rate_limits": { 00:18:44.774 "rw_ios_per_sec": 0, 00:18:44.774 "rw_mbytes_per_sec": 0, 00:18:44.774 "r_mbytes_per_sec": 0, 00:18:44.774 "w_mbytes_per_sec": 0 
00:18:44.774 }, 00:18:44.774 "claimed": true, 00:18:44.774 "claim_type": "exclusive_write", 00:18:44.774 "zoned": false, 00:18:44.774 "supported_io_types": { 00:18:44.774 "read": true, 00:18:44.774 "write": true, 00:18:44.774 "unmap": true, 00:18:44.774 "flush": true, 00:18:44.774 "reset": true, 00:18:44.774 "nvme_admin": false, 00:18:44.774 "nvme_io": false, 00:18:44.774 "nvme_io_md": false, 00:18:44.774 "write_zeroes": true, 00:18:44.774 "zcopy": true, 00:18:44.774 "get_zone_info": false, 00:18:44.774 "zone_management": false, 00:18:44.774 "zone_append": false, 00:18:44.774 "compare": false, 00:18:44.774 "compare_and_write": false, 00:18:44.774 "abort": true, 00:18:44.774 "seek_hole": false, 00:18:44.774 "seek_data": false, 00:18:44.774 "copy": true, 00:18:44.774 "nvme_iov_md": false 00:18:44.774 }, 00:18:44.774 "memory_domains": [ 00:18:44.774 { 00:18:44.774 "dma_device_id": "system", 00:18:44.774 "dma_device_type": 1 00:18:44.774 }, 00:18:44.774 { 00:18:44.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.774 "dma_device_type": 2 00:18:44.774 } 00:18:44.774 ], 00:18:44.774 "driver_specific": {} 00:18:44.774 }' 00:18:44.774 02:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:44.774 02:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:45.033 02:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:45.033 02:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:45.033 02:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:45.033 02:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:45.033 02:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:45.033 02:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:45.033 02:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:45.033 02:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:45.033 02:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:45.292 02:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:45.292 02:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:45.292 02:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:45.292 02:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:45.292 02:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:45.292 "name": "BaseBdev3", 00:18:45.292 "aliases": [ 00:18:45.292 "2f263156-1a6e-4cac-8def-35c9f5728de4" 00:18:45.292 ], 00:18:45.292 "product_name": "Malloc disk", 00:18:45.292 "block_size": 512, 00:18:45.292 "num_blocks": 65536, 00:18:45.292 "uuid": "2f263156-1a6e-4cac-8def-35c9f5728de4", 00:18:45.292 "assigned_rate_limits": { 00:18:45.292 "rw_ios_per_sec": 0, 00:18:45.292 "rw_mbytes_per_sec": 0, 00:18:45.292 "r_mbytes_per_sec": 0, 00:18:45.292 "w_mbytes_per_sec": 0 00:18:45.292 }, 00:18:45.292 "claimed": true, 00:18:45.292 "claim_type": "exclusive_write", 00:18:45.292 "zoned": false, 00:18:45.292 
"supported_io_types": { 00:18:45.292 "read": true, 00:18:45.292 "write": true, 00:18:45.292 "unmap": true, 00:18:45.292 "flush": true, 00:18:45.292 "reset": true, 00:18:45.292 "nvme_admin": false, 00:18:45.292 "nvme_io": false, 00:18:45.292 "nvme_io_md": false, 00:18:45.292 "write_zeroes": true, 00:18:45.292 "zcopy": true, 00:18:45.292 "get_zone_info": false, 00:18:45.292 "zone_management": false, 00:18:45.292 "zone_append": false, 00:18:45.292 "compare": false, 00:18:45.292 "compare_and_write": false, 00:18:45.292 "abort": true, 00:18:45.292 "seek_hole": false, 00:18:45.292 "seek_data": false, 00:18:45.292 "copy": true, 00:18:45.292 "nvme_iov_md": false 00:18:45.292 }, 00:18:45.292 "memory_domains": [ 00:18:45.292 { 00:18:45.292 "dma_device_id": "system", 00:18:45.292 "dma_device_type": 1 00:18:45.292 }, 00:18:45.292 { 00:18:45.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.292 "dma_device_type": 2 00:18:45.292 } 00:18:45.292 ], 00:18:45.292 "driver_specific": {} 00:18:45.292 }' 00:18:45.292 02:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:45.552 02:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:45.552 02:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:45.552 02:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:45.552 02:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:45.552 02:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:45.552 02:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:45.552 02:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:45.812 02:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:45.812 02:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:45.812 02:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:45.812 02:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:45.812 02:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:46.071 [2024-07-11 02:24:36.301827] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:46.071 [2024-07-11 02:24:36.301852] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:46.071 [2024-07-11 02:24:36.301901] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:46.071 [2024-07-11 02:24:36.301954] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:46.071 [2024-07-11 02:24:36.301966] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f8ba70 name Existed_Raid, state offline 00:18:46.071 02:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1931529 00:18:46.071 02:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1931529 ']' 00:18:46.071 02:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 1931529 00:18:46.071 02:24:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:18:46.071 02:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:46.071 02:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1931529 00:18:46.071 02:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:46.071 02:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:46.071 02:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1931529' 00:18:46.071 killing process with pid 1931529 00:18:46.071 02:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 1931529 00:18:46.071 [2024-07-11 02:24:36.369606] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:46.071 02:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 1931529 00:18:46.071 [2024-07-11 02:24:36.396672] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:46.331 02:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:18:46.331 00:18:46.331 real 0m29.204s 00:18:46.331 user 0m53.919s 00:18:46.331 sys 0m5.275s 00:18:46.331 02:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:46.331 02:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.331 ************************************ 00:18:46.331 END TEST raid_state_function_test_sb 00:18:46.331 ************************************ 00:18:46.331 02:24:36 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:46.331 02:24:36 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:18:46.331 02:24:36 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:46.331 02:24:36 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:46.331 02:24:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:46.331 ************************************ 00:18:46.331 START TEST raid_superblock_test 00:18:46.331 ************************************ 00:18:46.331 02:24:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 3 00:18:46.331 02:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:18:46.331 02:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:18:46.331 02:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:18:46.331 02:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:18:46.331 02:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:18:46.331 02:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:18:46.331 02:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:18:46.331 02:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:18:46.331 02:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:18:46.331 02:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:18:46.331 02:24:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:18:46.331 02:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:18:46.331 02:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:18:46.331 02:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:18:46.331 02:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:18:46.331 02:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:18:46.331 02:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1935981 00:18:46.331 02:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:46.331 02:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1935981 /var/tmp/spdk-raid.sock 00:18:46.331 02:24:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 1935981 ']' 00:18:46.331 02:24:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:46.331 02:24:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:46.331 02:24:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:46.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:46.331 02:24:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:46.331 02:24:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.331 [2024-07-11 02:24:36.726338] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
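For reference, the raid_superblock_test starting up here drives the target entirely through SPDK's JSON-RPC script (scripts/rpc.py). A minimal sketch of the equivalent manual sequence follows; it assumes a bdev_svc target already listening on /var/tmp/spdk-raid.sock (as launched above), and each command mirrors one visible in the trace itself; the loop variable and the RPC shell shorthand are illustrative only.

    RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Create three 32 MiB malloc disks (512-byte blocks) and wrap each in a passthru bdev
    for i in 1 2 3; do
        $RPC bdev_malloc_create 32 512 -b malloc$i
        $RPC bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
    done
    # Assemble a concat array with a 64 KiB strip size and an on-disk superblock (-s)
    $RPC bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s
    # Confirm the array is online with all three base bdevs configured
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
    # Tear down
    $RPC bdev_raid_delete raid_bdev1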
00:18:46.331 [2024-07-11 02:24:36.726414] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1935981 ] 00:18:46.589 [2024-07-11 02:24:36.878706] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.589 [2024-07-11 02:24:36.927829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.589 [2024-07-11 02:24:36.986540] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:46.589 [2024-07-11 02:24:36.986573] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:47.522 02:24:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:47.522 02:24:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:18:47.522 02:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:18:47.522 02:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:47.522 02:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:18:47.522 02:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:18:47.522 02:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:47.522 02:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:47.522 02:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:47.522 02:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:47.522 02:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:48.087 malloc1 00:18:48.087 02:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:48.652 [2024-07-11 02:24:38.940132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:48.652 [2024-07-11 02:24:38.940182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.652 [2024-07-11 02:24:38.940208] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb43de0 00:18:48.652 [2024-07-11 02:24:38.940221] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.652 [2024-07-11 02:24:38.941941] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.652 [2024-07-11 02:24:38.941969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:48.652 pt1 00:18:48.652 02:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:48.652 02:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:48.652 02:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:18:48.653 02:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:18:48.653 02:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # 
local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:48.653 02:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:48.653 02:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:48.653 02:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:48.653 02:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:49.220 malloc2 00:18:49.220 02:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:49.787 [2024-07-11 02:24:39.980837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:49.787 [2024-07-11 02:24:39.980883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.787 [2024-07-11 02:24:39.980901] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb3b380 00:18:49.787 [2024-07-11 02:24:39.980913] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.787 [2024-07-11 02:24:39.982403] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.787 [2024-07-11 02:24:39.982431] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:49.787 pt2 00:18:49.787 02:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:49.787 02:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:49.787 02:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:18:49.787 02:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:18:49.787 02:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:49.787 02:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:49.787 02:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:49.787 02:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:49.787 02:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:50.046 malloc3 00:18:50.046 02:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:50.046 [2024-07-11 02:24:40.422449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:50.046 [2024-07-11 02:24:40.422489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.046 [2024-07-11 02:24:40.422506] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb3dfb0 00:18:50.046 [2024-07-11 02:24:40.422519] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.046 [2024-07-11 02:24:40.423927] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.046 [2024-07-11 02:24:40.423958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:50.046 pt3 00:18:50.046 02:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:50.046 02:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:50.046 02:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:18:50.305 [2024-07-11 02:24:40.671126] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:50.305 [2024-07-11 02:24:40.672434] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:50.305 [2024-07-11 02:24:40.672486] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:50.305 [2024-07-11 02:24:40.672629] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xb402d0 00:18:50.305 [2024-07-11 02:24:40.672640] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:50.305 [2024-07-11 02:24:40.672844] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb3bd40 00:18:50.305 [2024-07-11 02:24:40.672983] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb402d0 00:18:50.305 [2024-07-11 02:24:40.672993] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xb402d0 00:18:50.305 [2024-07-11 02:24:40.673088] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.305 02:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:50.305 02:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:50.305 02:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:50.305 02:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:50.305 02:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:50.305 02:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:50.305 02:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:50.305 02:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:50.305 02:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:50.305 02:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:50.305 02:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.305 02:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.564 02:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:50.564 "name": "raid_bdev1", 00:18:50.564 "uuid": "42e8be78-5460-4c19-88ff-d14f66202ef7", 00:18:50.564 "strip_size_kb": 64, 00:18:50.564 "state": "online", 00:18:50.564 "raid_level": "concat", 00:18:50.564 "superblock": true, 00:18:50.564 "num_base_bdevs": 3, 
00:18:50.564 "num_base_bdevs_discovered": 3, 00:18:50.564 "num_base_bdevs_operational": 3, 00:18:50.564 "base_bdevs_list": [ 00:18:50.564 { 00:18:50.564 "name": "pt1", 00:18:50.564 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:50.564 "is_configured": true, 00:18:50.564 "data_offset": 2048, 00:18:50.564 "data_size": 63488 00:18:50.564 }, 00:18:50.564 { 00:18:50.564 "name": "pt2", 00:18:50.564 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:50.564 "is_configured": true, 00:18:50.564 "data_offset": 2048, 00:18:50.564 "data_size": 63488 00:18:50.564 }, 00:18:50.564 { 00:18:50.564 "name": "pt3", 00:18:50.564 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:50.564 "is_configured": true, 00:18:50.564 "data_offset": 2048, 00:18:50.564 "data_size": 63488 00:18:50.564 } 00:18:50.564 ] 00:18:50.564 }' 00:18:50.564 02:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:50.564 02:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.132 02:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:18:51.132 02:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:51.133 02:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:51.133 02:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:51.133 02:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:51.133 02:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:51.133 02:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:51.133 02:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:51.392 [2024-07-11 02:24:41.678263] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:51.392 02:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:51.392 "name": "raid_bdev1", 00:18:51.392 "aliases": [ 00:18:51.392 "42e8be78-5460-4c19-88ff-d14f66202ef7" 00:18:51.392 ], 00:18:51.392 "product_name": "Raid Volume", 00:18:51.392 "block_size": 512, 00:18:51.392 "num_blocks": 190464, 00:18:51.392 "uuid": "42e8be78-5460-4c19-88ff-d14f66202ef7", 00:18:51.392 "assigned_rate_limits": { 00:18:51.392 "rw_ios_per_sec": 0, 00:18:51.392 "rw_mbytes_per_sec": 0, 00:18:51.392 "r_mbytes_per_sec": 0, 00:18:51.392 "w_mbytes_per_sec": 0 00:18:51.392 }, 00:18:51.392 "claimed": false, 00:18:51.392 "zoned": false, 00:18:51.392 "supported_io_types": { 00:18:51.392 "read": true, 00:18:51.392 "write": true, 00:18:51.392 "unmap": true, 00:18:51.392 "flush": true, 00:18:51.392 "reset": true, 00:18:51.392 "nvme_admin": false, 00:18:51.392 "nvme_io": false, 00:18:51.392 "nvme_io_md": false, 00:18:51.392 "write_zeroes": true, 00:18:51.392 "zcopy": false, 00:18:51.392 "get_zone_info": false, 00:18:51.392 "zone_management": false, 00:18:51.392 "zone_append": false, 00:18:51.392 "compare": false, 00:18:51.392 "compare_and_write": false, 00:18:51.392 "abort": false, 00:18:51.392 "seek_hole": false, 00:18:51.392 "seek_data": false, 00:18:51.392 "copy": false, 00:18:51.392 "nvme_iov_md": false 00:18:51.392 }, 00:18:51.392 "memory_domains": [ 00:18:51.392 { 00:18:51.392 "dma_device_id": "system", 00:18:51.392 "dma_device_type": 1 
00:18:51.392 }, 00:18:51.392 { 00:18:51.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.392 "dma_device_type": 2 00:18:51.392 }, 00:18:51.392 { 00:18:51.392 "dma_device_id": "system", 00:18:51.392 "dma_device_type": 1 00:18:51.392 }, 00:18:51.392 { 00:18:51.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.392 "dma_device_type": 2 00:18:51.392 }, 00:18:51.392 { 00:18:51.392 "dma_device_id": "system", 00:18:51.392 "dma_device_type": 1 00:18:51.392 }, 00:18:51.392 { 00:18:51.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.392 "dma_device_type": 2 00:18:51.392 } 00:18:51.392 ], 00:18:51.392 "driver_specific": { 00:18:51.392 "raid": { 00:18:51.392 "uuid": "42e8be78-5460-4c19-88ff-d14f66202ef7", 00:18:51.392 "strip_size_kb": 64, 00:18:51.392 "state": "online", 00:18:51.392 "raid_level": "concat", 00:18:51.392 "superblock": true, 00:18:51.392 "num_base_bdevs": 3, 00:18:51.392 "num_base_bdevs_discovered": 3, 00:18:51.392 "num_base_bdevs_operational": 3, 00:18:51.392 "base_bdevs_list": [ 00:18:51.392 { 00:18:51.392 "name": "pt1", 00:18:51.392 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:51.392 "is_configured": true, 00:18:51.392 "data_offset": 2048, 00:18:51.392 "data_size": 63488 00:18:51.392 }, 00:18:51.392 { 00:18:51.392 "name": "pt2", 00:18:51.392 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:51.392 "is_configured": true, 00:18:51.392 "data_offset": 2048, 00:18:51.392 "data_size": 63488 00:18:51.392 }, 00:18:51.392 { 00:18:51.392 "name": "pt3", 00:18:51.392 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:51.392 "is_configured": true, 00:18:51.392 "data_offset": 2048, 00:18:51.392 "data_size": 63488 00:18:51.392 } 00:18:51.392 ] 00:18:51.392 } 00:18:51.392 } 00:18:51.392 }' 00:18:51.392 02:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:51.392 02:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:51.392 pt2 00:18:51.392 pt3' 00:18:51.392 02:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:51.392 02:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:51.392 02:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:51.652 02:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:51.652 "name": "pt1", 00:18:51.652 "aliases": [ 00:18:51.652 "00000000-0000-0000-0000-000000000001" 00:18:51.652 ], 00:18:51.652 "product_name": "passthru", 00:18:51.652 "block_size": 512, 00:18:51.652 "num_blocks": 65536, 00:18:51.652 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:51.652 "assigned_rate_limits": { 00:18:51.652 "rw_ios_per_sec": 0, 00:18:51.652 "rw_mbytes_per_sec": 0, 00:18:51.652 "r_mbytes_per_sec": 0, 00:18:51.652 "w_mbytes_per_sec": 0 00:18:51.652 }, 00:18:51.652 "claimed": true, 00:18:51.652 "claim_type": "exclusive_write", 00:18:51.652 "zoned": false, 00:18:51.652 "supported_io_types": { 00:18:51.652 "read": true, 00:18:51.652 "write": true, 00:18:51.652 "unmap": true, 00:18:51.652 "flush": true, 00:18:51.652 "reset": true, 00:18:51.652 "nvme_admin": false, 00:18:51.652 "nvme_io": false, 00:18:51.652 "nvme_io_md": false, 00:18:51.652 "write_zeroes": true, 00:18:51.652 "zcopy": true, 00:18:51.652 "get_zone_info": false, 00:18:51.652 "zone_management": false, 
00:18:51.652 "zone_append": false, 00:18:51.652 "compare": false, 00:18:51.652 "compare_and_write": false, 00:18:51.652 "abort": true, 00:18:51.652 "seek_hole": false, 00:18:51.652 "seek_data": false, 00:18:51.652 "copy": true, 00:18:51.652 "nvme_iov_md": false 00:18:51.652 }, 00:18:51.652 "memory_domains": [ 00:18:51.652 { 00:18:51.652 "dma_device_id": "system", 00:18:51.652 "dma_device_type": 1 00:18:51.652 }, 00:18:51.652 { 00:18:51.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.652 "dma_device_type": 2 00:18:51.652 } 00:18:51.652 ], 00:18:51.652 "driver_specific": { 00:18:51.652 "passthru": { 00:18:51.652 "name": "pt1", 00:18:51.652 "base_bdev_name": "malloc1" 00:18:51.652 } 00:18:51.652 } 00:18:51.652 }' 00:18:51.652 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:51.652 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:51.912 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:51.912 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:51.912 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:51.912 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:51.912 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:51.912 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:51.912 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:51.912 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:51.912 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:51.912 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:51.912 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:51.912 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:51.912 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:52.171 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:52.171 "name": "pt2", 00:18:52.171 "aliases": [ 00:18:52.171 "00000000-0000-0000-0000-000000000002" 00:18:52.171 ], 00:18:52.171 "product_name": "passthru", 00:18:52.171 "block_size": 512, 00:18:52.171 "num_blocks": 65536, 00:18:52.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:52.171 "assigned_rate_limits": { 00:18:52.171 "rw_ios_per_sec": 0, 00:18:52.171 "rw_mbytes_per_sec": 0, 00:18:52.171 "r_mbytes_per_sec": 0, 00:18:52.171 "w_mbytes_per_sec": 0 00:18:52.171 }, 00:18:52.171 "claimed": true, 00:18:52.171 "claim_type": "exclusive_write", 00:18:52.171 "zoned": false, 00:18:52.171 "supported_io_types": { 00:18:52.171 "read": true, 00:18:52.171 "write": true, 00:18:52.171 "unmap": true, 00:18:52.171 "flush": true, 00:18:52.171 "reset": true, 00:18:52.171 "nvme_admin": false, 00:18:52.171 "nvme_io": false, 00:18:52.171 "nvme_io_md": false, 00:18:52.171 "write_zeroes": true, 00:18:52.171 "zcopy": true, 00:18:52.171 "get_zone_info": false, 00:18:52.171 "zone_management": false, 00:18:52.171 "zone_append": false, 00:18:52.171 "compare": false, 00:18:52.172 "compare_and_write": false, 00:18:52.172 "abort": true, 
00:18:52.172 "seek_hole": false, 00:18:52.172 "seek_data": false, 00:18:52.172 "copy": true, 00:18:52.172 "nvme_iov_md": false 00:18:52.172 }, 00:18:52.172 "memory_domains": [ 00:18:52.172 { 00:18:52.172 "dma_device_id": "system", 00:18:52.172 "dma_device_type": 1 00:18:52.172 }, 00:18:52.172 { 00:18:52.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.172 "dma_device_type": 2 00:18:52.172 } 00:18:52.172 ], 00:18:52.172 "driver_specific": { 00:18:52.172 "passthru": { 00:18:52.172 "name": "pt2", 00:18:52.172 "base_bdev_name": "malloc2" 00:18:52.172 } 00:18:52.172 } 00:18:52.172 }' 00:18:52.172 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:52.172 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:52.172 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:52.172 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:52.431 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:52.431 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:52.431 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:52.431 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:52.431 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:52.431 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:52.431 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:52.431 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:52.431 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:52.431 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:52.431 02:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:52.689 02:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:52.689 "name": "pt3", 00:18:52.689 "aliases": [ 00:18:52.689 "00000000-0000-0000-0000-000000000003" 00:18:52.689 ], 00:18:52.689 "product_name": "passthru", 00:18:52.689 "block_size": 512, 00:18:52.689 "num_blocks": 65536, 00:18:52.689 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:52.689 "assigned_rate_limits": { 00:18:52.689 "rw_ios_per_sec": 0, 00:18:52.689 "rw_mbytes_per_sec": 0, 00:18:52.689 "r_mbytes_per_sec": 0, 00:18:52.689 "w_mbytes_per_sec": 0 00:18:52.689 }, 00:18:52.689 "claimed": true, 00:18:52.689 "claim_type": "exclusive_write", 00:18:52.689 "zoned": false, 00:18:52.689 "supported_io_types": { 00:18:52.689 "read": true, 00:18:52.689 "write": true, 00:18:52.689 "unmap": true, 00:18:52.689 "flush": true, 00:18:52.689 "reset": true, 00:18:52.689 "nvme_admin": false, 00:18:52.689 "nvme_io": false, 00:18:52.689 "nvme_io_md": false, 00:18:52.689 "write_zeroes": true, 00:18:52.689 "zcopy": true, 00:18:52.689 "get_zone_info": false, 00:18:52.689 "zone_management": false, 00:18:52.689 "zone_append": false, 00:18:52.689 "compare": false, 00:18:52.689 "compare_and_write": false, 00:18:52.689 "abort": true, 00:18:52.689 "seek_hole": false, 00:18:52.689 "seek_data": false, 00:18:52.689 "copy": true, 00:18:52.689 "nvme_iov_md": false 
00:18:52.689 }, 00:18:52.689 "memory_domains": [ 00:18:52.689 { 00:18:52.689 "dma_device_id": "system", 00:18:52.689 "dma_device_type": 1 00:18:52.689 }, 00:18:52.689 { 00:18:52.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.689 "dma_device_type": 2 00:18:52.689 } 00:18:52.689 ], 00:18:52.689 "driver_specific": { 00:18:52.689 "passthru": { 00:18:52.690 "name": "pt3", 00:18:52.690 "base_bdev_name": "malloc3" 00:18:52.690 } 00:18:52.690 } 00:18:52.690 }' 00:18:52.690 02:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:52.690 02:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:52.948 02:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:52.948 02:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:52.948 02:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:52.948 02:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:52.948 02:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:52.948 02:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:52.948 02:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:52.948 02:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:53.207 02:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:53.207 02:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:53.207 02:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:53.207 02:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:18:53.207 [2024-07-11 02:24:43.579338] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:53.207 02:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=42e8be78-5460-4c19-88ff-d14f66202ef7 00:18:53.207 02:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 42e8be78-5460-4c19-88ff-d14f66202ef7 ']' 00:18:53.207 02:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:53.466 [2024-07-11 02:24:43.831741] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:53.466 [2024-07-11 02:24:43.831767] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:53.466 [2024-07-11 02:24:43.831815] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:53.466 [2024-07-11 02:24:43.831867] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:53.466 [2024-07-11 02:24:43.831879] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb402d0 name raid_bdev1, state offline 00:18:53.466 02:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.466 02:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:18:53.725 02:24:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:18:53.725 02:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:18:53.725 02:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:53.725 02:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:53.984 02:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:53.984 02:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:54.244 02:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:54.244 02:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:54.503 02:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:54.503 02:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:54.763 02:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:18:54.763 02:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:54.763 02:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:18:54.763 02:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:54.763 02:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:54.763 02:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:54.763 02:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:54.763 02:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:54.763 02:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:54.763 02:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:54.763 02:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:54.763 02:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:18:54.763 02:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 
'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:55.022 [2024-07-11 02:24:45.331650] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:55.022 [2024-07-11 02:24:45.332972] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:55.022 [2024-07-11 02:24:45.333015] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:55.023 [2024-07-11 02:24:45.333059] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:55.023 [2024-07-11 02:24:45.333097] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:55.023 [2024-07-11 02:24:45.333120] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:55.023 [2024-07-11 02:24:45.333137] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:55.023 [2024-07-11 02:24:45.333147] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb44f10 name raid_bdev1, state configuring 00:18:55.023 request: 00:18:55.023 { 00:18:55.023 "name": "raid_bdev1", 00:18:55.023 "raid_level": "concat", 00:18:55.023 "base_bdevs": [ 00:18:55.023 "malloc1", 00:18:55.023 "malloc2", 00:18:55.023 "malloc3" 00:18:55.023 ], 00:18:55.023 "strip_size_kb": 64, 00:18:55.023 "superblock": false, 00:18:55.023 "method": "bdev_raid_create", 00:18:55.023 "req_id": 1 00:18:55.023 } 00:18:55.023 Got JSON-RPC error response 00:18:55.023 response: 00:18:55.023 { 00:18:55.023 "code": -17, 00:18:55.023 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:55.023 } 00:18:55.023 02:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:18:55.023 02:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:55.023 02:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:55.023 02:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:55.023 02:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.023 02:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:18:55.282 02:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:18:55.282 02:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:18:55.282 02:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:55.541 [2024-07-11 02:24:45.820879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:55.541 [2024-07-11 02:24:45.820924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.541 [2024-07-11 02:24:45.820943] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb3ccb0 00:18:55.541 [2024-07-11 02:24:45.820955] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.541 [2024-07-11 02:24:45.822525] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.541 [2024-07-11 02:24:45.822562] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:55.541 [2024-07-11 02:24:45.822624] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:55.541 [2024-07-11 02:24:45.822649] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:55.541 pt1 00:18:55.542 02:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:18:55.542 02:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:55.542 02:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:55.542 02:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:55.542 02:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:55.542 02:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:55.542 02:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:55.542 02:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:55.542 02:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:55.542 02:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:55.542 02:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.542 02:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.800 02:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:55.800 "name": "raid_bdev1", 00:18:55.800 "uuid": "42e8be78-5460-4c19-88ff-d14f66202ef7", 00:18:55.800 "strip_size_kb": 64, 00:18:55.800 "state": "configuring", 00:18:55.800 "raid_level": "concat", 00:18:55.800 "superblock": true, 00:18:55.800 "num_base_bdevs": 3, 00:18:55.800 "num_base_bdevs_discovered": 1, 00:18:55.800 "num_base_bdevs_operational": 3, 00:18:55.800 "base_bdevs_list": [ 00:18:55.800 { 00:18:55.800 "name": "pt1", 00:18:55.800 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:55.800 "is_configured": true, 00:18:55.800 "data_offset": 2048, 00:18:55.800 "data_size": 63488 00:18:55.800 }, 00:18:55.800 { 00:18:55.800 "name": null, 00:18:55.800 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:55.800 "is_configured": false, 00:18:55.800 "data_offset": 2048, 00:18:55.800 "data_size": 63488 00:18:55.800 }, 00:18:55.800 { 00:18:55.801 "name": null, 00:18:55.801 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:55.801 "is_configured": false, 00:18:55.801 "data_offset": 2048, 00:18:55.801 "data_size": 63488 00:18:55.801 } 00:18:55.801 ] 00:18:55.801 }' 00:18:55.801 02:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:55.801 02:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.368 02:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:18:56.368 02:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:56.627 [2024-07-11 02:24:46.919805] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:56.627 [2024-07-11 02:24:46.919851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.627 [2024-07-11 02:24:46.919873] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb3d820 00:18:56.627 [2024-07-11 02:24:46.919885] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:56.627 [2024-07-11 02:24:46.920207] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.627 [2024-07-11 02:24:46.920224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:56.627 [2024-07-11 02:24:46.920283] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:56.627 [2024-07-11 02:24:46.920302] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:56.627 pt2 00:18:56.627 02:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:56.885 [2024-07-11 02:24:47.160445] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:56.885 02:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:18:56.885 02:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:56.885 02:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:56.885 02:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:56.885 02:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:56.885 02:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:56.885 02:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:56.885 02:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:56.885 02:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:56.885 02:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:56.885 02:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.885 02:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.144 02:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:57.144 "name": "raid_bdev1", 00:18:57.144 "uuid": "42e8be78-5460-4c19-88ff-d14f66202ef7", 00:18:57.144 "strip_size_kb": 64, 00:18:57.144 "state": "configuring", 00:18:57.144 "raid_level": "concat", 00:18:57.144 "superblock": true, 00:18:57.144 "num_base_bdevs": 3, 00:18:57.144 "num_base_bdevs_discovered": 1, 00:18:57.144 "num_base_bdevs_operational": 3, 00:18:57.144 "base_bdevs_list": [ 00:18:57.144 { 00:18:57.144 "name": "pt1", 00:18:57.144 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:57.144 "is_configured": true, 00:18:57.144 "data_offset": 2048, 00:18:57.144 "data_size": 63488 00:18:57.144 }, 00:18:57.144 { 00:18:57.144 "name": null, 00:18:57.144 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:57.144 "is_configured": false, 00:18:57.144 
"data_offset": 2048, 00:18:57.144 "data_size": 63488 00:18:57.144 }, 00:18:57.144 { 00:18:57.144 "name": null, 00:18:57.144 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:57.144 "is_configured": false, 00:18:57.144 "data_offset": 2048, 00:18:57.144 "data_size": 63488 00:18:57.144 } 00:18:57.144 ] 00:18:57.144 }' 00:18:57.144 02:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:57.144 02:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.713 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:18:57.713 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:57.713 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:57.972 [2024-07-11 02:24:48.195182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:57.972 [2024-07-11 02:24:48.195231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.972 [2024-07-11 02:24:48.195250] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb3dc20 00:18:57.972 [2024-07-11 02:24:48.195262] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.972 [2024-07-11 02:24:48.195584] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.972 [2024-07-11 02:24:48.195600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:57.972 [2024-07-11 02:24:48.195658] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:57.972 [2024-07-11 02:24:48.195677] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:57.972 pt2 00:18:57.972 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:57.972 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:57.972 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:57.972 [2024-07-11 02:24:48.375652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:57.972 [2024-07-11 02:24:48.375681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.972 [2024-07-11 02:24:48.375698] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb3f040 00:18:57.972 [2024-07-11 02:24:48.375709] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.972 [2024-07-11 02:24:48.375981] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.972 [2024-07-11 02:24:48.375998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:57.972 [2024-07-11 02:24:48.376045] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:57.972 [2024-07-11 02:24:48.376063] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:57.972 [2024-07-11 02:24:48.376159] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x990900 00:18:57.972 [2024-07-11 02:24:48.376175] 
bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:57.972 [2024-07-11 02:24:48.376334] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x993810 00:18:57.972 [2024-07-11 02:24:48.376453] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x990900 00:18:57.972 [2024-07-11 02:24:48.376462] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x990900 00:18:57.972 [2024-07-11 02:24:48.376556] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:57.972 pt3 00:18:58.231 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:58.231 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:58.231 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:58.231 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:58.231 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:58.231 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:58.231 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:58.231 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:58.231 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:58.231 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:58.231 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:58.231 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:58.231 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.231 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.231 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:58.231 "name": "raid_bdev1", 00:18:58.231 "uuid": "42e8be78-5460-4c19-88ff-d14f66202ef7", 00:18:58.231 "strip_size_kb": 64, 00:18:58.231 "state": "online", 00:18:58.231 "raid_level": "concat", 00:18:58.231 "superblock": true, 00:18:58.231 "num_base_bdevs": 3, 00:18:58.231 "num_base_bdevs_discovered": 3, 00:18:58.231 "num_base_bdevs_operational": 3, 00:18:58.231 "base_bdevs_list": [ 00:18:58.231 { 00:18:58.231 "name": "pt1", 00:18:58.231 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:58.231 "is_configured": true, 00:18:58.231 "data_offset": 2048, 00:18:58.231 "data_size": 63488 00:18:58.231 }, 00:18:58.231 { 00:18:58.231 "name": "pt2", 00:18:58.231 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:58.231 "is_configured": true, 00:18:58.231 "data_offset": 2048, 00:18:58.231 "data_size": 63488 00:18:58.231 }, 00:18:58.231 { 00:18:58.231 "name": "pt3", 00:18:58.231 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:58.231 "is_configured": true, 00:18:58.231 "data_offset": 2048, 00:18:58.231 "data_size": 63488 00:18:58.231 } 00:18:58.231 ] 00:18:58.231 }' 00:18:58.231 02:24:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:58.231 02:24:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.798 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:18:58.798 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:58.798 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:58.798 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:58.798 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:58.798 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:58.798 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:58.798 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:59.058 [2024-07-11 02:24:49.410668] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:59.058 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:59.058 "name": "raid_bdev1", 00:18:59.058 "aliases": [ 00:18:59.058 "42e8be78-5460-4c19-88ff-d14f66202ef7" 00:18:59.058 ], 00:18:59.058 "product_name": "Raid Volume", 00:18:59.058 "block_size": 512, 00:18:59.058 "num_blocks": 190464, 00:18:59.058 "uuid": "42e8be78-5460-4c19-88ff-d14f66202ef7", 00:18:59.058 "assigned_rate_limits": { 00:18:59.058 "rw_ios_per_sec": 0, 00:18:59.058 "rw_mbytes_per_sec": 0, 00:18:59.058 "r_mbytes_per_sec": 0, 00:18:59.058 "w_mbytes_per_sec": 0 00:18:59.058 }, 00:18:59.058 "claimed": false, 00:18:59.058 "zoned": false, 00:18:59.058 "supported_io_types": { 00:18:59.058 "read": true, 00:18:59.058 "write": true, 00:18:59.058 "unmap": true, 00:18:59.058 "flush": true, 00:18:59.058 "reset": true, 00:18:59.058 "nvme_admin": false, 00:18:59.058 "nvme_io": false, 00:18:59.058 "nvme_io_md": false, 00:18:59.058 "write_zeroes": true, 00:18:59.058 "zcopy": false, 00:18:59.058 "get_zone_info": false, 00:18:59.058 "zone_management": false, 00:18:59.058 "zone_append": false, 00:18:59.058 "compare": false, 00:18:59.058 "compare_and_write": false, 00:18:59.058 "abort": false, 00:18:59.058 "seek_hole": false, 00:18:59.058 "seek_data": false, 00:18:59.058 "copy": false, 00:18:59.058 "nvme_iov_md": false 00:18:59.058 }, 00:18:59.058 "memory_domains": [ 00:18:59.058 { 00:18:59.058 "dma_device_id": "system", 00:18:59.058 "dma_device_type": 1 00:18:59.058 }, 00:18:59.058 { 00:18:59.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.058 "dma_device_type": 2 00:18:59.058 }, 00:18:59.058 { 00:18:59.058 "dma_device_id": "system", 00:18:59.058 "dma_device_type": 1 00:18:59.058 }, 00:18:59.058 { 00:18:59.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.058 "dma_device_type": 2 00:18:59.058 }, 00:18:59.058 { 00:18:59.058 "dma_device_id": "system", 00:18:59.058 "dma_device_type": 1 00:18:59.058 }, 00:18:59.058 { 00:18:59.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.058 "dma_device_type": 2 00:18:59.058 } 00:18:59.058 ], 00:18:59.058 "driver_specific": { 00:18:59.058 "raid": { 00:18:59.058 "uuid": "42e8be78-5460-4c19-88ff-d14f66202ef7", 00:18:59.058 "strip_size_kb": 64, 00:18:59.058 "state": "online", 00:18:59.058 "raid_level": "concat", 00:18:59.058 "superblock": true, 00:18:59.058 "num_base_bdevs": 3, 00:18:59.058 "num_base_bdevs_discovered": 3, 
00:18:59.058 "num_base_bdevs_operational": 3, 00:18:59.058 "base_bdevs_list": [ 00:18:59.058 { 00:18:59.058 "name": "pt1", 00:18:59.058 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:59.058 "is_configured": true, 00:18:59.058 "data_offset": 2048, 00:18:59.058 "data_size": 63488 00:18:59.058 }, 00:18:59.058 { 00:18:59.058 "name": "pt2", 00:18:59.058 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:59.058 "is_configured": true, 00:18:59.058 "data_offset": 2048, 00:18:59.058 "data_size": 63488 00:18:59.058 }, 00:18:59.058 { 00:18:59.058 "name": "pt3", 00:18:59.058 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:59.058 "is_configured": true, 00:18:59.058 "data_offset": 2048, 00:18:59.058 "data_size": 63488 00:18:59.058 } 00:18:59.058 ] 00:18:59.058 } 00:18:59.058 } 00:18:59.058 }' 00:18:59.058 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:59.058 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:59.058 pt2 00:18:59.058 pt3' 00:18:59.058 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:59.058 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:59.058 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:59.317 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:59.317 "name": "pt1", 00:18:59.317 "aliases": [ 00:18:59.317 "00000000-0000-0000-0000-000000000001" 00:18:59.317 ], 00:18:59.317 "product_name": "passthru", 00:18:59.317 "block_size": 512, 00:18:59.317 "num_blocks": 65536, 00:18:59.317 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:59.317 "assigned_rate_limits": { 00:18:59.317 "rw_ios_per_sec": 0, 00:18:59.317 "rw_mbytes_per_sec": 0, 00:18:59.317 "r_mbytes_per_sec": 0, 00:18:59.317 "w_mbytes_per_sec": 0 00:18:59.317 }, 00:18:59.317 "claimed": true, 00:18:59.317 "claim_type": "exclusive_write", 00:18:59.317 "zoned": false, 00:18:59.317 "supported_io_types": { 00:18:59.317 "read": true, 00:18:59.317 "write": true, 00:18:59.317 "unmap": true, 00:18:59.317 "flush": true, 00:18:59.317 "reset": true, 00:18:59.317 "nvme_admin": false, 00:18:59.317 "nvme_io": false, 00:18:59.317 "nvme_io_md": false, 00:18:59.317 "write_zeroes": true, 00:18:59.317 "zcopy": true, 00:18:59.317 "get_zone_info": false, 00:18:59.317 "zone_management": false, 00:18:59.317 "zone_append": false, 00:18:59.317 "compare": false, 00:18:59.317 "compare_and_write": false, 00:18:59.317 "abort": true, 00:18:59.317 "seek_hole": false, 00:18:59.317 "seek_data": false, 00:18:59.317 "copy": true, 00:18:59.317 "nvme_iov_md": false 00:18:59.317 }, 00:18:59.317 "memory_domains": [ 00:18:59.317 { 00:18:59.317 "dma_device_id": "system", 00:18:59.317 "dma_device_type": 1 00:18:59.317 }, 00:18:59.317 { 00:18:59.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.317 "dma_device_type": 2 00:18:59.317 } 00:18:59.317 ], 00:18:59.317 "driver_specific": { 00:18:59.317 "passthru": { 00:18:59.317 "name": "pt1", 00:18:59.317 "base_bdev_name": "malloc1" 00:18:59.317 } 00:18:59.317 } 00:18:59.317 }' 00:18:59.317 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:59.576 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:18:59.576 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:59.576 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:59.576 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:59.576 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:59.576 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:59.576 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:59.576 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:59.576 02:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:59.835 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:59.835 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:59.835 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:59.835 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:59.835 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:00.094 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:00.094 "name": "pt2", 00:19:00.094 "aliases": [ 00:19:00.094 "00000000-0000-0000-0000-000000000002" 00:19:00.094 ], 00:19:00.094 "product_name": "passthru", 00:19:00.094 "block_size": 512, 00:19:00.094 "num_blocks": 65536, 00:19:00.094 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:00.094 "assigned_rate_limits": { 00:19:00.094 "rw_ios_per_sec": 0, 00:19:00.094 "rw_mbytes_per_sec": 0, 00:19:00.094 "r_mbytes_per_sec": 0, 00:19:00.094 "w_mbytes_per_sec": 0 00:19:00.094 }, 00:19:00.094 "claimed": true, 00:19:00.094 "claim_type": "exclusive_write", 00:19:00.094 "zoned": false, 00:19:00.094 "supported_io_types": { 00:19:00.094 "read": true, 00:19:00.094 "write": true, 00:19:00.094 "unmap": true, 00:19:00.094 "flush": true, 00:19:00.094 "reset": true, 00:19:00.094 "nvme_admin": false, 00:19:00.094 "nvme_io": false, 00:19:00.094 "nvme_io_md": false, 00:19:00.094 "write_zeroes": true, 00:19:00.094 "zcopy": true, 00:19:00.094 "get_zone_info": false, 00:19:00.094 "zone_management": false, 00:19:00.094 "zone_append": false, 00:19:00.094 "compare": false, 00:19:00.094 "compare_and_write": false, 00:19:00.094 "abort": true, 00:19:00.094 "seek_hole": false, 00:19:00.094 "seek_data": false, 00:19:00.094 "copy": true, 00:19:00.094 "nvme_iov_md": false 00:19:00.094 }, 00:19:00.094 "memory_domains": [ 00:19:00.094 { 00:19:00.094 "dma_device_id": "system", 00:19:00.094 "dma_device_type": 1 00:19:00.094 }, 00:19:00.094 { 00:19:00.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.094 "dma_device_type": 2 00:19:00.094 } 00:19:00.094 ], 00:19:00.094 "driver_specific": { 00:19:00.094 "passthru": { 00:19:00.094 "name": "pt2", 00:19:00.094 "base_bdev_name": "malloc2" 00:19:00.094 } 00:19:00.094 } 00:19:00.094 }' 00:19:00.094 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:00.094 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:00.094 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:00.094 02:24:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:00.094 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:00.094 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:00.094 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:00.353 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:00.353 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:00.353 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:00.353 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:00.353 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:00.353 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:00.353 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:00.353 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:00.613 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:00.613 "name": "pt3", 00:19:00.613 "aliases": [ 00:19:00.613 "00000000-0000-0000-0000-000000000003" 00:19:00.613 ], 00:19:00.613 "product_name": "passthru", 00:19:00.613 "block_size": 512, 00:19:00.613 "num_blocks": 65536, 00:19:00.613 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:00.613 "assigned_rate_limits": { 00:19:00.613 "rw_ios_per_sec": 0, 00:19:00.613 "rw_mbytes_per_sec": 0, 00:19:00.613 "r_mbytes_per_sec": 0, 00:19:00.613 "w_mbytes_per_sec": 0 00:19:00.613 }, 00:19:00.613 "claimed": true, 00:19:00.613 "claim_type": "exclusive_write", 00:19:00.613 "zoned": false, 00:19:00.613 "supported_io_types": { 00:19:00.613 "read": true, 00:19:00.613 "write": true, 00:19:00.613 "unmap": true, 00:19:00.613 "flush": true, 00:19:00.613 "reset": true, 00:19:00.613 "nvme_admin": false, 00:19:00.613 "nvme_io": false, 00:19:00.613 "nvme_io_md": false, 00:19:00.613 "write_zeroes": true, 00:19:00.613 "zcopy": true, 00:19:00.613 "get_zone_info": false, 00:19:00.613 "zone_management": false, 00:19:00.613 "zone_append": false, 00:19:00.613 "compare": false, 00:19:00.613 "compare_and_write": false, 00:19:00.613 "abort": true, 00:19:00.613 "seek_hole": false, 00:19:00.613 "seek_data": false, 00:19:00.613 "copy": true, 00:19:00.613 "nvme_iov_md": false 00:19:00.613 }, 00:19:00.613 "memory_domains": [ 00:19:00.613 { 00:19:00.613 "dma_device_id": "system", 00:19:00.613 "dma_device_type": 1 00:19:00.613 }, 00:19:00.613 { 00:19:00.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.613 "dma_device_type": 2 00:19:00.613 } 00:19:00.613 ], 00:19:00.613 "driver_specific": { 00:19:00.613 "passthru": { 00:19:00.613 "name": "pt3", 00:19:00.613 "base_bdev_name": "malloc3" 00:19:00.613 } 00:19:00.613 } 00:19:00.613 }' 00:19:00.613 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:00.613 02:24:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:00.613 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:00.613 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:00.872 02:24:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:00.872 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:00.872 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:00.872 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:00.872 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:00.872 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:00.872 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:00.872 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:00.872 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:00.872 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:19:01.131 [2024-07-11 02:24:51.500197] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:01.131 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 42e8be78-5460-4c19-88ff-d14f66202ef7 '!=' 42e8be78-5460-4c19-88ff-d14f66202ef7 ']' 00:19:01.131 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:19:01.131 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:01.131 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:01.131 02:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1935981 00:19:01.131 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 1935981 ']' 00:19:01.131 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 1935981 00:19:01.131 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:19:01.131 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:01.131 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1935981 00:19:01.389 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:01.389 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:01.389 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1935981' 00:19:01.389 killing process with pid 1935981 00:19:01.389 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 1935981 00:19:01.389 [2024-07-11 02:24:51.565477] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:01.389 [2024-07-11 02:24:51.565529] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:01.390 [2024-07-11 02:24:51.565579] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:01.390 [2024-07-11 02:24:51.565590] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x990900 name raid_bdev1, state offline 00:19:01.390 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 1935981 00:19:01.390 [2024-07-11 02:24:51.595567] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:01.390 02:24:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:19:01.390 00:19:01.390 real 0m15.133s 00:19:01.390 user 0m27.243s 00:19:01.390 sys 0m2.759s 00:19:01.390 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:01.390 02:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.390 ************************************ 00:19:01.390 END TEST raid_superblock_test 00:19:01.390 ************************************ 00:19:01.648 02:24:51 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:01.648 02:24:51 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:19:01.648 02:24:51 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:01.648 02:24:51 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:01.648 02:24:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:01.648 ************************************ 00:19:01.648 START TEST raid_read_error_test 00:19:01.648 ************************************ 00:19:01.648 02:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 read 00:19:01.648 02:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:19:01.648 02:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:19:01.648 02:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:19:01.648 02:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:19:01.648 02:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:01.648 02:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:19:01.648 02:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:01.648 02:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:01.648 02:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:19:01.648 02:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:01.648 02:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:01.648 02:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:19:01.648 02:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:01.648 02:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:01.649 02:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:01.649 02:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:19:01.649 02:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:19:01.649 02:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:19:01.649 02:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:19:01.649 02:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:19:01.649 02:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:19:01.649 02:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:19:01.649 02:24:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:19:01.649 02:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:19:01.649 02:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:19:01.649 02:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.CRJyJNuMLC 00:19:01.649 02:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1938202 00:19:01.649 02:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1938202 /var/tmp/spdk-raid.sock 00:19:01.649 02:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:01.649 02:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 1938202 ']' 00:19:01.649 02:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:01.649 02:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:01.649 02:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:01.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:01.649 02:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:01.649 02:24:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.649 [2024-07-11 02:24:51.960563] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
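The harness has now launched bdevperf as the long-lived RPC target for this test: started idle with -z, it sits on /var/tmp/spdk-raid.sock until a perform_tests RPC arrives, which is why the base bdevs and the raid bdev can be created over RPC first. A sketch of the launch sequence reconstructed from the trace at bdev_raid.sh@807-809 (redirecting bdevperf output into the mktemp log file is an assumption; the trace shows only the command line, the raid_pid assignment, and the waitforlisten call):

    # Start bdevperf paused (-z) so the raid stack can be built over RPC,
    # then block until its RPC socket accepts connections.
    /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 128k -q 1 -z -f -L bdev_raid > /raidtest/tmp.CRJyJNuMLC &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock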
00:19:01.649 [2024-07-11 02:24:51.960625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1938202 ] 00:19:01.908 [2024-07-11 02:24:52.090880] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.908 [2024-07-11 02:24:52.142614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.908 [2024-07-11 02:24:52.207471] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:01.908 [2024-07-11 02:24:52.207505] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:01.908 02:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:01.908 02:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:19:01.908 02:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:01.908 02:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:02.476 BaseBdev1_malloc 00:19:02.476 02:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:02.735 true 00:19:02.735 02:24:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:03.404 [2024-07-11 02:24:53.503851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:03.404 [2024-07-11 02:24:53.503894] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.404 [2024-07-11 02:24:53.503915] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2538330 00:19:03.405 [2024-07-11 02:24:53.503927] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.405 [2024-07-11 02:24:53.505784] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.405 [2024-07-11 02:24:53.505813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:03.405 BaseBdev1 00:19:03.405 02:24:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:03.405 02:24:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:03.405 BaseBdev2_malloc 00:19:03.405 02:24:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:03.969 true 00:19:03.969 02:24:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:04.227 [2024-07-11 02:24:54.528196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:04.227 [2024-07-11 02:24:54.528239] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.227 [2024-07-11 02:24:54.528257] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2531b40 00:19:04.227 [2024-07-11 02:24:54.528269] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.227 [2024-07-11 02:24:54.529786] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.227 [2024-07-11 02:24:54.529815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:04.227 BaseBdev2 00:19:04.227 02:24:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:04.228 02:24:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:04.794 BaseBdev3_malloc 00:19:04.794 02:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:05.052 true 00:19:05.052 02:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:05.619 [2024-07-11 02:24:55.797159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:05.619 [2024-07-11 02:24:55.797203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.619 [2024-07-11 02:24:55.797222] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25350f0 00:19:05.619 [2024-07-11 02:24:55.797234] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.619 [2024-07-11 02:24:55.798772] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.619 [2024-07-11 02:24:55.798800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:05.619 BaseBdev3 00:19:05.619 02:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:19:05.877 [2024-07-11 02:24:56.053873] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:05.877 [2024-07-11 02:24:56.055166] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:05.877 [2024-07-11 02:24:56.055233] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:05.877 [2024-07-11 02:24:56.055429] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2386870 00:19:05.877 [2024-07-11 02:24:56.055442] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:05.877 [2024-07-11 02:24:56.055628] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2384530 00:19:05.877 [2024-07-11 02:24:56.055783] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2386870 00:19:05.877 [2024-07-11 02:24:56.055794] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2386870 00:19:05.877 [2024-07-11 02:24:56.055895] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.877 
02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:05.877 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:05.877 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:05.877 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:05.877 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:05.877 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:05.877 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:05.877 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:05.877 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:05.877 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:05.877 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.877 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.136 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:06.136 "name": "raid_bdev1", 00:19:06.136 "uuid": "e580a876-8d26-4098-8574-8f1014a5fbc2", 00:19:06.136 "strip_size_kb": 64, 00:19:06.136 "state": "online", 00:19:06.136 "raid_level": "concat", 00:19:06.136 "superblock": true, 00:19:06.136 "num_base_bdevs": 3, 00:19:06.136 "num_base_bdevs_discovered": 3, 00:19:06.136 "num_base_bdevs_operational": 3, 00:19:06.136 "base_bdevs_list": [ 00:19:06.136 { 00:19:06.136 "name": "BaseBdev1", 00:19:06.136 "uuid": "3c303437-104b-5907-a998-92dd61ba57c0", 00:19:06.136 "is_configured": true, 00:19:06.136 "data_offset": 2048, 00:19:06.136 "data_size": 63488 00:19:06.136 }, 00:19:06.136 { 00:19:06.136 "name": "BaseBdev2", 00:19:06.136 "uuid": "6da01ca3-e089-5a35-9fef-989c90d864e2", 00:19:06.136 "is_configured": true, 00:19:06.136 "data_offset": 2048, 00:19:06.136 "data_size": 63488 00:19:06.136 }, 00:19:06.136 { 00:19:06.136 "name": "BaseBdev3", 00:19:06.136 "uuid": "9f1c57e9-b6ba-5ff4-826b-d3d1d7a6eae1", 00:19:06.136 "is_configured": true, 00:19:06.136 "data_offset": 2048, 00:19:06.136 "data_size": 63488 00:19:06.136 } 00:19:06.136 ] 00:19:06.136 }' 00:19:06.136 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:06.136 02:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.703 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:06.703 02:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:19:06.703 [2024-07-11 02:24:56.976552] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x252fbc0 00:19:07.642 02:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:07.901 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local 
expected_num_base_bdevs 00:19:07.901 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:19:07.901 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:19:07.901 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:07.901 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:07.901 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:07.901 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:07.901 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:07.901 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:07.901 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:07.901 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:07.901 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:07.901 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:07.901 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.901 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.161 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:08.161 "name": "raid_bdev1", 00:19:08.161 "uuid": "e580a876-8d26-4098-8574-8f1014a5fbc2", 00:19:08.161 "strip_size_kb": 64, 00:19:08.161 "state": "online", 00:19:08.161 "raid_level": "concat", 00:19:08.161 "superblock": true, 00:19:08.161 "num_base_bdevs": 3, 00:19:08.161 "num_base_bdevs_discovered": 3, 00:19:08.161 "num_base_bdevs_operational": 3, 00:19:08.161 "base_bdevs_list": [ 00:19:08.161 { 00:19:08.161 "name": "BaseBdev1", 00:19:08.161 "uuid": "3c303437-104b-5907-a998-92dd61ba57c0", 00:19:08.161 "is_configured": true, 00:19:08.161 "data_offset": 2048, 00:19:08.161 "data_size": 63488 00:19:08.161 }, 00:19:08.161 { 00:19:08.161 "name": "BaseBdev2", 00:19:08.161 "uuid": "6da01ca3-e089-5a35-9fef-989c90d864e2", 00:19:08.161 "is_configured": true, 00:19:08.161 "data_offset": 2048, 00:19:08.161 "data_size": 63488 00:19:08.161 }, 00:19:08.161 { 00:19:08.161 "name": "BaseBdev3", 00:19:08.161 "uuid": "9f1c57e9-b6ba-5ff4-826b-d3d1d7a6eae1", 00:19:08.161 "is_configured": true, 00:19:08.161 "data_offset": 2048, 00:19:08.161 "data_size": 63488 00:19:08.161 } 00:19:08.161 ] 00:19:08.161 }' 00:19:08.161 02:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:08.161 02:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.729 02:24:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:08.988 [2024-07-11 02:24:59.255619] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:08.988 [2024-07-11 02:24:59.255655] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:08.988 [2024-07-11 
02:24:59.258809] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:08.988 [2024-07-11 02:24:59.258844] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.988 [2024-07-11 02:24:59.258876] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:08.988 [2024-07-11 02:24:59.258886] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2386870 name raid_bdev1, state offline 00:19:08.988 0 00:19:08.988 02:24:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1938202 00:19:08.988 02:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 1938202 ']' 00:19:08.988 02:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 1938202 00:19:08.988 02:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:19:08.988 02:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:08.988 02:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1938202 00:19:08.988 02:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:08.988 02:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:08.988 02:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1938202' 00:19:08.988 killing process with pid 1938202 00:19:08.988 02:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 1938202 00:19:08.988 [2024-07-11 02:24:59.340134] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:08.988 02:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 1938202 00:19:08.988 [2024-07-11 02:24:59.361038] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:09.247 02:24:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:19:09.247 02:24:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.CRJyJNuMLC 00:19:09.247 02:24:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:19:09.247 02:24:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.44 00:19:09.247 02:24:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:19:09.247 02:24:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:09.247 02:24:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:09.247 02:24:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.44 != \0\.\0\0 ]] 00:19:09.247 00:19:09.247 real 0m7.696s 00:19:09.247 user 0m12.756s 00:19:09.247 sys 0m1.371s 00:19:09.247 02:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:09.247 02:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.247 ************************************ 00:19:09.247 END TEST raid_read_error_test 00:19:09.247 ************************************ 00:19:09.247 02:24:59 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:09.247 02:24:59 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:19:09.247 02:24:59 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 
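
The read-error pass that finishes above follows a fixed RPC recipe: each base device is a malloc bdev wrapped first in an error-injection bdev (which takes the EE_ prefix) and then in a passthru bdev, the three passthrus are assembled into a 3-way concat raid, a read failure is injected into the first base device, and the failure rate is scraped out of the bdevperf log. A condensed sketch of that recipe, assuming a bdevperf instance already listening on /var/tmp/spdk-raid.sock and rpc.py resolving to SPDK's scripts/rpc.py:

  rpc="rpc.py -s /var/tmp/spdk-raid.sock"              # assumed shorthand for scripts/rpc.py
  for i in 1 2 3; do
    $rpc bdev_malloc_create 32 512 -b BaseBdev${i}_malloc        # 32 MB, 512-byte blocks
    $rpc bdev_error_create BaseBdev${i}_malloc                   # yields EE_BaseBdev${i}_malloc
    $rpc bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
  done
  $rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
  $rpc bdev_error_inject_error EE_BaseBdev1_malloc read failure  # fail reads on base bdev 1
  # after perform_tests, pull the failures-per-second column from the bdevperf log
  # (bdevperf_log is the mktemp file, /raidtest/tmp.CRJyJNuMLC above):
  fail_per_s=$(grep raid_bdev1 "$bdevperf_log" | grep -v Job | awk '{print $6}')
  [[ $fail_per_s != 0.00 ]]   # concat has no redundancy, so injected errors must surface

The non-zero fail_per_s (0.44 here) is the pass condition: has_redundancy returns 1 for concat, so the harness requires the injected read errors to reach the raid bdev rather than be absorbed by a redundant leg.
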
00:19:09.247 02:24:59 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:09.247 02:24:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:09.247 ************************************ 00:19:09.247 START TEST raid_write_error_test 00:19:09.247 ************************************ 00:19:09.247 02:24:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 write 00:19:09.247 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:19:09.506 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:19:09.506 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:19:09.506 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:19:09.506 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:09.506 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:19:09.506 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:09.506 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:09.506 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:19:09.506 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:09.506 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:09.506 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:19:09.506 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:09.506 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:09.506 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:09.506 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:19:09.506 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:19:09.506 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:19:09.506 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:19:09.506 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:19:09.506 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:19:09.506 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:19:09.506 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:19:09.506 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:19:09.506 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:19:09.506 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.IqLOIr4sd7 00:19:09.507 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1939291 00:19:09.507 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1939291 /var/tmp/spdk-raid.sock 00:19:09.507 02:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:09.507 02:24:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 1939291 ']' 00:19:09.507 02:24:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:09.507 02:24:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:09.507 02:24:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:09.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:09.507 02:24:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:09.507 02:24:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.507 [2024-07-11 02:24:59.748957] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:19:09.507 [2024-07-11 02:24:59.749026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1939291 ] 00:19:09.507 [2024-07-11 02:24:59.884458] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.766 [2024-07-11 02:24:59.933085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.766 [2024-07-11 02:24:59.990326] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:09.766 [2024-07-11 02:24:59.990359] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:10.334 02:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:10.334 02:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:19:10.334 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:10.334 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:10.593 BaseBdev1_malloc 00:19:10.593 02:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:10.853 true 00:19:10.853 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:11.112 [2024-07-11 02:25:01.419502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:11.112 [2024-07-11 02:25:01.419548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.112 [2024-07-11 02:25:01.419568] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbba330 00:19:11.112 [2024-07-11 02:25:01.419580] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.112 [2024-07-11 02:25:01.421240] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.112 [2024-07-11 
02:25:01.421268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:11.112 BaseBdev1 00:19:11.112 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:11.112 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:11.372 BaseBdev2_malloc 00:19:11.372 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:11.631 true 00:19:11.631 02:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:11.890 [2024-07-11 02:25:02.161818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:11.890 [2024-07-11 02:25:02.161859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.890 [2024-07-11 02:25:02.161876] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbb3b40 00:19:11.890 [2024-07-11 02:25:02.161893] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.890 [2024-07-11 02:25:02.163232] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.890 [2024-07-11 02:25:02.163258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:11.890 BaseBdev2 00:19:11.890 02:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:11.890 02:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:12.149 BaseBdev3_malloc 00:19:12.149 02:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:12.408 true 00:19:12.408 02:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:12.667 [2024-07-11 02:25:02.912399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:12.667 [2024-07-11 02:25:02.912442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.667 [2024-07-11 02:25:02.912463] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbb70f0 00:19:12.667 [2024-07-11 02:25:02.912476] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.667 [2024-07-11 02:25:02.913907] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.667 [2024-07-11 02:25:02.913936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:12.667 BaseBdev3 00:19:12.667 02:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 
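
The write pass being assembled here differs from the read pass only in the injected I/O type; the post-injection state check is the same jq probe over bdev_raid_get_bdevs. A minimal version of that probe, under the same socket and rpc.py assumptions as before:

  rpc="rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_error_inject_error EE_BaseBdev1_malloc write failure  # fail writes on base bdev 1
  info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  state=$(jq -r .state <<< "$info")
  discovered=$(jq -r .num_base_bdevs_discovered <<< "$info")
  # concat is not raid1, so all three base bdevs should remain discovered and online
  [[ $state == online && $discovered -eq 3 ]]

The `[[ concat = raid1 ]]` branch in the harness is what sets expected_num_base_bdevs: a raid1 array would drop the failing leg, but for concat the discovered count is expected to stay at 3 even while I/O errors are flowing.
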
00:19:12.926 [2024-07-11 02:25:03.157072] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:12.926 [2024-07-11 02:25:03.158292] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:12.926 [2024-07-11 02:25:03.158359] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:12.926 [2024-07-11 02:25:03.158556] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xa08870 00:19:12.926 [2024-07-11 02:25:03.158568] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:12.926 [2024-07-11 02:25:03.158766] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa06530 00:19:12.926 [2024-07-11 02:25:03.158910] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xa08870 00:19:12.926 [2024-07-11 02:25:03.158920] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xa08870 00:19:12.926 [2024-07-11 02:25:03.159017] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:12.926 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:12.926 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:12.926 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:12.926 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:12.926 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:12.926 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:12.926 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:12.926 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:12.926 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:12.926 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:12.926 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.926 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.185 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:13.185 "name": "raid_bdev1", 00:19:13.185 "uuid": "80c5f52a-ec03-41ad-aad9-7104d14caa55", 00:19:13.185 "strip_size_kb": 64, 00:19:13.185 "state": "online", 00:19:13.185 "raid_level": "concat", 00:19:13.185 "superblock": true, 00:19:13.185 "num_base_bdevs": 3, 00:19:13.185 "num_base_bdevs_discovered": 3, 00:19:13.185 "num_base_bdevs_operational": 3, 00:19:13.185 "base_bdevs_list": [ 00:19:13.185 { 00:19:13.185 "name": "BaseBdev1", 00:19:13.185 "uuid": "938b51c9-041c-5137-a77f-107e1edf2310", 00:19:13.185 "is_configured": true, 00:19:13.185 "data_offset": 2048, 00:19:13.185 "data_size": 63488 00:19:13.185 }, 00:19:13.185 { 00:19:13.185 "name": "BaseBdev2", 00:19:13.185 "uuid": "035f80f9-ba4d-500d-8f7c-5d33727b9ea5", 00:19:13.185 "is_configured": true, 00:19:13.185 "data_offset": 2048, 00:19:13.185 "data_size": 63488 00:19:13.185 }, 00:19:13.185 { 00:19:13.185 "name": 
"BaseBdev3", 00:19:13.185 "uuid": "1cd53d47-25be-52b6-ab67-60b4bfcd635b", 00:19:13.185 "is_configured": true, 00:19:13.185 "data_offset": 2048, 00:19:13.185 "data_size": 63488 00:19:13.185 } 00:19:13.185 ] 00:19:13.185 }' 00:19:13.185 02:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:13.185 02:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.752 02:25:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:19:13.752 02:25:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:13.752 [2024-07-11 02:25:04.172022] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xbb1bc0 00:19:14.689 02:25:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:14.948 02:25:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:19:14.948 02:25:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:19:14.948 02:25:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:19:14.948 02:25:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:14.948 02:25:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:14.948 02:25:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:14.948 02:25:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:14.948 02:25:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:14.948 02:25:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:14.948 02:25:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:14.948 02:25:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:14.948 02:25:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:14.948 02:25:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:14.948 02:25:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.948 02:25:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.207 02:25:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:15.207 "name": "raid_bdev1", 00:19:15.207 "uuid": "80c5f52a-ec03-41ad-aad9-7104d14caa55", 00:19:15.207 "strip_size_kb": 64, 00:19:15.207 "state": "online", 00:19:15.207 "raid_level": "concat", 00:19:15.207 "superblock": true, 00:19:15.207 "num_base_bdevs": 3, 00:19:15.207 "num_base_bdevs_discovered": 3, 00:19:15.207 "num_base_bdevs_operational": 3, 00:19:15.207 "base_bdevs_list": [ 00:19:15.207 { 00:19:15.207 "name": "BaseBdev1", 00:19:15.207 "uuid": "938b51c9-041c-5137-a77f-107e1edf2310", 00:19:15.207 "is_configured": true, 00:19:15.207 "data_offset": 2048, 00:19:15.207 "data_size": 63488 
00:19:15.207 }, 00:19:15.207 { 00:19:15.207 "name": "BaseBdev2", 00:19:15.207 "uuid": "035f80f9-ba4d-500d-8f7c-5d33727b9ea5", 00:19:15.207 "is_configured": true, 00:19:15.207 "data_offset": 2048, 00:19:15.207 "data_size": 63488 00:19:15.207 }, 00:19:15.207 { 00:19:15.207 "name": "BaseBdev3", 00:19:15.207 "uuid": "1cd53d47-25be-52b6-ab67-60b4bfcd635b", 00:19:15.207 "is_configured": true, 00:19:15.207 "data_offset": 2048, 00:19:15.207 "data_size": 63488 00:19:15.207 } 00:19:15.207 ] 00:19:15.207 }' 00:19:15.207 02:25:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:15.207 02:25:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.776 02:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:16.035 [2024-07-11 02:25:06.401858] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:16.035 [2024-07-11 02:25:06.401900] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:16.035 [2024-07-11 02:25:06.405074] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:16.035 [2024-07-11 02:25:06.405111] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:16.035 [2024-07-11 02:25:06.405144] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:16.035 [2024-07-11 02:25:06.405155] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa08870 name raid_bdev1, state offline 00:19:16.035 0 00:19:16.035 02:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1939291 00:19:16.035 02:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 1939291 ']' 00:19:16.035 02:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 1939291 00:19:16.035 02:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:19:16.035 02:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:16.035 02:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1939291 00:19:16.296 02:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:16.296 02:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:16.296 02:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1939291' 00:19:16.296 killing process with pid 1939291 00:19:16.296 02:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 1939291 00:19:16.296 [2024-07-11 02:25:06.485281] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:16.296 02:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 1939291 00:19:16.296 [2024-07-11 02:25:06.505342] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:16.296 02:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.IqLOIr4sd7 00:19:16.296 02:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:19:16.296 02:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:19:16.296 02:25:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:19:16.296 02:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:19:16.296 02:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:16.296 02:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:16.296 02:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:19:16.296 00:19:16.296 real 0m7.039s 00:19:16.296 user 0m11.147s 00:19:16.296 sys 0m1.290s 00:19:16.296 02:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:16.296 02:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.296 ************************************ 00:19:16.296 END TEST raid_write_error_test 00:19:16.296 ************************************ 00:19:16.555 02:25:06 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:16.555 02:25:06 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:19:16.555 02:25:06 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:19:16.555 02:25:06 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:16.555 02:25:06 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:16.555 02:25:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:16.555 ************************************ 00:19:16.555 START TEST raid_state_function_test 00:19:16.555 ************************************ 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 false 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1940326 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1940326' 00:19:16.555 Process raid pid: 1940326 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1940326 /var/tmp/spdk-raid.sock 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 1940326 ']' 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:16.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:16.555 02:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.555 [2024-07-11 02:25:06.866091] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
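
raid_state_function_test, starting here, exercises the configuring-to-online transition rather than I/O: Existed_Raid is created while its base bdevs are still missing, then malloc bdevs are registered one at a time until num_base_bdevs_discovered reaches 3 and the state flips to online. A compressed sketch of that progression (it elides the delete/re-create cycles the harness performs between steps), against the bdev_svc socket above and the same assumed rpc.py shorthand:

  rpc="rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  # none of the base bdevs exist yet, so the raid sits in "configuring"
  for i in 1 2 3; do
    $rpc bdev_malloc_create 32 512 -b BaseBdev${i}
    $rpc bdev_raid_get_bdevs all | jq -r \
      '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)"'
  done
  # expected probe output: "configuring 1", "configuring 2", then "online 3"
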
00:19:16.555 [2024-07-11 02:25:06.866156] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.814 [2024-07-11 02:25:07.003057] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.814 [2024-07-11 02:25:07.051787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.814 [2024-07-11 02:25:07.106558] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:16.814 [2024-07-11 02:25:07.106581] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:17.073 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:17.073 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:19:17.073 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:17.332 [2024-07-11 02:25:07.555182] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:17.333 [2024-07-11 02:25:07.555223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:17.333 [2024-07-11 02:25:07.555234] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:17.333 [2024-07-11 02:25:07.555246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:17.333 [2024-07-11 02:25:07.555255] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:17.333 [2024-07-11 02:25:07.555267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:17.333 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:17.333 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:17.333 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:17.333 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:17.333 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:17.333 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:17.333 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:17.333 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:17.333 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:17.333 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:17.333 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.333 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.592 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:19:17.592 "name": "Existed_Raid", 00:19:17.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.592 "strip_size_kb": 0, 00:19:17.592 "state": "configuring", 00:19:17.592 "raid_level": "raid1", 00:19:17.592 "superblock": false, 00:19:17.592 "num_base_bdevs": 3, 00:19:17.592 "num_base_bdevs_discovered": 0, 00:19:17.592 "num_base_bdevs_operational": 3, 00:19:17.592 "base_bdevs_list": [ 00:19:17.592 { 00:19:17.592 "name": "BaseBdev1", 00:19:17.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.592 "is_configured": false, 00:19:17.592 "data_offset": 0, 00:19:17.592 "data_size": 0 00:19:17.592 }, 00:19:17.592 { 00:19:17.592 "name": "BaseBdev2", 00:19:17.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.592 "is_configured": false, 00:19:17.592 "data_offset": 0, 00:19:17.592 "data_size": 0 00:19:17.592 }, 00:19:17.592 { 00:19:17.592 "name": "BaseBdev3", 00:19:17.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.592 "is_configured": false, 00:19:17.592 "data_offset": 0, 00:19:17.592 "data_size": 0 00:19:17.592 } 00:19:17.592 ] 00:19:17.592 }' 00:19:17.592 02:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:17.592 02:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.159 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:18.418 [2024-07-11 02:25:08.629888] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:18.418 [2024-07-11 02:25:08.629917] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18915a0 name Existed_Raid, state configuring 00:19:18.418 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:18.677 [2024-07-11 02:25:08.874558] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:18.677 [2024-07-11 02:25:08.874590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:18.677 [2024-07-11 02:25:08.874600] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:18.677 [2024-07-11 02:25:08.874612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:18.677 [2024-07-11 02:25:08.874621] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:18.677 [2024-07-11 02:25:08.874632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:18.677 02:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:18.936 [2024-07-11 02:25:09.128936] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:18.936 BaseBdev1 00:19:18.936 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:18.936 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:18.936 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:18.936 
02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:18.936 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:18.936 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:18.936 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:19.195 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:19.453 [ 00:19:19.453 { 00:19:19.454 "name": "BaseBdev1", 00:19:19.454 "aliases": [ 00:19:19.454 "da120cb4-2c5a-4a4c-91a2-9be7f7730abb" 00:19:19.454 ], 00:19:19.454 "product_name": "Malloc disk", 00:19:19.454 "block_size": 512, 00:19:19.454 "num_blocks": 65536, 00:19:19.454 "uuid": "da120cb4-2c5a-4a4c-91a2-9be7f7730abb", 00:19:19.454 "assigned_rate_limits": { 00:19:19.454 "rw_ios_per_sec": 0, 00:19:19.454 "rw_mbytes_per_sec": 0, 00:19:19.454 "r_mbytes_per_sec": 0, 00:19:19.454 "w_mbytes_per_sec": 0 00:19:19.454 }, 00:19:19.454 "claimed": true, 00:19:19.454 "claim_type": "exclusive_write", 00:19:19.454 "zoned": false, 00:19:19.454 "supported_io_types": { 00:19:19.454 "read": true, 00:19:19.454 "write": true, 00:19:19.454 "unmap": true, 00:19:19.454 "flush": true, 00:19:19.454 "reset": true, 00:19:19.454 "nvme_admin": false, 00:19:19.454 "nvme_io": false, 00:19:19.454 "nvme_io_md": false, 00:19:19.454 "write_zeroes": true, 00:19:19.454 "zcopy": true, 00:19:19.454 "get_zone_info": false, 00:19:19.454 "zone_management": false, 00:19:19.454 "zone_append": false, 00:19:19.454 "compare": false, 00:19:19.454 "compare_and_write": false, 00:19:19.454 "abort": true, 00:19:19.454 "seek_hole": false, 00:19:19.454 "seek_data": false, 00:19:19.454 "copy": true, 00:19:19.454 "nvme_iov_md": false 00:19:19.454 }, 00:19:19.454 "memory_domains": [ 00:19:19.454 { 00:19:19.454 "dma_device_id": "system", 00:19:19.454 "dma_device_type": 1 00:19:19.454 }, 00:19:19.454 { 00:19:19.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:19.454 "dma_device_type": 2 00:19:19.454 } 00:19:19.454 ], 00:19:19.454 "driver_specific": {} 00:19:19.454 } 00:19:19.454 ] 00:19:19.454 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:19.454 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:19.454 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:19.454 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:19.454 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:19.454 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:19.454 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:19.454 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:19.454 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:19.454 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:19:19.454 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:19.454 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.454 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.713 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:19.713 "name": "Existed_Raid", 00:19:19.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.713 "strip_size_kb": 0, 00:19:19.713 "state": "configuring", 00:19:19.713 "raid_level": "raid1", 00:19:19.713 "superblock": false, 00:19:19.713 "num_base_bdevs": 3, 00:19:19.713 "num_base_bdevs_discovered": 1, 00:19:19.713 "num_base_bdevs_operational": 3, 00:19:19.713 "base_bdevs_list": [ 00:19:19.713 { 00:19:19.713 "name": "BaseBdev1", 00:19:19.713 "uuid": "da120cb4-2c5a-4a4c-91a2-9be7f7730abb", 00:19:19.713 "is_configured": true, 00:19:19.713 "data_offset": 0, 00:19:19.713 "data_size": 65536 00:19:19.713 }, 00:19:19.713 { 00:19:19.713 "name": "BaseBdev2", 00:19:19.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.713 "is_configured": false, 00:19:19.713 "data_offset": 0, 00:19:19.713 "data_size": 0 00:19:19.713 }, 00:19:19.713 { 00:19:19.713 "name": "BaseBdev3", 00:19:19.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.713 "is_configured": false, 00:19:19.713 "data_offset": 0, 00:19:19.713 "data_size": 0 00:19:19.713 } 00:19:19.713 ] 00:19:19.713 }' 00:19:19.713 02:25:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:19.713 02:25:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.281 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:20.540 [2024-07-11 02:25:10.781316] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:20.540 [2024-07-11 02:25:10.781355] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1890ed0 name Existed_Raid, state configuring 00:19:20.540 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:20.540 [2024-07-11 02:25:10.961831] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:20.540 [2024-07-11 02:25:10.963323] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:20.540 [2024-07-11 02:25:10.963354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:20.540 [2024-07-11 02:25:10.963364] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:20.540 [2024-07-11 02:25:10.963376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:20.797 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:20.797 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:20.797 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:20.797 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:20.797 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:20.797 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:20.797 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:20.797 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:20.797 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:20.797 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:20.797 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:20.797 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:20.797 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.797 02:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.055 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:21.055 "name": "Existed_Raid", 00:19:21.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.055 "strip_size_kb": 0, 00:19:21.055 "state": "configuring", 00:19:21.055 "raid_level": "raid1", 00:19:21.055 "superblock": false, 00:19:21.055 "num_base_bdevs": 3, 00:19:21.055 "num_base_bdevs_discovered": 1, 00:19:21.055 "num_base_bdevs_operational": 3, 00:19:21.055 "base_bdevs_list": [ 00:19:21.055 { 00:19:21.055 "name": "BaseBdev1", 00:19:21.055 "uuid": "da120cb4-2c5a-4a4c-91a2-9be7f7730abb", 00:19:21.055 "is_configured": true, 00:19:21.055 "data_offset": 0, 00:19:21.055 "data_size": 65536 00:19:21.055 }, 00:19:21.055 { 00:19:21.055 "name": "BaseBdev2", 00:19:21.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.055 "is_configured": false, 00:19:21.055 "data_offset": 0, 00:19:21.055 "data_size": 0 00:19:21.055 }, 00:19:21.055 { 00:19:21.055 "name": "BaseBdev3", 00:19:21.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.055 "is_configured": false, 00:19:21.055 "data_offset": 0, 00:19:21.055 "data_size": 0 00:19:21.055 } 00:19:21.055 ] 00:19:21.055 }' 00:19:21.055 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:21.055 02:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.620 02:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:22.187 [2024-07-11 02:25:12.356782] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:22.187 BaseBdev2 00:19:22.187 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:22.187 02:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:22.187 02:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:22.187 02:25:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:22.187 02:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:22.187 02:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:22.187 02:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:22.445 02:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:22.445 [ 00:19:22.445 { 00:19:22.445 "name": "BaseBdev2", 00:19:22.445 "aliases": [ 00:19:22.445 "5d27e9a5-a9f3-4b86-806f-afb01288ef7d" 00:19:22.445 ], 00:19:22.445 "product_name": "Malloc disk", 00:19:22.445 "block_size": 512, 00:19:22.445 "num_blocks": 65536, 00:19:22.445 "uuid": "5d27e9a5-a9f3-4b86-806f-afb01288ef7d", 00:19:22.445 "assigned_rate_limits": { 00:19:22.445 "rw_ios_per_sec": 0, 00:19:22.445 "rw_mbytes_per_sec": 0, 00:19:22.445 "r_mbytes_per_sec": 0, 00:19:22.445 "w_mbytes_per_sec": 0 00:19:22.445 }, 00:19:22.445 "claimed": true, 00:19:22.445 "claim_type": "exclusive_write", 00:19:22.445 "zoned": false, 00:19:22.445 "supported_io_types": { 00:19:22.445 "read": true, 00:19:22.445 "write": true, 00:19:22.445 "unmap": true, 00:19:22.445 "flush": true, 00:19:22.445 "reset": true, 00:19:22.445 "nvme_admin": false, 00:19:22.445 "nvme_io": false, 00:19:22.445 "nvme_io_md": false, 00:19:22.445 "write_zeroes": true, 00:19:22.445 "zcopy": true, 00:19:22.445 "get_zone_info": false, 00:19:22.445 "zone_management": false, 00:19:22.445 "zone_append": false, 00:19:22.445 "compare": false, 00:19:22.445 "compare_and_write": false, 00:19:22.445 "abort": true, 00:19:22.445 "seek_hole": false, 00:19:22.445 "seek_data": false, 00:19:22.445 "copy": true, 00:19:22.445 "nvme_iov_md": false 00:19:22.445 }, 00:19:22.445 "memory_domains": [ 00:19:22.445 { 00:19:22.445 "dma_device_id": "system", 00:19:22.445 "dma_device_type": 1 00:19:22.445 }, 00:19:22.445 { 00:19:22.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.445 "dma_device_type": 2 00:19:22.445 } 00:19:22.445 ], 00:19:22.445 "driver_specific": {} 00:19:22.445 } 00:19:22.445 ] 00:19:22.712 02:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:22.712 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:22.712 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:22.712 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:22.712 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:22.712 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:22.712 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:22.712 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:22.712 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:22.712 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:22.712 
02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:22.712 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:22.712 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:22.712 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.712 02:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:22.969 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:22.969 "name": "Existed_Raid", 00:19:22.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.969 "strip_size_kb": 0, 00:19:22.969 "state": "configuring", 00:19:22.969 "raid_level": "raid1", 00:19:22.969 "superblock": false, 00:19:22.969 "num_base_bdevs": 3, 00:19:22.969 "num_base_bdevs_discovered": 2, 00:19:22.969 "num_base_bdevs_operational": 3, 00:19:22.969 "base_bdevs_list": [ 00:19:22.969 { 00:19:22.969 "name": "BaseBdev1", 00:19:22.969 "uuid": "da120cb4-2c5a-4a4c-91a2-9be7f7730abb", 00:19:22.969 "is_configured": true, 00:19:22.969 "data_offset": 0, 00:19:22.969 "data_size": 65536 00:19:22.969 }, 00:19:22.969 { 00:19:22.969 "name": "BaseBdev2", 00:19:22.969 "uuid": "5d27e9a5-a9f3-4b86-806f-afb01288ef7d", 00:19:22.969 "is_configured": true, 00:19:22.969 "data_offset": 0, 00:19:22.969 "data_size": 65536 00:19:22.969 }, 00:19:22.969 { 00:19:22.969 "name": "BaseBdev3", 00:19:22.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.969 "is_configured": false, 00:19:22.969 "data_offset": 0, 00:19:22.969 "data_size": 0 00:19:22.969 } 00:19:22.969 ] 00:19:22.969 }' 00:19:22.969 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:22.969 02:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.903 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:24.161 [2024-07-11 02:25:14.350546] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:24.161 [2024-07-11 02:25:14.350584] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a43cf0 00:19:24.161 [2024-07-11 02:25:14.350593] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:24.161 [2024-07-11 02:25:14.350858] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1895c90 00:19:24.161 [2024-07-11 02:25:14.350985] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a43cf0 00:19:24.161 [2024-07-11 02:25:14.350995] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1a43cf0 00:19:24.161 [2024-07-11 02:25:14.351158] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:24.161 BaseBdev3 00:19:24.161 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:24.161 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:24.161 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:24.161 02:25:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:24.161 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:24.161 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:24.161 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:24.420 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:24.679 [ 00:19:24.679 { 00:19:24.679 "name": "BaseBdev3", 00:19:24.679 "aliases": [ 00:19:24.679 "8e5a4c42-5499-4504-ab24-2c22226122d3" 00:19:24.679 ], 00:19:24.679 "product_name": "Malloc disk", 00:19:24.679 "block_size": 512, 00:19:24.679 "num_blocks": 65536, 00:19:24.679 "uuid": "8e5a4c42-5499-4504-ab24-2c22226122d3", 00:19:24.679 "assigned_rate_limits": { 00:19:24.679 "rw_ios_per_sec": 0, 00:19:24.679 "rw_mbytes_per_sec": 0, 00:19:24.679 "r_mbytes_per_sec": 0, 00:19:24.679 "w_mbytes_per_sec": 0 00:19:24.679 }, 00:19:24.679 "claimed": true, 00:19:24.679 "claim_type": "exclusive_write", 00:19:24.679 "zoned": false, 00:19:24.679 "supported_io_types": { 00:19:24.679 "read": true, 00:19:24.679 "write": true, 00:19:24.679 "unmap": true, 00:19:24.679 "flush": true, 00:19:24.679 "reset": true, 00:19:24.679 "nvme_admin": false, 00:19:24.679 "nvme_io": false, 00:19:24.679 "nvme_io_md": false, 00:19:24.679 "write_zeroes": true, 00:19:24.679 "zcopy": true, 00:19:24.679 "get_zone_info": false, 00:19:24.679 "zone_management": false, 00:19:24.679 "zone_append": false, 00:19:24.679 "compare": false, 00:19:24.679 "compare_and_write": false, 00:19:24.679 "abort": true, 00:19:24.679 "seek_hole": false, 00:19:24.679 "seek_data": false, 00:19:24.679 "copy": true, 00:19:24.679 "nvme_iov_md": false 00:19:24.679 }, 00:19:24.679 "memory_domains": [ 00:19:24.679 { 00:19:24.679 "dma_device_id": "system", 00:19:24.679 "dma_device_type": 1 00:19:24.679 }, 00:19:24.679 { 00:19:24.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.679 "dma_device_type": 2 00:19:24.679 } 00:19:24.679 ], 00:19:24.679 "driver_specific": {} 00:19:24.679 } 00:19:24.680 ] 00:19:24.680 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:24.680 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:24.680 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:24.680 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:24.680 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:24.680 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:24.680 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:24.680 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:24.680 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:24.680 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:24.680 02:25:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:24.680 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:24.680 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:24.680 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.680 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:24.948 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:24.948 "name": "Existed_Raid", 00:19:24.948 "uuid": "b5d01b42-970d-499b-b78d-b60844689063", 00:19:24.948 "strip_size_kb": 0, 00:19:24.948 "state": "online", 00:19:24.948 "raid_level": "raid1", 00:19:24.948 "superblock": false, 00:19:24.948 "num_base_bdevs": 3, 00:19:24.948 "num_base_bdevs_discovered": 3, 00:19:24.948 "num_base_bdevs_operational": 3, 00:19:24.948 "base_bdevs_list": [ 00:19:24.948 { 00:19:24.948 "name": "BaseBdev1", 00:19:24.948 "uuid": "da120cb4-2c5a-4a4c-91a2-9be7f7730abb", 00:19:24.948 "is_configured": true, 00:19:24.948 "data_offset": 0, 00:19:24.948 "data_size": 65536 00:19:24.948 }, 00:19:24.948 { 00:19:24.948 "name": "BaseBdev2", 00:19:24.948 "uuid": "5d27e9a5-a9f3-4b86-806f-afb01288ef7d", 00:19:24.948 "is_configured": true, 00:19:24.948 "data_offset": 0, 00:19:24.948 "data_size": 65536 00:19:24.948 }, 00:19:24.948 { 00:19:24.948 "name": "BaseBdev3", 00:19:24.948 "uuid": "8e5a4c42-5499-4504-ab24-2c22226122d3", 00:19:24.948 "is_configured": true, 00:19:24.948 "data_offset": 0, 00:19:24.948 "data_size": 65536 00:19:24.948 } 00:19:24.948 ] 00:19:24.948 }' 00:19:24.948 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:24.948 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.621 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:25.621 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:25.621 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:25.621 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:25.621 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:25.621 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:25.621 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:25.621 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:25.881 [2024-07-11 02:25:16.227863] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:25.881 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:25.881 "name": "Existed_Raid", 00:19:25.881 "aliases": [ 00:19:25.881 "b5d01b42-970d-499b-b78d-b60844689063" 00:19:25.881 ], 00:19:25.881 "product_name": "Raid Volume", 00:19:25.881 "block_size": 512, 00:19:25.881 "num_blocks": 65536, 00:19:25.881 "uuid": "b5d01b42-970d-499b-b78d-b60844689063", 
00:19:25.881 "assigned_rate_limits": { 00:19:25.881 "rw_ios_per_sec": 0, 00:19:25.881 "rw_mbytes_per_sec": 0, 00:19:25.881 "r_mbytes_per_sec": 0, 00:19:25.881 "w_mbytes_per_sec": 0 00:19:25.881 }, 00:19:25.881 "claimed": false, 00:19:25.881 "zoned": false, 00:19:25.881 "supported_io_types": { 00:19:25.881 "read": true, 00:19:25.881 "write": true, 00:19:25.881 "unmap": false, 00:19:25.881 "flush": false, 00:19:25.881 "reset": true, 00:19:25.881 "nvme_admin": false, 00:19:25.881 "nvme_io": false, 00:19:25.881 "nvme_io_md": false, 00:19:25.881 "write_zeroes": true, 00:19:25.881 "zcopy": false, 00:19:25.881 "get_zone_info": false, 00:19:25.881 "zone_management": false, 00:19:25.881 "zone_append": false, 00:19:25.881 "compare": false, 00:19:25.881 "compare_and_write": false, 00:19:25.881 "abort": false, 00:19:25.881 "seek_hole": false, 00:19:25.881 "seek_data": false, 00:19:25.881 "copy": false, 00:19:25.881 "nvme_iov_md": false 00:19:25.881 }, 00:19:25.881 "memory_domains": [ 00:19:25.881 { 00:19:25.881 "dma_device_id": "system", 00:19:25.881 "dma_device_type": 1 00:19:25.881 }, 00:19:25.881 { 00:19:25.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.881 "dma_device_type": 2 00:19:25.881 }, 00:19:25.881 { 00:19:25.881 "dma_device_id": "system", 00:19:25.881 "dma_device_type": 1 00:19:25.881 }, 00:19:25.881 { 00:19:25.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.881 "dma_device_type": 2 00:19:25.881 }, 00:19:25.881 { 00:19:25.881 "dma_device_id": "system", 00:19:25.881 "dma_device_type": 1 00:19:25.881 }, 00:19:25.881 { 00:19:25.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.881 "dma_device_type": 2 00:19:25.881 } 00:19:25.881 ], 00:19:25.881 "driver_specific": { 00:19:25.881 "raid": { 00:19:25.881 "uuid": "b5d01b42-970d-499b-b78d-b60844689063", 00:19:25.881 "strip_size_kb": 0, 00:19:25.881 "state": "online", 00:19:25.881 "raid_level": "raid1", 00:19:25.881 "superblock": false, 00:19:25.881 "num_base_bdevs": 3, 00:19:25.881 "num_base_bdevs_discovered": 3, 00:19:25.881 "num_base_bdevs_operational": 3, 00:19:25.881 "base_bdevs_list": [ 00:19:25.881 { 00:19:25.881 "name": "BaseBdev1", 00:19:25.881 "uuid": "da120cb4-2c5a-4a4c-91a2-9be7f7730abb", 00:19:25.881 "is_configured": true, 00:19:25.881 "data_offset": 0, 00:19:25.881 "data_size": 65536 00:19:25.881 }, 00:19:25.881 { 00:19:25.881 "name": "BaseBdev2", 00:19:25.881 "uuid": "5d27e9a5-a9f3-4b86-806f-afb01288ef7d", 00:19:25.881 "is_configured": true, 00:19:25.881 "data_offset": 0, 00:19:25.881 "data_size": 65536 00:19:25.881 }, 00:19:25.881 { 00:19:25.881 "name": "BaseBdev3", 00:19:25.881 "uuid": "8e5a4c42-5499-4504-ab24-2c22226122d3", 00:19:25.881 "is_configured": true, 00:19:25.881 "data_offset": 0, 00:19:25.881 "data_size": 65536 00:19:25.881 } 00:19:25.881 ] 00:19:25.881 } 00:19:25.881 } 00:19:25.881 }' 00:19:25.881 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:25.881 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:25.881 BaseBdev2 00:19:25.881 BaseBdev3' 00:19:25.881 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:26.140 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:26.140 02:25:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:26.140 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:26.140 "name": "BaseBdev1", 00:19:26.140 "aliases": [ 00:19:26.140 "da120cb4-2c5a-4a4c-91a2-9be7f7730abb" 00:19:26.140 ], 00:19:26.140 "product_name": "Malloc disk", 00:19:26.140 "block_size": 512, 00:19:26.140 "num_blocks": 65536, 00:19:26.140 "uuid": "da120cb4-2c5a-4a4c-91a2-9be7f7730abb", 00:19:26.140 "assigned_rate_limits": { 00:19:26.140 "rw_ios_per_sec": 0, 00:19:26.140 "rw_mbytes_per_sec": 0, 00:19:26.140 "r_mbytes_per_sec": 0, 00:19:26.140 "w_mbytes_per_sec": 0 00:19:26.140 }, 00:19:26.140 "claimed": true, 00:19:26.140 "claim_type": "exclusive_write", 00:19:26.140 "zoned": false, 00:19:26.140 "supported_io_types": { 00:19:26.140 "read": true, 00:19:26.140 "write": true, 00:19:26.140 "unmap": true, 00:19:26.140 "flush": true, 00:19:26.140 "reset": true, 00:19:26.140 "nvme_admin": false, 00:19:26.140 "nvme_io": false, 00:19:26.140 "nvme_io_md": false, 00:19:26.140 "write_zeroes": true, 00:19:26.140 "zcopy": true, 00:19:26.140 "get_zone_info": false, 00:19:26.140 "zone_management": false, 00:19:26.140 "zone_append": false, 00:19:26.140 "compare": false, 00:19:26.140 "compare_and_write": false, 00:19:26.140 "abort": true, 00:19:26.140 "seek_hole": false, 00:19:26.140 "seek_data": false, 00:19:26.140 "copy": true, 00:19:26.140 "nvme_iov_md": false 00:19:26.140 }, 00:19:26.140 "memory_domains": [ 00:19:26.140 { 00:19:26.140 "dma_device_id": "system", 00:19:26.140 "dma_device_type": 1 00:19:26.140 }, 00:19:26.140 { 00:19:26.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:26.140 "dma_device_type": 2 00:19:26.140 } 00:19:26.140 ], 00:19:26.140 "driver_specific": {} 00:19:26.140 }' 00:19:26.140 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:26.399 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:26.399 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:26.399 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:26.399 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:26.399 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:26.399 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:26.399 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:26.658 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:26.658 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:26.658 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:26.658 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:26.658 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:26.658 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:26.658 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:26.917 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:26.917 "name": "BaseBdev2", 
00:19:26.917 "aliases": [ 00:19:26.917 "5d27e9a5-a9f3-4b86-806f-afb01288ef7d" 00:19:26.917 ], 00:19:26.917 "product_name": "Malloc disk", 00:19:26.917 "block_size": 512, 00:19:26.917 "num_blocks": 65536, 00:19:26.917 "uuid": "5d27e9a5-a9f3-4b86-806f-afb01288ef7d", 00:19:26.917 "assigned_rate_limits": { 00:19:26.917 "rw_ios_per_sec": 0, 00:19:26.917 "rw_mbytes_per_sec": 0, 00:19:26.917 "r_mbytes_per_sec": 0, 00:19:26.917 "w_mbytes_per_sec": 0 00:19:26.917 }, 00:19:26.917 "claimed": true, 00:19:26.917 "claim_type": "exclusive_write", 00:19:26.917 "zoned": false, 00:19:26.917 "supported_io_types": { 00:19:26.917 "read": true, 00:19:26.917 "write": true, 00:19:26.917 "unmap": true, 00:19:26.917 "flush": true, 00:19:26.917 "reset": true, 00:19:26.917 "nvme_admin": false, 00:19:26.917 "nvme_io": false, 00:19:26.917 "nvme_io_md": false, 00:19:26.917 "write_zeroes": true, 00:19:26.917 "zcopy": true, 00:19:26.917 "get_zone_info": false, 00:19:26.917 "zone_management": false, 00:19:26.917 "zone_append": false, 00:19:26.917 "compare": false, 00:19:26.917 "compare_and_write": false, 00:19:26.917 "abort": true, 00:19:26.917 "seek_hole": false, 00:19:26.917 "seek_data": false, 00:19:26.918 "copy": true, 00:19:26.918 "nvme_iov_md": false 00:19:26.918 }, 00:19:26.918 "memory_domains": [ 00:19:26.918 { 00:19:26.918 "dma_device_id": "system", 00:19:26.918 "dma_device_type": 1 00:19:26.918 }, 00:19:26.918 { 00:19:26.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:26.918 "dma_device_type": 2 00:19:26.918 } 00:19:26.918 ], 00:19:26.918 "driver_specific": {} 00:19:26.918 }' 00:19:26.918 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:26.918 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:27.177 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:27.177 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:27.177 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:27.177 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:27.177 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:27.177 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:27.177 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:27.177 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:27.436 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:27.436 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:27.436 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:27.436 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:27.436 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:27.695 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:27.695 "name": "BaseBdev3", 00:19:27.695 "aliases": [ 00:19:27.695 "8e5a4c42-5499-4504-ab24-2c22226122d3" 00:19:27.695 ], 00:19:27.695 "product_name": "Malloc disk", 00:19:27.695 "block_size": 512, 
00:19:27.695 "num_blocks": 65536, 00:19:27.695 "uuid": "8e5a4c42-5499-4504-ab24-2c22226122d3", 00:19:27.695 "assigned_rate_limits": { 00:19:27.695 "rw_ios_per_sec": 0, 00:19:27.695 "rw_mbytes_per_sec": 0, 00:19:27.695 "r_mbytes_per_sec": 0, 00:19:27.695 "w_mbytes_per_sec": 0 00:19:27.695 }, 00:19:27.695 "claimed": true, 00:19:27.695 "claim_type": "exclusive_write", 00:19:27.695 "zoned": false, 00:19:27.695 "supported_io_types": { 00:19:27.695 "read": true, 00:19:27.695 "write": true, 00:19:27.695 "unmap": true, 00:19:27.695 "flush": true, 00:19:27.695 "reset": true, 00:19:27.695 "nvme_admin": false, 00:19:27.695 "nvme_io": false, 00:19:27.695 "nvme_io_md": false, 00:19:27.695 "write_zeroes": true, 00:19:27.695 "zcopy": true, 00:19:27.695 "get_zone_info": false, 00:19:27.695 "zone_management": false, 00:19:27.695 "zone_append": false, 00:19:27.695 "compare": false, 00:19:27.695 "compare_and_write": false, 00:19:27.695 "abort": true, 00:19:27.695 "seek_hole": false, 00:19:27.695 "seek_data": false, 00:19:27.695 "copy": true, 00:19:27.695 "nvme_iov_md": false 00:19:27.695 }, 00:19:27.695 "memory_domains": [ 00:19:27.695 { 00:19:27.695 "dma_device_id": "system", 00:19:27.695 "dma_device_type": 1 00:19:27.695 }, 00:19:27.695 { 00:19:27.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.695 "dma_device_type": 2 00:19:27.695 } 00:19:27.695 ], 00:19:27.695 "driver_specific": {} 00:19:27.695 }' 00:19:27.695 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:27.695 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:27.695 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:27.695 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:27.954 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:27.954 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:27.954 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:27.954 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:27.954 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:27.954 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:27.954 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:28.213 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:28.213 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:28.213 [2024-07-11 02:25:18.613936] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:28.213 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:28.213 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:19:28.472 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:28.472 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:19:28.472 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:19:28.472 02:25:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:28.472 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:28.472 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:28.472 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:28.472 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:28.472 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:28.472 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:28.472 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:28.472 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:28.472 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:28.472 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.472 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.472 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:28.472 "name": "Existed_Raid", 00:19:28.472 "uuid": "b5d01b42-970d-499b-b78d-b60844689063", 00:19:28.472 "strip_size_kb": 0, 00:19:28.472 "state": "online", 00:19:28.472 "raid_level": "raid1", 00:19:28.472 "superblock": false, 00:19:28.472 "num_base_bdevs": 3, 00:19:28.472 "num_base_bdevs_discovered": 2, 00:19:28.472 "num_base_bdevs_operational": 2, 00:19:28.472 "base_bdevs_list": [ 00:19:28.472 { 00:19:28.472 "name": null, 00:19:28.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.472 "is_configured": false, 00:19:28.472 "data_offset": 0, 00:19:28.472 "data_size": 65536 00:19:28.472 }, 00:19:28.472 { 00:19:28.472 "name": "BaseBdev2", 00:19:28.473 "uuid": "5d27e9a5-a9f3-4b86-806f-afb01288ef7d", 00:19:28.473 "is_configured": true, 00:19:28.473 "data_offset": 0, 00:19:28.473 "data_size": 65536 00:19:28.473 }, 00:19:28.473 { 00:19:28.473 "name": "BaseBdev3", 00:19:28.473 "uuid": "8e5a4c42-5499-4504-ab24-2c22226122d3", 00:19:28.473 "is_configured": true, 00:19:28.473 "data_offset": 0, 00:19:28.473 "data_size": 65536 00:19:28.473 } 00:19:28.473 ] 00:19:28.473 }' 00:19:28.473 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:28.473 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.410 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:29.410 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:29.410 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.410 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:29.675 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:29.675 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:19:29.675 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:29.675 [2024-07-11 02:25:20.063638] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:29.675 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:29.675 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:29.675 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.675 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:30.243 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:30.243 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:30.243 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:30.502 [2024-07-11 02:25:20.840801] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:30.502 [2024-07-11 02:25:20.840878] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:30.502 [2024-07-11 02:25:20.851300] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:30.502 [2024-07-11 02:25:20.851331] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:30.502 [2024-07-11 02:25:20.851342] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a43cf0 name Existed_Raid, state offline 00:19:30.502 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:30.502 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:30.502 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.502 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:30.762 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:30.762 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:30.762 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:19:30.762 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:30.762 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:30.762 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:31.021 BaseBdev2 00:19:31.021 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:31.021 02:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:31.021 02:25:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:31.021 02:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:31.021 02:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:31.021 02:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:31.021 02:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:31.286 02:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:31.546 [ 00:19:31.546 { 00:19:31.546 "name": "BaseBdev2", 00:19:31.546 "aliases": [ 00:19:31.546 "a526f3dc-d733-4922-b299-4802c54d0842" 00:19:31.546 ], 00:19:31.546 "product_name": "Malloc disk", 00:19:31.546 "block_size": 512, 00:19:31.546 "num_blocks": 65536, 00:19:31.546 "uuid": "a526f3dc-d733-4922-b299-4802c54d0842", 00:19:31.546 "assigned_rate_limits": { 00:19:31.546 "rw_ios_per_sec": 0, 00:19:31.546 "rw_mbytes_per_sec": 0, 00:19:31.546 "r_mbytes_per_sec": 0, 00:19:31.546 "w_mbytes_per_sec": 0 00:19:31.546 }, 00:19:31.546 "claimed": false, 00:19:31.546 "zoned": false, 00:19:31.546 "supported_io_types": { 00:19:31.546 "read": true, 00:19:31.546 "write": true, 00:19:31.546 "unmap": true, 00:19:31.546 "flush": true, 00:19:31.546 "reset": true, 00:19:31.546 "nvme_admin": false, 00:19:31.546 "nvme_io": false, 00:19:31.546 "nvme_io_md": false, 00:19:31.546 "write_zeroes": true, 00:19:31.546 "zcopy": true, 00:19:31.546 "get_zone_info": false, 00:19:31.546 "zone_management": false, 00:19:31.546 "zone_append": false, 00:19:31.546 "compare": false, 00:19:31.546 "compare_and_write": false, 00:19:31.546 "abort": true, 00:19:31.546 "seek_hole": false, 00:19:31.546 "seek_data": false, 00:19:31.546 "copy": true, 00:19:31.546 "nvme_iov_md": false 00:19:31.546 }, 00:19:31.546 "memory_domains": [ 00:19:31.546 { 00:19:31.546 "dma_device_id": "system", 00:19:31.546 "dma_device_type": 1 00:19:31.546 }, 00:19:31.546 { 00:19:31.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.546 "dma_device_type": 2 00:19:31.546 } 00:19:31.546 ], 00:19:31.546 "driver_specific": {} 00:19:31.546 } 00:19:31.546 ] 00:19:31.546 02:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:31.546 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:31.546 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:31.546 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:31.805 BaseBdev3 00:19:31.805 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:31.805 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:31.805 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:31.805 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:31.805 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:31.805 02:25:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:31.805 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:32.064 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:32.323 [ 00:19:32.323 { 00:19:32.323 "name": "BaseBdev3", 00:19:32.323 "aliases": [ 00:19:32.323 "becb80be-de76-43fb-aa09-fd9ceabf3d4e" 00:19:32.323 ], 00:19:32.323 "product_name": "Malloc disk", 00:19:32.323 "block_size": 512, 00:19:32.323 "num_blocks": 65536, 00:19:32.323 "uuid": "becb80be-de76-43fb-aa09-fd9ceabf3d4e", 00:19:32.323 "assigned_rate_limits": { 00:19:32.323 "rw_ios_per_sec": 0, 00:19:32.323 "rw_mbytes_per_sec": 0, 00:19:32.323 "r_mbytes_per_sec": 0, 00:19:32.323 "w_mbytes_per_sec": 0 00:19:32.323 }, 00:19:32.323 "claimed": false, 00:19:32.323 "zoned": false, 00:19:32.323 "supported_io_types": { 00:19:32.323 "read": true, 00:19:32.323 "write": true, 00:19:32.323 "unmap": true, 00:19:32.323 "flush": true, 00:19:32.323 "reset": true, 00:19:32.323 "nvme_admin": false, 00:19:32.323 "nvme_io": false, 00:19:32.323 "nvme_io_md": false, 00:19:32.323 "write_zeroes": true, 00:19:32.323 "zcopy": true, 00:19:32.323 "get_zone_info": false, 00:19:32.323 "zone_management": false, 00:19:32.323 "zone_append": false, 00:19:32.323 "compare": false, 00:19:32.323 "compare_and_write": false, 00:19:32.323 "abort": true, 00:19:32.323 "seek_hole": false, 00:19:32.323 "seek_data": false, 00:19:32.323 "copy": true, 00:19:32.323 "nvme_iov_md": false 00:19:32.324 }, 00:19:32.324 "memory_domains": [ 00:19:32.324 { 00:19:32.324 "dma_device_id": "system", 00:19:32.324 "dma_device_type": 1 00:19:32.324 }, 00:19:32.324 { 00:19:32.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.324 "dma_device_type": 2 00:19:32.324 } 00:19:32.324 ], 00:19:32.324 "driver_specific": {} 00:19:32.324 } 00:19:32.324 ] 00:19:32.324 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:32.324 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:32.324 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:32.324 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:32.583 [2024-07-11 02:25:22.818051] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:32.583 [2024-07-11 02:25:22.818090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:32.583 [2024-07-11 02:25:22.818111] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:32.583 [2024-07-11 02:25:22.819397] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:32.583 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:32.583 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:32.583 02:25:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:32.583 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:32.583 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:32.583 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:32.583 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:32.583 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:32.583 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:32.583 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:32.583 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.583 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.842 02:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:32.842 "name": "Existed_Raid", 00:19:32.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.842 "strip_size_kb": 0, 00:19:32.842 "state": "configuring", 00:19:32.842 "raid_level": "raid1", 00:19:32.842 "superblock": false, 00:19:32.842 "num_base_bdevs": 3, 00:19:32.842 "num_base_bdevs_discovered": 2, 00:19:32.842 "num_base_bdevs_operational": 3, 00:19:32.842 "base_bdevs_list": [ 00:19:32.842 { 00:19:32.842 "name": "BaseBdev1", 00:19:32.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.842 "is_configured": false, 00:19:32.842 "data_offset": 0, 00:19:32.842 "data_size": 0 00:19:32.842 }, 00:19:32.842 { 00:19:32.842 "name": "BaseBdev2", 00:19:32.842 "uuid": "a526f3dc-d733-4922-b299-4802c54d0842", 00:19:32.842 "is_configured": true, 00:19:32.842 "data_offset": 0, 00:19:32.842 "data_size": 65536 00:19:32.842 }, 00:19:32.842 { 00:19:32.842 "name": "BaseBdev3", 00:19:32.842 "uuid": "becb80be-de76-43fb-aa09-fd9ceabf3d4e", 00:19:32.842 "is_configured": true, 00:19:32.842 "data_offset": 0, 00:19:32.842 "data_size": 65536 00:19:32.842 } 00:19:32.842 ] 00:19:32.842 }' 00:19:32.842 02:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:32.842 02:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.408 02:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:33.666 [2024-07-11 02:25:23.965083] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:33.666 02:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:33.666 02:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:33.666 02:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:33.666 02:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:33.666 02:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:33.666 02:25:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:33.666 02:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:33.666 02:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:33.666 02:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:33.666 02:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:33.666 02:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.666 02:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:33.926 02:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:33.926 "name": "Existed_Raid", 00:19:33.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.926 "strip_size_kb": 0, 00:19:33.926 "state": "configuring", 00:19:33.926 "raid_level": "raid1", 00:19:33.926 "superblock": false, 00:19:33.926 "num_base_bdevs": 3, 00:19:33.926 "num_base_bdevs_discovered": 1, 00:19:33.926 "num_base_bdevs_operational": 3, 00:19:33.926 "base_bdevs_list": [ 00:19:33.926 { 00:19:33.926 "name": "BaseBdev1", 00:19:33.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.926 "is_configured": false, 00:19:33.926 "data_offset": 0, 00:19:33.926 "data_size": 0 00:19:33.926 }, 00:19:33.926 { 00:19:33.926 "name": null, 00:19:33.926 "uuid": "a526f3dc-d733-4922-b299-4802c54d0842", 00:19:33.926 "is_configured": false, 00:19:33.926 "data_offset": 0, 00:19:33.926 "data_size": 65536 00:19:33.926 }, 00:19:33.926 { 00:19:33.926 "name": "BaseBdev3", 00:19:33.926 "uuid": "becb80be-de76-43fb-aa09-fd9ceabf3d4e", 00:19:33.926 "is_configured": true, 00:19:33.926 "data_offset": 0, 00:19:33.926 "data_size": 65536 00:19:33.926 } 00:19:33.926 ] 00:19:33.926 }' 00:19:33.926 02:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:33.926 02:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.863 02:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.863 02:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:35.123 02:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:35.123 02:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:35.382 [2024-07-11 02:25:25.617727] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:35.382 BaseBdev1 00:19:35.382 02:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:35.382 02:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:35.382 02:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:35.382 02:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:35.382 02:25:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:35.382 02:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:35.382 02:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:35.641 02:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:35.900 [ 00:19:35.900 { 00:19:35.900 "name": "BaseBdev1", 00:19:35.900 "aliases": [ 00:19:35.900 "637bccbd-fe18-4720-8185-b0eef49abd0a" 00:19:35.901 ], 00:19:35.901 "product_name": "Malloc disk", 00:19:35.901 "block_size": 512, 00:19:35.901 "num_blocks": 65536, 00:19:35.901 "uuid": "637bccbd-fe18-4720-8185-b0eef49abd0a", 00:19:35.901 "assigned_rate_limits": { 00:19:35.901 "rw_ios_per_sec": 0, 00:19:35.901 "rw_mbytes_per_sec": 0, 00:19:35.901 "r_mbytes_per_sec": 0, 00:19:35.901 "w_mbytes_per_sec": 0 00:19:35.901 }, 00:19:35.901 "claimed": true, 00:19:35.901 "claim_type": "exclusive_write", 00:19:35.901 "zoned": false, 00:19:35.901 "supported_io_types": { 00:19:35.901 "read": true, 00:19:35.901 "write": true, 00:19:35.901 "unmap": true, 00:19:35.901 "flush": true, 00:19:35.901 "reset": true, 00:19:35.901 "nvme_admin": false, 00:19:35.901 "nvme_io": false, 00:19:35.901 "nvme_io_md": false, 00:19:35.901 "write_zeroes": true, 00:19:35.901 "zcopy": true, 00:19:35.901 "get_zone_info": false, 00:19:35.901 "zone_management": false, 00:19:35.901 "zone_append": false, 00:19:35.901 "compare": false, 00:19:35.901 "compare_and_write": false, 00:19:35.901 "abort": true, 00:19:35.901 "seek_hole": false, 00:19:35.901 "seek_data": false, 00:19:35.901 "copy": true, 00:19:35.901 "nvme_iov_md": false 00:19:35.901 }, 00:19:35.901 "memory_domains": [ 00:19:35.901 { 00:19:35.901 "dma_device_id": "system", 00:19:35.901 "dma_device_type": 1 00:19:35.901 }, 00:19:35.901 { 00:19:35.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.901 "dma_device_type": 2 00:19:35.901 } 00:19:35.901 ], 00:19:35.901 "driver_specific": {} 00:19:35.901 } 00:19:35.901 ] 00:19:35.901 02:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:35.901 02:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:35.901 02:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:35.901 02:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:35.901 02:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:35.901 02:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:35.901 02:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:35.901 02:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:35.901 02:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:35.901 02:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:35.901 02:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:35.901 02:25:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.901 02:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:36.160 02:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:36.160 "name": "Existed_Raid", 00:19:36.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.160 "strip_size_kb": 0, 00:19:36.160 "state": "configuring", 00:19:36.160 "raid_level": "raid1", 00:19:36.160 "superblock": false, 00:19:36.160 "num_base_bdevs": 3, 00:19:36.160 "num_base_bdevs_discovered": 2, 00:19:36.160 "num_base_bdevs_operational": 3, 00:19:36.160 "base_bdevs_list": [ 00:19:36.160 { 00:19:36.160 "name": "BaseBdev1", 00:19:36.160 "uuid": "637bccbd-fe18-4720-8185-b0eef49abd0a", 00:19:36.160 "is_configured": true, 00:19:36.160 "data_offset": 0, 00:19:36.160 "data_size": 65536 00:19:36.160 }, 00:19:36.160 { 00:19:36.160 "name": null, 00:19:36.160 "uuid": "a526f3dc-d733-4922-b299-4802c54d0842", 00:19:36.160 "is_configured": false, 00:19:36.160 "data_offset": 0, 00:19:36.160 "data_size": 65536 00:19:36.160 }, 00:19:36.160 { 00:19:36.160 "name": "BaseBdev3", 00:19:36.160 "uuid": "becb80be-de76-43fb-aa09-fd9ceabf3d4e", 00:19:36.160 "is_configured": true, 00:19:36.160 "data_offset": 0, 00:19:36.160 "data_size": 65536 00:19:36.160 } 00:19:36.160 ] 00:19:36.160 }' 00:19:36.160 02:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:36.160 02:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.726 02:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.726 02:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:36.983 02:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:36.983 02:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:37.242 [2024-07-11 02:25:27.494751] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:37.242 02:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:37.242 02:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:37.242 02:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:37.242 02:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:37.242 02:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:37.242 02:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:37.242 02:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:37.242 02:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:37.242 02:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:37.242 
02:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:37.242 02:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.242 02:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:37.501 02:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:37.501 "name": "Existed_Raid", 00:19:37.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.501 "strip_size_kb": 0, 00:19:37.501 "state": "configuring", 00:19:37.501 "raid_level": "raid1", 00:19:37.501 "superblock": false, 00:19:37.501 "num_base_bdevs": 3, 00:19:37.501 "num_base_bdevs_discovered": 1, 00:19:37.501 "num_base_bdevs_operational": 3, 00:19:37.501 "base_bdevs_list": [ 00:19:37.501 { 00:19:37.501 "name": "BaseBdev1", 00:19:37.501 "uuid": "637bccbd-fe18-4720-8185-b0eef49abd0a", 00:19:37.501 "is_configured": true, 00:19:37.501 "data_offset": 0, 00:19:37.501 "data_size": 65536 00:19:37.501 }, 00:19:37.501 { 00:19:37.501 "name": null, 00:19:37.501 "uuid": "a526f3dc-d733-4922-b299-4802c54d0842", 00:19:37.501 "is_configured": false, 00:19:37.501 "data_offset": 0, 00:19:37.501 "data_size": 65536 00:19:37.501 }, 00:19:37.501 { 00:19:37.501 "name": null, 00:19:37.501 "uuid": "becb80be-de76-43fb-aa09-fd9ceabf3d4e", 00:19:37.501 "is_configured": false, 00:19:37.501 "data_offset": 0, 00:19:37.501 "data_size": 65536 00:19:37.501 } 00:19:37.501 ] 00:19:37.501 }' 00:19:37.501 02:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:37.501 02:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.067 02:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.067 02:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:38.325 02:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:38.325 02:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:38.585 [2024-07-11 02:25:28.854422] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:38.585 02:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:38.585 02:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:38.585 02:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:38.585 02:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:38.585 02:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:38.585 02:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:38.585 02:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:38.585 02:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:19:38.585 02:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:38.585 02:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:38.585 02:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:38.585 02:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.844 02:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:38.844 "name": "Existed_Raid", 00:19:38.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.844 "strip_size_kb": 0, 00:19:38.844 "state": "configuring", 00:19:38.844 "raid_level": "raid1", 00:19:38.844 "superblock": false, 00:19:38.844 "num_base_bdevs": 3, 00:19:38.844 "num_base_bdevs_discovered": 2, 00:19:38.844 "num_base_bdevs_operational": 3, 00:19:38.844 "base_bdevs_list": [ 00:19:38.844 { 00:19:38.844 "name": "BaseBdev1", 00:19:38.844 "uuid": "637bccbd-fe18-4720-8185-b0eef49abd0a", 00:19:38.844 "is_configured": true, 00:19:38.844 "data_offset": 0, 00:19:38.844 "data_size": 65536 00:19:38.844 }, 00:19:38.844 { 00:19:38.844 "name": null, 00:19:38.844 "uuid": "a526f3dc-d733-4922-b299-4802c54d0842", 00:19:38.844 "is_configured": false, 00:19:38.844 "data_offset": 0, 00:19:38.844 "data_size": 65536 00:19:38.844 }, 00:19:38.844 { 00:19:38.844 "name": "BaseBdev3", 00:19:38.844 "uuid": "becb80be-de76-43fb-aa09-fd9ceabf3d4e", 00:19:38.844 "is_configured": true, 00:19:38.844 "data_offset": 0, 00:19:38.844 "data_size": 65536 00:19:38.844 } 00:19:38.844 ] 00:19:38.844 }' 00:19:38.844 02:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:38.844 02:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.411 02:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.411 02:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:39.670 02:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:39.670 02:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:39.928 [2024-07-11 02:25:30.214049] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:39.928 02:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:39.928 02:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:39.928 02:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:39.929 02:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:39.929 02:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:39.929 02:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:39.929 02:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:39.929 
02:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:39.929 02:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:39.929 02:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:39.929 02:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.929 02:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:40.187 02:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:40.187 "name": "Existed_Raid", 00:19:40.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.187 "strip_size_kb": 0, 00:19:40.187 "state": "configuring", 00:19:40.187 "raid_level": "raid1", 00:19:40.187 "superblock": false, 00:19:40.187 "num_base_bdevs": 3, 00:19:40.187 "num_base_bdevs_discovered": 1, 00:19:40.187 "num_base_bdevs_operational": 3, 00:19:40.187 "base_bdevs_list": [ 00:19:40.187 { 00:19:40.187 "name": null, 00:19:40.187 "uuid": "637bccbd-fe18-4720-8185-b0eef49abd0a", 00:19:40.187 "is_configured": false, 00:19:40.187 "data_offset": 0, 00:19:40.187 "data_size": 65536 00:19:40.187 }, 00:19:40.187 { 00:19:40.187 "name": null, 00:19:40.187 "uuid": "a526f3dc-d733-4922-b299-4802c54d0842", 00:19:40.187 "is_configured": false, 00:19:40.187 "data_offset": 0, 00:19:40.187 "data_size": 65536 00:19:40.187 }, 00:19:40.187 { 00:19:40.187 "name": "BaseBdev3", 00:19:40.187 "uuid": "becb80be-de76-43fb-aa09-fd9ceabf3d4e", 00:19:40.188 "is_configured": true, 00:19:40.188 "data_offset": 0, 00:19:40.188 "data_size": 65536 00:19:40.188 } 00:19:40.188 ] 00:19:40.188 }' 00:19:40.188 02:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:40.188 02:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.755 02:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.755 02:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:41.013 02:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:41.013 02:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:41.272 [2024-07-11 02:25:31.551997] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:41.272 02:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:41.272 02:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:41.272 02:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:41.272 02:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:41.272 02:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:41.272 02:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:19:41.272 02:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:41.272 02:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:41.272 02:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:41.272 02:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:41.272 02:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.272 02:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:41.531 02:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:41.531 "name": "Existed_Raid", 00:19:41.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.531 "strip_size_kb": 0, 00:19:41.531 "state": "configuring", 00:19:41.531 "raid_level": "raid1", 00:19:41.531 "superblock": false, 00:19:41.531 "num_base_bdevs": 3, 00:19:41.531 "num_base_bdevs_discovered": 2, 00:19:41.531 "num_base_bdevs_operational": 3, 00:19:41.531 "base_bdevs_list": [ 00:19:41.531 { 00:19:41.531 "name": null, 00:19:41.531 "uuid": "637bccbd-fe18-4720-8185-b0eef49abd0a", 00:19:41.531 "is_configured": false, 00:19:41.531 "data_offset": 0, 00:19:41.531 "data_size": 65536 00:19:41.531 }, 00:19:41.531 { 00:19:41.531 "name": "BaseBdev2", 00:19:41.531 "uuid": "a526f3dc-d733-4922-b299-4802c54d0842", 00:19:41.531 "is_configured": true, 00:19:41.531 "data_offset": 0, 00:19:41.531 "data_size": 65536 00:19:41.531 }, 00:19:41.531 { 00:19:41.531 "name": "BaseBdev3", 00:19:41.531 "uuid": "becb80be-de76-43fb-aa09-fd9ceabf3d4e", 00:19:41.531 "is_configured": true, 00:19:41.531 "data_offset": 0, 00:19:41.531 "data_size": 65536 00:19:41.531 } 00:19:41.531 ] 00:19:41.531 }' 00:19:41.531 02:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:41.531 02:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.098 02:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:42.098 02:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.357 02:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:42.357 02:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.357 02:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:42.615 02:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 637bccbd-fe18-4720-8185-b0eef49abd0a 00:19:42.874 [2024-07-11 02:25:33.148669] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:42.874 [2024-07-11 02:25:33.148709] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1892ff0 00:19:42.874 [2024-07-11 02:25:33.148718] 
bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:42.874 [2024-07-11 02:25:33.148940] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1897230 00:19:42.874 [2024-07-11 02:25:33.149076] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1892ff0 00:19:42.874 [2024-07-11 02:25:33.149087] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1892ff0 00:19:42.874 [2024-07-11 02:25:33.149250] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:42.874 NewBaseBdev 00:19:42.874 02:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:42.874 02:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:19:42.874 02:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:42.874 02:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:42.874 02:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:42.874 02:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:42.874 02:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:43.133 02:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:43.391 [ 00:19:43.391 { 00:19:43.391 "name": "NewBaseBdev", 00:19:43.391 "aliases": [ 00:19:43.391 "637bccbd-fe18-4720-8185-b0eef49abd0a" 00:19:43.391 ], 00:19:43.391 "product_name": "Malloc disk", 00:19:43.391 "block_size": 512, 00:19:43.391 "num_blocks": 65536, 00:19:43.391 "uuid": "637bccbd-fe18-4720-8185-b0eef49abd0a", 00:19:43.391 "assigned_rate_limits": { 00:19:43.391 "rw_ios_per_sec": 0, 00:19:43.391 "rw_mbytes_per_sec": 0, 00:19:43.391 "r_mbytes_per_sec": 0, 00:19:43.391 "w_mbytes_per_sec": 0 00:19:43.391 }, 00:19:43.391 "claimed": true, 00:19:43.391 "claim_type": "exclusive_write", 00:19:43.391 "zoned": false, 00:19:43.391 "supported_io_types": { 00:19:43.391 "read": true, 00:19:43.391 "write": true, 00:19:43.391 "unmap": true, 00:19:43.391 "flush": true, 00:19:43.391 "reset": true, 00:19:43.391 "nvme_admin": false, 00:19:43.391 "nvme_io": false, 00:19:43.391 "nvme_io_md": false, 00:19:43.391 "write_zeroes": true, 00:19:43.391 "zcopy": true, 00:19:43.391 "get_zone_info": false, 00:19:43.391 "zone_management": false, 00:19:43.391 "zone_append": false, 00:19:43.391 "compare": false, 00:19:43.391 "compare_and_write": false, 00:19:43.391 "abort": true, 00:19:43.391 "seek_hole": false, 00:19:43.391 "seek_data": false, 00:19:43.391 "copy": true, 00:19:43.391 "nvme_iov_md": false 00:19:43.391 }, 00:19:43.391 "memory_domains": [ 00:19:43.391 { 00:19:43.391 "dma_device_id": "system", 00:19:43.391 "dma_device_type": 1 00:19:43.391 }, 00:19:43.391 { 00:19:43.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.391 "dma_device_type": 2 00:19:43.391 } 00:19:43.391 ], 00:19:43.391 "driver_specific": {} 00:19:43.391 } 00:19:43.391 ] 00:19:43.391 02:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:43.391 02:25:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:43.391 02:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:43.391 02:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:43.391 02:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:43.391 02:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:43.391 02:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:43.391 02:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:43.391 02:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:43.391 02:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:43.391 02:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:43.391 02:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.391 02:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:43.649 02:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:43.649 "name": "Existed_Raid", 00:19:43.649 "uuid": "dbd3a384-fba2-482f-a2d4-7dca76a46969", 00:19:43.649 "strip_size_kb": 0, 00:19:43.649 "state": "online", 00:19:43.649 "raid_level": "raid1", 00:19:43.649 "superblock": false, 00:19:43.649 "num_base_bdevs": 3, 00:19:43.649 "num_base_bdevs_discovered": 3, 00:19:43.649 "num_base_bdevs_operational": 3, 00:19:43.649 "base_bdevs_list": [ 00:19:43.649 { 00:19:43.649 "name": "NewBaseBdev", 00:19:43.649 "uuid": "637bccbd-fe18-4720-8185-b0eef49abd0a", 00:19:43.649 "is_configured": true, 00:19:43.649 "data_offset": 0, 00:19:43.649 "data_size": 65536 00:19:43.649 }, 00:19:43.649 { 00:19:43.649 "name": "BaseBdev2", 00:19:43.649 "uuid": "a526f3dc-d733-4922-b299-4802c54d0842", 00:19:43.649 "is_configured": true, 00:19:43.649 "data_offset": 0, 00:19:43.649 "data_size": 65536 00:19:43.649 }, 00:19:43.649 { 00:19:43.649 "name": "BaseBdev3", 00:19:43.649 "uuid": "becb80be-de76-43fb-aa09-fd9ceabf3d4e", 00:19:43.649 "is_configured": true, 00:19:43.649 "data_offset": 0, 00:19:43.649 "data_size": 65536 00:19:43.649 } 00:19:43.649 ] 00:19:43.649 }' 00:19:43.649 02:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:43.649 02:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.217 02:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:44.217 02:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:44.217 02:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:44.217 02:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:44.217 02:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:44.217 02:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:44.217 02:25:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:44.217 02:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:44.476 [2024-07-11 02:25:34.789305] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:44.476 02:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:44.476 "name": "Existed_Raid", 00:19:44.476 "aliases": [ 00:19:44.476 "dbd3a384-fba2-482f-a2d4-7dca76a46969" 00:19:44.476 ], 00:19:44.477 "product_name": "Raid Volume", 00:19:44.477 "block_size": 512, 00:19:44.477 "num_blocks": 65536, 00:19:44.477 "uuid": "dbd3a384-fba2-482f-a2d4-7dca76a46969", 00:19:44.477 "assigned_rate_limits": { 00:19:44.477 "rw_ios_per_sec": 0, 00:19:44.477 "rw_mbytes_per_sec": 0, 00:19:44.477 "r_mbytes_per_sec": 0, 00:19:44.477 "w_mbytes_per_sec": 0 00:19:44.477 }, 00:19:44.477 "claimed": false, 00:19:44.477 "zoned": false, 00:19:44.477 "supported_io_types": { 00:19:44.477 "read": true, 00:19:44.477 "write": true, 00:19:44.477 "unmap": false, 00:19:44.477 "flush": false, 00:19:44.477 "reset": true, 00:19:44.477 "nvme_admin": false, 00:19:44.477 "nvme_io": false, 00:19:44.477 "nvme_io_md": false, 00:19:44.477 "write_zeroes": true, 00:19:44.477 "zcopy": false, 00:19:44.477 "get_zone_info": false, 00:19:44.477 "zone_management": false, 00:19:44.477 "zone_append": false, 00:19:44.477 "compare": false, 00:19:44.477 "compare_and_write": false, 00:19:44.477 "abort": false, 00:19:44.477 "seek_hole": false, 00:19:44.477 "seek_data": false, 00:19:44.477 "copy": false, 00:19:44.477 "nvme_iov_md": false 00:19:44.477 }, 00:19:44.477 "memory_domains": [ 00:19:44.477 { 00:19:44.477 "dma_device_id": "system", 00:19:44.477 "dma_device_type": 1 00:19:44.477 }, 00:19:44.477 { 00:19:44.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:44.477 "dma_device_type": 2 00:19:44.477 }, 00:19:44.477 { 00:19:44.477 "dma_device_id": "system", 00:19:44.477 "dma_device_type": 1 00:19:44.477 }, 00:19:44.477 { 00:19:44.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:44.477 "dma_device_type": 2 00:19:44.477 }, 00:19:44.477 { 00:19:44.477 "dma_device_id": "system", 00:19:44.477 "dma_device_type": 1 00:19:44.477 }, 00:19:44.477 { 00:19:44.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:44.477 "dma_device_type": 2 00:19:44.477 } 00:19:44.477 ], 00:19:44.477 "driver_specific": { 00:19:44.477 "raid": { 00:19:44.477 "uuid": "dbd3a384-fba2-482f-a2d4-7dca76a46969", 00:19:44.477 "strip_size_kb": 0, 00:19:44.477 "state": "online", 00:19:44.477 "raid_level": "raid1", 00:19:44.477 "superblock": false, 00:19:44.477 "num_base_bdevs": 3, 00:19:44.477 "num_base_bdevs_discovered": 3, 00:19:44.477 "num_base_bdevs_operational": 3, 00:19:44.477 "base_bdevs_list": [ 00:19:44.477 { 00:19:44.477 "name": "NewBaseBdev", 00:19:44.477 "uuid": "637bccbd-fe18-4720-8185-b0eef49abd0a", 00:19:44.477 "is_configured": true, 00:19:44.477 "data_offset": 0, 00:19:44.477 "data_size": 65536 00:19:44.477 }, 00:19:44.477 { 00:19:44.477 "name": "BaseBdev2", 00:19:44.477 "uuid": "a526f3dc-d733-4922-b299-4802c54d0842", 00:19:44.477 "is_configured": true, 00:19:44.477 "data_offset": 0, 00:19:44.477 "data_size": 65536 00:19:44.477 }, 00:19:44.477 { 00:19:44.477 "name": "BaseBdev3", 00:19:44.477 "uuid": "becb80be-de76-43fb-aa09-fd9ceabf3d4e", 00:19:44.477 "is_configured": true, 00:19:44.477 "data_offset": 0, 00:19:44.477 "data_size": 
65536 00:19:44.477 } 00:19:44.477 ] 00:19:44.477 } 00:19:44.477 } 00:19:44.477 }' 00:19:44.477 02:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:44.477 02:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:44.477 BaseBdev2 00:19:44.477 BaseBdev3' 00:19:44.477 02:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:44.477 02:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:44.477 02:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:44.736 02:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:44.736 "name": "NewBaseBdev", 00:19:44.736 "aliases": [ 00:19:44.736 "637bccbd-fe18-4720-8185-b0eef49abd0a" 00:19:44.736 ], 00:19:44.736 "product_name": "Malloc disk", 00:19:44.736 "block_size": 512, 00:19:44.736 "num_blocks": 65536, 00:19:44.736 "uuid": "637bccbd-fe18-4720-8185-b0eef49abd0a", 00:19:44.736 "assigned_rate_limits": { 00:19:44.736 "rw_ios_per_sec": 0, 00:19:44.736 "rw_mbytes_per_sec": 0, 00:19:44.736 "r_mbytes_per_sec": 0, 00:19:44.736 "w_mbytes_per_sec": 0 00:19:44.736 }, 00:19:44.736 "claimed": true, 00:19:44.736 "claim_type": "exclusive_write", 00:19:44.736 "zoned": false, 00:19:44.736 "supported_io_types": { 00:19:44.736 "read": true, 00:19:44.736 "write": true, 00:19:44.736 "unmap": true, 00:19:44.736 "flush": true, 00:19:44.736 "reset": true, 00:19:44.736 "nvme_admin": false, 00:19:44.736 "nvme_io": false, 00:19:44.736 "nvme_io_md": false, 00:19:44.736 "write_zeroes": true, 00:19:44.736 "zcopy": true, 00:19:44.736 "get_zone_info": false, 00:19:44.736 "zone_management": false, 00:19:44.736 "zone_append": false, 00:19:44.736 "compare": false, 00:19:44.736 "compare_and_write": false, 00:19:44.736 "abort": true, 00:19:44.736 "seek_hole": false, 00:19:44.736 "seek_data": false, 00:19:44.736 "copy": true, 00:19:44.736 "nvme_iov_md": false 00:19:44.736 }, 00:19:44.736 "memory_domains": [ 00:19:44.736 { 00:19:44.736 "dma_device_id": "system", 00:19:44.736 "dma_device_type": 1 00:19:44.736 }, 00:19:44.736 { 00:19:44.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:44.736 "dma_device_type": 2 00:19:44.736 } 00:19:44.736 ], 00:19:44.736 "driver_specific": {} 00:19:44.736 }' 00:19:44.736 02:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:44.736 02:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:44.995 02:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:44.995 02:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:44.995 02:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:44.995 02:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:44.995 02:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:44.995 02:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:45.254 02:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:45.254 02:25:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:45.254 02:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:45.254 02:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:45.254 02:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:45.254 02:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:45.254 02:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:45.512 02:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:45.513 "name": "BaseBdev2", 00:19:45.513 "aliases": [ 00:19:45.513 "a526f3dc-d733-4922-b299-4802c54d0842" 00:19:45.513 ], 00:19:45.513 "product_name": "Malloc disk", 00:19:45.513 "block_size": 512, 00:19:45.513 "num_blocks": 65536, 00:19:45.513 "uuid": "a526f3dc-d733-4922-b299-4802c54d0842", 00:19:45.513 "assigned_rate_limits": { 00:19:45.513 "rw_ios_per_sec": 0, 00:19:45.513 "rw_mbytes_per_sec": 0, 00:19:45.513 "r_mbytes_per_sec": 0, 00:19:45.513 "w_mbytes_per_sec": 0 00:19:45.513 }, 00:19:45.513 "claimed": true, 00:19:45.513 "claim_type": "exclusive_write", 00:19:45.513 "zoned": false, 00:19:45.513 "supported_io_types": { 00:19:45.513 "read": true, 00:19:45.513 "write": true, 00:19:45.513 "unmap": true, 00:19:45.513 "flush": true, 00:19:45.513 "reset": true, 00:19:45.513 "nvme_admin": false, 00:19:45.513 "nvme_io": false, 00:19:45.513 "nvme_io_md": false, 00:19:45.513 "write_zeroes": true, 00:19:45.513 "zcopy": true, 00:19:45.513 "get_zone_info": false, 00:19:45.513 "zone_management": false, 00:19:45.513 "zone_append": false, 00:19:45.513 "compare": false, 00:19:45.513 "compare_and_write": false, 00:19:45.513 "abort": true, 00:19:45.513 "seek_hole": false, 00:19:45.513 "seek_data": false, 00:19:45.513 "copy": true, 00:19:45.513 "nvme_iov_md": false 00:19:45.513 }, 00:19:45.513 "memory_domains": [ 00:19:45.513 { 00:19:45.513 "dma_device_id": "system", 00:19:45.513 "dma_device_type": 1 00:19:45.513 }, 00:19:45.513 { 00:19:45.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:45.513 "dma_device_type": 2 00:19:45.513 } 00:19:45.513 ], 00:19:45.513 "driver_specific": {} 00:19:45.513 }' 00:19:45.513 02:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:45.513 02:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:45.772 02:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:45.772 02:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:45.772 02:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:45.772 02:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:45.772 02:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:45.772 02:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:45.772 02:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:45.772 02:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:46.030 02:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:46.030 02:25:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:46.030 02:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:46.030 02:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:46.030 02:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:46.289 02:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:46.289 "name": "BaseBdev3", 00:19:46.289 "aliases": [ 00:19:46.289 "becb80be-de76-43fb-aa09-fd9ceabf3d4e" 00:19:46.289 ], 00:19:46.289 "product_name": "Malloc disk", 00:19:46.289 "block_size": 512, 00:19:46.289 "num_blocks": 65536, 00:19:46.289 "uuid": "becb80be-de76-43fb-aa09-fd9ceabf3d4e", 00:19:46.289 "assigned_rate_limits": { 00:19:46.289 "rw_ios_per_sec": 0, 00:19:46.289 "rw_mbytes_per_sec": 0, 00:19:46.289 "r_mbytes_per_sec": 0, 00:19:46.289 "w_mbytes_per_sec": 0 00:19:46.289 }, 00:19:46.289 "claimed": true, 00:19:46.289 "claim_type": "exclusive_write", 00:19:46.289 "zoned": false, 00:19:46.289 "supported_io_types": { 00:19:46.289 "read": true, 00:19:46.289 "write": true, 00:19:46.289 "unmap": true, 00:19:46.289 "flush": true, 00:19:46.289 "reset": true, 00:19:46.289 "nvme_admin": false, 00:19:46.289 "nvme_io": false, 00:19:46.289 "nvme_io_md": false, 00:19:46.289 "write_zeroes": true, 00:19:46.289 "zcopy": true, 00:19:46.289 "get_zone_info": false, 00:19:46.289 "zone_management": false, 00:19:46.289 "zone_append": false, 00:19:46.289 "compare": false, 00:19:46.289 "compare_and_write": false, 00:19:46.289 "abort": true, 00:19:46.289 "seek_hole": false, 00:19:46.289 "seek_data": false, 00:19:46.289 "copy": true, 00:19:46.289 "nvme_iov_md": false 00:19:46.289 }, 00:19:46.289 "memory_domains": [ 00:19:46.289 { 00:19:46.289 "dma_device_id": "system", 00:19:46.289 "dma_device_type": 1 00:19:46.289 }, 00:19:46.289 { 00:19:46.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:46.289 "dma_device_type": 2 00:19:46.289 } 00:19:46.289 ], 00:19:46.289 "driver_specific": {} 00:19:46.289 }' 00:19:46.289 02:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:46.289 02:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:46.289 02:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:46.289 02:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:46.289 02:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:46.573 02:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:46.573 02:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:46.573 02:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:46.573 02:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:46.573 02:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:46.573 02:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:46.573 02:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:46.573 02:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:47.161 [2024-07-11 02:25:37.391958] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:47.161 [2024-07-11 02:25:37.391983] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:47.161 [2024-07-11 02:25:37.392034] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:47.161 [2024-07-11 02:25:37.392290] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:47.161 [2024-07-11 02:25:37.392303] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1892ff0 name Existed_Raid, state offline 00:19:47.161 02:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1940326 00:19:47.161 02:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 1940326 ']' 00:19:47.161 02:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 1940326 00:19:47.161 02:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:19:47.161 02:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:47.161 02:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1940326 00:19:47.161 02:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:47.161 02:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:47.161 02:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1940326' 00:19:47.161 killing process with pid 1940326 00:19:47.161 02:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 1940326 00:19:47.161 [2024-07-11 02:25:37.474272] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:47.161 02:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 1940326 00:19:47.161 [2024-07-11 02:25:37.500055] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:19:47.421 00:19:47.421 real 0m30.888s 00:19:47.421 user 0m57.258s 00:19:47.421 sys 0m5.507s 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.421 ************************************ 00:19:47.421 END TEST raid_state_function_test 00:19:47.421 ************************************ 00:19:47.421 02:25:37 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:47.421 02:25:37 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:19:47.421 02:25:37 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:47.421 02:25:37 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:47.421 02:25:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:47.421 ************************************ 00:19:47.421 START TEST raid_state_function_test_sb 00:19:47.421 ************************************ 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 true 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1944813 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1944813' 00:19:47.421 Process raid pid: 1944813 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1944813 /var/tmp/spdk-raid.sock 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@829 -- # '[' -z 1944813 ']' 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:47.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:47.421 02:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.421 [2024-07-11 02:25:37.837511] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:19:47.421 [2024-07-11 02:25:37.837580] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:47.681 [2024-07-11 02:25:37.976502] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.681 [2024-07-11 02:25:38.029837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.681 [2024-07-11 02:25:38.103789] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:47.681 [2024-07-11 02:25:38.103823] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:48.618 02:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:48.618 02:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:19:48.618 02:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:48.878 [2024-07-11 02:25:39.265419] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:48.878 [2024-07-11 02:25:39.265463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:48.878 [2024-07-11 02:25:39.265474] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:48.878 [2024-07-11 02:25:39.265485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:48.878 [2024-07-11 02:25:39.265494] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:48.878 [2024-07-11 02:25:39.265505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:48.878 02:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:48.878 02:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:48.878 02:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:48.878 02:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:48.878 02:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
00:19:48.878 02:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:48.878 02:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:48.878 02:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:48.878 02:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:48.878 02:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:48.878 02:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.878 02:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:49.136 02:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:49.136 "name": "Existed_Raid", 00:19:49.136 "uuid": "3e65a795-3921-43b5-838f-6fcd4d1a624a", 00:19:49.136 "strip_size_kb": 0, 00:19:49.136 "state": "configuring", 00:19:49.136 "raid_level": "raid1", 00:19:49.136 "superblock": true, 00:19:49.136 "num_base_bdevs": 3, 00:19:49.136 "num_base_bdevs_discovered": 0, 00:19:49.136 "num_base_bdevs_operational": 3, 00:19:49.136 "base_bdevs_list": [ 00:19:49.136 { 00:19:49.136 "name": "BaseBdev1", 00:19:49.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.136 "is_configured": false, 00:19:49.136 "data_offset": 0, 00:19:49.136 "data_size": 0 00:19:49.136 }, 00:19:49.136 { 00:19:49.136 "name": "BaseBdev2", 00:19:49.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.136 "is_configured": false, 00:19:49.136 "data_offset": 0, 00:19:49.136 "data_size": 0 00:19:49.136 }, 00:19:49.136 { 00:19:49.136 "name": "BaseBdev3", 00:19:49.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.136 "is_configured": false, 00:19:49.136 "data_offset": 0, 00:19:49.136 "data_size": 0 00:19:49.136 } 00:19:49.136 ] 00:19:49.136 }' 00:19:49.136 02:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:49.136 02:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.071 02:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:50.071 [2024-07-11 02:25:40.360159] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:50.071 [2024-07-11 02:25:40.360193] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xebf5a0 name Existed_Raid, state configuring 00:19:50.071 02:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:50.330 [2024-07-11 02:25:40.608842] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:50.330 [2024-07-11 02:25:40.608870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:50.330 [2024-07-11 02:25:40.608880] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:50.330 [2024-07-11 02:25:40.608891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:19:50.330 [2024-07-11 02:25:40.608900] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:50.330 [2024-07-11 02:25:40.608911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:50.330 02:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:50.594 [2024-07-11 02:25:40.807401] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:50.594 BaseBdev1 00:19:50.594 02:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:50.594 02:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:50.594 02:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:50.594 02:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:50.594 02:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:50.594 02:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:50.594 02:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:50.853 02:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:51.111 [ 00:19:51.111 { 00:19:51.111 "name": "BaseBdev1", 00:19:51.111 "aliases": [ 00:19:51.111 "55a483d1-f043-4c96-a3c1-d49bb6c33a48" 00:19:51.111 ], 00:19:51.111 "product_name": "Malloc disk", 00:19:51.111 "block_size": 512, 00:19:51.111 "num_blocks": 65536, 00:19:51.111 "uuid": "55a483d1-f043-4c96-a3c1-d49bb6c33a48", 00:19:51.111 "assigned_rate_limits": { 00:19:51.111 "rw_ios_per_sec": 0, 00:19:51.111 "rw_mbytes_per_sec": 0, 00:19:51.111 "r_mbytes_per_sec": 0, 00:19:51.111 "w_mbytes_per_sec": 0 00:19:51.111 }, 00:19:51.111 "claimed": true, 00:19:51.111 "claim_type": "exclusive_write", 00:19:51.111 "zoned": false, 00:19:51.111 "supported_io_types": { 00:19:51.111 "read": true, 00:19:51.111 "write": true, 00:19:51.111 "unmap": true, 00:19:51.111 "flush": true, 00:19:51.111 "reset": true, 00:19:51.111 "nvme_admin": false, 00:19:51.111 "nvme_io": false, 00:19:51.111 "nvme_io_md": false, 00:19:51.111 "write_zeroes": true, 00:19:51.111 "zcopy": true, 00:19:51.111 "get_zone_info": false, 00:19:51.111 "zone_management": false, 00:19:51.111 "zone_append": false, 00:19:51.111 "compare": false, 00:19:51.111 "compare_and_write": false, 00:19:51.111 "abort": true, 00:19:51.111 "seek_hole": false, 00:19:51.111 "seek_data": false, 00:19:51.111 "copy": true, 00:19:51.111 "nvme_iov_md": false 00:19:51.111 }, 00:19:51.111 "memory_domains": [ 00:19:51.111 { 00:19:51.111 "dma_device_id": "system", 00:19:51.111 "dma_device_type": 1 00:19:51.111 }, 00:19:51.111 { 00:19:51.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:51.111 "dma_device_type": 2 00:19:51.111 } 00:19:51.111 ], 00:19:51.111 "driver_specific": {} 00:19:51.111 } 00:19:51.111 ] 00:19:51.111 02:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 
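(Annotation, not captured trace output.) The sequence just exercised, bdev_malloc_create followed by waitforbdev and then verify_raid_bdev_state, reduces to a handful of standalone RPC calls. A minimal sketch, assuming an SPDK target is already running with its RPC socket at /var/tmp/spdk-raid.sock; the SPDK_DIR value mirrors this CI workspace and is an assumption to adjust for a local checkout:

    SPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk   # assumption: CI layout, adjust locally
    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Create a 32 MB malloc bdev with a 512-byte block size (65536 blocks),
    # the same shape as the BaseBdev1 created above.
    $RPC bdev_malloc_create 32 512 -b BaseBdev1

    # waitforbdev's two-step wait: let bdev examination finish, then poll the
    # bdev by name with a 2000 ms timeout.
    $RPC bdev_wait_for_examine
    $RPC bdev_get_bdevs -b BaseBdev1 -t 2000

    # verify_raid_bdev_state's core query: list all raid bdevs and select the
    # Existed_Raid entry; reading .state off it is a sketch of one assertion.
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'

The jq filters seen elsewhere in the trace, for example '.[0].base_bdevs_list[0].is_configured', follow the same pattern against the bdev_raid_get_bdevs output.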
00:19:51.111 02:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:51.111 02:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:51.111 02:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:51.111 02:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:51.111 02:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:51.111 02:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:51.111 02:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:51.111 02:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:51.111 02:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:51.111 02:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:51.111 02:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.111 02:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.370 02:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:51.370 "name": "Existed_Raid", 00:19:51.370 "uuid": "5912a009-315a-4b44-b778-38e86e59d83e", 00:19:51.370 "strip_size_kb": 0, 00:19:51.370 "state": "configuring", 00:19:51.370 "raid_level": "raid1", 00:19:51.370 "superblock": true, 00:19:51.370 "num_base_bdevs": 3, 00:19:51.370 "num_base_bdevs_discovered": 1, 00:19:51.370 "num_base_bdevs_operational": 3, 00:19:51.370 "base_bdevs_list": [ 00:19:51.370 { 00:19:51.370 "name": "BaseBdev1", 00:19:51.370 "uuid": "55a483d1-f043-4c96-a3c1-d49bb6c33a48", 00:19:51.370 "is_configured": true, 00:19:51.370 "data_offset": 2048, 00:19:51.370 "data_size": 63488 00:19:51.370 }, 00:19:51.370 { 00:19:51.370 "name": "BaseBdev2", 00:19:51.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.370 "is_configured": false, 00:19:51.370 "data_offset": 0, 00:19:51.370 "data_size": 0 00:19:51.370 }, 00:19:51.370 { 00:19:51.370 "name": "BaseBdev3", 00:19:51.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.370 "is_configured": false, 00:19:51.370 "data_offset": 0, 00:19:51.370 "data_size": 0 00:19:51.370 } 00:19:51.370 ] 00:19:51.370 }' 00:19:51.370 02:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:51.370 02:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.938 02:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:52.197 [2024-07-11 02:25:42.379577] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:52.197 [2024-07-11 02:25:42.379613] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xebeed0 name Existed_Raid, state configuring 00:19:52.197 02:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:52.456 [2024-07-11 02:25:42.628268] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:52.456 [2024-07-11 02:25:42.629661] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:52.456 [2024-07-11 02:25:42.629692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:52.456 [2024-07-11 02:25:42.629703] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:52.456 [2024-07-11 02:25:42.629714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:52.456 02:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:52.456 02:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:52.456 02:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:52.456 02:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:52.456 02:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:52.456 02:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:52.456 02:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:52.456 02:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:52.456 02:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:52.456 02:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:52.456 02:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:52.456 02:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:52.456 02:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.456 02:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:52.715 02:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:52.715 "name": "Existed_Raid", 00:19:52.715 "uuid": "92a4baef-931f-4436-8d01-d7377cd781ff", 00:19:52.715 "strip_size_kb": 0, 00:19:52.715 "state": "configuring", 00:19:52.715 "raid_level": "raid1", 00:19:52.715 "superblock": true, 00:19:52.715 "num_base_bdevs": 3, 00:19:52.715 "num_base_bdevs_discovered": 1, 00:19:52.715 "num_base_bdevs_operational": 3, 00:19:52.715 "base_bdevs_list": [ 00:19:52.715 { 00:19:52.715 "name": "BaseBdev1", 00:19:52.715 "uuid": "55a483d1-f043-4c96-a3c1-d49bb6c33a48", 00:19:52.715 "is_configured": true, 00:19:52.715 "data_offset": 2048, 00:19:52.715 "data_size": 63488 00:19:52.715 }, 00:19:52.715 { 00:19:52.715 "name": "BaseBdev2", 00:19:52.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.715 "is_configured": false, 00:19:52.715 "data_offset": 0, 00:19:52.715 "data_size": 0 00:19:52.715 }, 00:19:52.715 { 00:19:52.715 "name": 
"BaseBdev3", 00:19:52.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.715 "is_configured": false, 00:19:52.715 "data_offset": 0, 00:19:52.715 "data_size": 0 00:19:52.715 } 00:19:52.715 ] 00:19:52.715 }' 00:19:52.715 02:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:52.715 02:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.283 02:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:53.283 [2024-07-11 02:25:43.670268] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:53.283 BaseBdev2 00:19:53.283 02:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:53.283 02:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:53.283 02:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:53.283 02:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:53.283 02:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:53.283 02:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:53.283 02:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:53.541 02:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:53.800 [ 00:19:53.800 { 00:19:53.800 "name": "BaseBdev2", 00:19:53.800 "aliases": [ 00:19:53.800 "9bba95a5-7261-45da-93dc-d9ce18b912ed" 00:19:53.800 ], 00:19:53.800 "product_name": "Malloc disk", 00:19:53.800 "block_size": 512, 00:19:53.800 "num_blocks": 65536, 00:19:53.800 "uuid": "9bba95a5-7261-45da-93dc-d9ce18b912ed", 00:19:53.800 "assigned_rate_limits": { 00:19:53.800 "rw_ios_per_sec": 0, 00:19:53.800 "rw_mbytes_per_sec": 0, 00:19:53.800 "r_mbytes_per_sec": 0, 00:19:53.800 "w_mbytes_per_sec": 0 00:19:53.800 }, 00:19:53.800 "claimed": true, 00:19:53.800 "claim_type": "exclusive_write", 00:19:53.800 "zoned": false, 00:19:53.800 "supported_io_types": { 00:19:53.800 "read": true, 00:19:53.800 "write": true, 00:19:53.800 "unmap": true, 00:19:53.800 "flush": true, 00:19:53.800 "reset": true, 00:19:53.800 "nvme_admin": false, 00:19:53.800 "nvme_io": false, 00:19:53.800 "nvme_io_md": false, 00:19:53.801 "write_zeroes": true, 00:19:53.801 "zcopy": true, 00:19:53.801 "get_zone_info": false, 00:19:53.801 "zone_management": false, 00:19:53.801 "zone_append": false, 00:19:53.801 "compare": false, 00:19:53.801 "compare_and_write": false, 00:19:53.801 "abort": true, 00:19:53.801 "seek_hole": false, 00:19:53.801 "seek_data": false, 00:19:53.801 "copy": true, 00:19:53.801 "nvme_iov_md": false 00:19:53.801 }, 00:19:53.801 "memory_domains": [ 00:19:53.801 { 00:19:53.801 "dma_device_id": "system", 00:19:53.801 "dma_device_type": 1 00:19:53.801 }, 00:19:53.801 { 00:19:53.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.801 "dma_device_type": 2 00:19:53.801 } 00:19:53.801 ], 00:19:53.801 "driver_specific": {} 
00:19:53.801 } 00:19:53.801 ] 00:19:53.801 02:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:53.801 02:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:53.801 02:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:53.801 02:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:53.801 02:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:53.801 02:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:53.801 02:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:53.801 02:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:53.801 02:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:53.801 02:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:53.801 02:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:53.801 02:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:53.801 02:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:53.801 02:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:53.801 02:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.061 02:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:54.061 "name": "Existed_Raid", 00:19:54.061 "uuid": "92a4baef-931f-4436-8d01-d7377cd781ff", 00:19:54.061 "strip_size_kb": 0, 00:19:54.061 "state": "configuring", 00:19:54.061 "raid_level": "raid1", 00:19:54.061 "superblock": true, 00:19:54.061 "num_base_bdevs": 3, 00:19:54.061 "num_base_bdevs_discovered": 2, 00:19:54.061 "num_base_bdevs_operational": 3, 00:19:54.061 "base_bdevs_list": [ 00:19:54.061 { 00:19:54.061 "name": "BaseBdev1", 00:19:54.061 "uuid": "55a483d1-f043-4c96-a3c1-d49bb6c33a48", 00:19:54.061 "is_configured": true, 00:19:54.061 "data_offset": 2048, 00:19:54.061 "data_size": 63488 00:19:54.061 }, 00:19:54.061 { 00:19:54.061 "name": "BaseBdev2", 00:19:54.061 "uuid": "9bba95a5-7261-45da-93dc-d9ce18b912ed", 00:19:54.061 "is_configured": true, 00:19:54.061 "data_offset": 2048, 00:19:54.061 "data_size": 63488 00:19:54.061 }, 00:19:54.061 { 00:19:54.061 "name": "BaseBdev3", 00:19:54.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.061 "is_configured": false, 00:19:54.061 "data_offset": 0, 00:19:54.061 "data_size": 0 00:19:54.061 } 00:19:54.061 ] 00:19:54.061 }' 00:19:54.061 02:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:54.061 02:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.630 02:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:54.888 [2024-07-11 
02:25:45.133490] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:54.888 [2024-07-11 02:25:45.133643] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1071cf0 00:19:54.888 [2024-07-11 02:25:45.133657] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:54.888 [2024-07-11 02:25:45.133837] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xec27c0 00:19:54.888 [2024-07-11 02:25:45.133961] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1071cf0 00:19:54.888 [2024-07-11 02:25:45.133971] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1071cf0 00:19:54.888 [2024-07-11 02:25:45.134063] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:54.888 BaseBdev3 00:19:54.888 02:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:54.888 02:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:54.888 02:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:54.888 02:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:54.888 02:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:54.888 02:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:54.888 02:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:55.145 02:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:55.403 [ 00:19:55.403 { 00:19:55.403 "name": "BaseBdev3", 00:19:55.403 "aliases": [ 00:19:55.403 "b54f437a-92d8-45bc-89d8-f291b149da4c" 00:19:55.403 ], 00:19:55.403 "product_name": "Malloc disk", 00:19:55.403 "block_size": 512, 00:19:55.403 "num_blocks": 65536, 00:19:55.403 "uuid": "b54f437a-92d8-45bc-89d8-f291b149da4c", 00:19:55.403 "assigned_rate_limits": { 00:19:55.403 "rw_ios_per_sec": 0, 00:19:55.403 "rw_mbytes_per_sec": 0, 00:19:55.403 "r_mbytes_per_sec": 0, 00:19:55.403 "w_mbytes_per_sec": 0 00:19:55.403 }, 00:19:55.403 "claimed": true, 00:19:55.403 "claim_type": "exclusive_write", 00:19:55.403 "zoned": false, 00:19:55.403 "supported_io_types": { 00:19:55.403 "read": true, 00:19:55.403 "write": true, 00:19:55.403 "unmap": true, 00:19:55.403 "flush": true, 00:19:55.403 "reset": true, 00:19:55.403 "nvme_admin": false, 00:19:55.403 "nvme_io": false, 00:19:55.403 "nvme_io_md": false, 00:19:55.403 "write_zeroes": true, 00:19:55.403 "zcopy": true, 00:19:55.403 "get_zone_info": false, 00:19:55.403 "zone_management": false, 00:19:55.403 "zone_append": false, 00:19:55.403 "compare": false, 00:19:55.403 "compare_and_write": false, 00:19:55.403 "abort": true, 00:19:55.403 "seek_hole": false, 00:19:55.403 "seek_data": false, 00:19:55.403 "copy": true, 00:19:55.403 "nvme_iov_md": false 00:19:55.403 }, 00:19:55.403 "memory_domains": [ 00:19:55.403 { 00:19:55.403 "dma_device_id": "system", 00:19:55.403 "dma_device_type": 1 00:19:55.403 }, 00:19:55.403 { 00:19:55.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.403 
"dma_device_type": 2 00:19:55.403 } 00:19:55.403 ], 00:19:55.403 "driver_specific": {} 00:19:55.403 } 00:19:55.403 ] 00:19:55.403 02:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:55.403 02:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:55.403 02:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:55.403 02:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:55.403 02:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:55.403 02:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:55.403 02:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:55.403 02:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:55.403 02:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:55.403 02:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:55.403 02:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:55.403 02:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:55.403 02:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:55.403 02:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.403 02:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:55.661 02:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:55.661 "name": "Existed_Raid", 00:19:55.661 "uuid": "92a4baef-931f-4436-8d01-d7377cd781ff", 00:19:55.661 "strip_size_kb": 0, 00:19:55.661 "state": "online", 00:19:55.661 "raid_level": "raid1", 00:19:55.661 "superblock": true, 00:19:55.661 "num_base_bdevs": 3, 00:19:55.661 "num_base_bdevs_discovered": 3, 00:19:55.661 "num_base_bdevs_operational": 3, 00:19:55.661 "base_bdevs_list": [ 00:19:55.661 { 00:19:55.661 "name": "BaseBdev1", 00:19:55.661 "uuid": "55a483d1-f043-4c96-a3c1-d49bb6c33a48", 00:19:55.661 "is_configured": true, 00:19:55.661 "data_offset": 2048, 00:19:55.661 "data_size": 63488 00:19:55.661 }, 00:19:55.661 { 00:19:55.661 "name": "BaseBdev2", 00:19:55.661 "uuid": "9bba95a5-7261-45da-93dc-d9ce18b912ed", 00:19:55.661 "is_configured": true, 00:19:55.661 "data_offset": 2048, 00:19:55.661 "data_size": 63488 00:19:55.661 }, 00:19:55.661 { 00:19:55.661 "name": "BaseBdev3", 00:19:55.661 "uuid": "b54f437a-92d8-45bc-89d8-f291b149da4c", 00:19:55.661 "is_configured": true, 00:19:55.661 "data_offset": 2048, 00:19:55.661 "data_size": 63488 00:19:55.661 } 00:19:55.661 ] 00:19:55.661 }' 00:19:55.661 02:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:55.661 02:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.227 02:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:56.228 02:25:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:56.228 02:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:56.228 02:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:56.228 02:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:56.228 02:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:56.228 02:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:56.228 02:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:56.507 [2024-07-11 02:25:46.677897] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:56.507 02:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:56.507 "name": "Existed_Raid", 00:19:56.507 "aliases": [ 00:19:56.507 "92a4baef-931f-4436-8d01-d7377cd781ff" 00:19:56.507 ], 00:19:56.507 "product_name": "Raid Volume", 00:19:56.507 "block_size": 512, 00:19:56.507 "num_blocks": 63488, 00:19:56.507 "uuid": "92a4baef-931f-4436-8d01-d7377cd781ff", 00:19:56.507 "assigned_rate_limits": { 00:19:56.507 "rw_ios_per_sec": 0, 00:19:56.507 "rw_mbytes_per_sec": 0, 00:19:56.507 "r_mbytes_per_sec": 0, 00:19:56.507 "w_mbytes_per_sec": 0 00:19:56.507 }, 00:19:56.507 "claimed": false, 00:19:56.507 "zoned": false, 00:19:56.507 "supported_io_types": { 00:19:56.507 "read": true, 00:19:56.507 "write": true, 00:19:56.507 "unmap": false, 00:19:56.507 "flush": false, 00:19:56.507 "reset": true, 00:19:56.507 "nvme_admin": false, 00:19:56.507 "nvme_io": false, 00:19:56.507 "nvme_io_md": false, 00:19:56.507 "write_zeroes": true, 00:19:56.507 "zcopy": false, 00:19:56.507 "get_zone_info": false, 00:19:56.507 "zone_management": false, 00:19:56.507 "zone_append": false, 00:19:56.507 "compare": false, 00:19:56.507 "compare_and_write": false, 00:19:56.507 "abort": false, 00:19:56.507 "seek_hole": false, 00:19:56.507 "seek_data": false, 00:19:56.507 "copy": false, 00:19:56.507 "nvme_iov_md": false 00:19:56.507 }, 00:19:56.507 "memory_domains": [ 00:19:56.507 { 00:19:56.507 "dma_device_id": "system", 00:19:56.507 "dma_device_type": 1 00:19:56.507 }, 00:19:56.507 { 00:19:56.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.507 "dma_device_type": 2 00:19:56.507 }, 00:19:56.507 { 00:19:56.507 "dma_device_id": "system", 00:19:56.507 "dma_device_type": 1 00:19:56.507 }, 00:19:56.507 { 00:19:56.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.507 "dma_device_type": 2 00:19:56.507 }, 00:19:56.507 { 00:19:56.507 "dma_device_id": "system", 00:19:56.507 "dma_device_type": 1 00:19:56.507 }, 00:19:56.507 { 00:19:56.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.507 "dma_device_type": 2 00:19:56.507 } 00:19:56.507 ], 00:19:56.507 "driver_specific": { 00:19:56.507 "raid": { 00:19:56.507 "uuid": "92a4baef-931f-4436-8d01-d7377cd781ff", 00:19:56.507 "strip_size_kb": 0, 00:19:56.507 "state": "online", 00:19:56.507 "raid_level": "raid1", 00:19:56.507 "superblock": true, 00:19:56.507 "num_base_bdevs": 3, 00:19:56.507 "num_base_bdevs_discovered": 3, 00:19:56.507 "num_base_bdevs_operational": 3, 00:19:56.507 "base_bdevs_list": [ 00:19:56.507 { 00:19:56.507 "name": "BaseBdev1", 00:19:56.507 "uuid": 
"55a483d1-f043-4c96-a3c1-d49bb6c33a48", 00:19:56.507 "is_configured": true, 00:19:56.507 "data_offset": 2048, 00:19:56.507 "data_size": 63488 00:19:56.507 }, 00:19:56.507 { 00:19:56.507 "name": "BaseBdev2", 00:19:56.507 "uuid": "9bba95a5-7261-45da-93dc-d9ce18b912ed", 00:19:56.507 "is_configured": true, 00:19:56.507 "data_offset": 2048, 00:19:56.507 "data_size": 63488 00:19:56.507 }, 00:19:56.507 { 00:19:56.507 "name": "BaseBdev3", 00:19:56.507 "uuid": "b54f437a-92d8-45bc-89d8-f291b149da4c", 00:19:56.507 "is_configured": true, 00:19:56.507 "data_offset": 2048, 00:19:56.508 "data_size": 63488 00:19:56.508 } 00:19:56.508 ] 00:19:56.508 } 00:19:56.508 } 00:19:56.508 }' 00:19:56.508 02:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:56.508 02:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:56.508 BaseBdev2 00:19:56.508 BaseBdev3' 00:19:56.508 02:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:56.508 02:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:56.508 02:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:56.766 02:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:56.766 "name": "BaseBdev1", 00:19:56.766 "aliases": [ 00:19:56.766 "55a483d1-f043-4c96-a3c1-d49bb6c33a48" 00:19:56.766 ], 00:19:56.766 "product_name": "Malloc disk", 00:19:56.766 "block_size": 512, 00:19:56.766 "num_blocks": 65536, 00:19:56.766 "uuid": "55a483d1-f043-4c96-a3c1-d49bb6c33a48", 00:19:56.766 "assigned_rate_limits": { 00:19:56.766 "rw_ios_per_sec": 0, 00:19:56.766 "rw_mbytes_per_sec": 0, 00:19:56.766 "r_mbytes_per_sec": 0, 00:19:56.766 "w_mbytes_per_sec": 0 00:19:56.766 }, 00:19:56.766 "claimed": true, 00:19:56.766 "claim_type": "exclusive_write", 00:19:56.766 "zoned": false, 00:19:56.766 "supported_io_types": { 00:19:56.766 "read": true, 00:19:56.766 "write": true, 00:19:56.766 "unmap": true, 00:19:56.766 "flush": true, 00:19:56.766 "reset": true, 00:19:56.766 "nvme_admin": false, 00:19:56.766 "nvme_io": false, 00:19:56.766 "nvme_io_md": false, 00:19:56.766 "write_zeroes": true, 00:19:56.766 "zcopy": true, 00:19:56.766 "get_zone_info": false, 00:19:56.766 "zone_management": false, 00:19:56.766 "zone_append": false, 00:19:56.766 "compare": false, 00:19:56.766 "compare_and_write": false, 00:19:56.766 "abort": true, 00:19:56.766 "seek_hole": false, 00:19:56.766 "seek_data": false, 00:19:56.766 "copy": true, 00:19:56.766 "nvme_iov_md": false 00:19:56.766 }, 00:19:56.766 "memory_domains": [ 00:19:56.766 { 00:19:56.766 "dma_device_id": "system", 00:19:56.766 "dma_device_type": 1 00:19:56.766 }, 00:19:56.766 { 00:19:56.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.766 "dma_device_type": 2 00:19:56.766 } 00:19:56.766 ], 00:19:56.766 "driver_specific": {} 00:19:56.766 }' 00:19:56.766 02:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:56.766 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:56.766 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:56.766 02:25:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:56.766 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:56.766 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:56.766 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:57.025 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:57.025 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:57.025 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:57.025 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:57.025 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:57.025 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:57.025 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:57.025 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:57.284 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:57.284 "name": "BaseBdev2", 00:19:57.284 "aliases": [ 00:19:57.284 "9bba95a5-7261-45da-93dc-d9ce18b912ed" 00:19:57.284 ], 00:19:57.284 "product_name": "Malloc disk", 00:19:57.284 "block_size": 512, 00:19:57.284 "num_blocks": 65536, 00:19:57.284 "uuid": "9bba95a5-7261-45da-93dc-d9ce18b912ed", 00:19:57.284 "assigned_rate_limits": { 00:19:57.284 "rw_ios_per_sec": 0, 00:19:57.284 "rw_mbytes_per_sec": 0, 00:19:57.284 "r_mbytes_per_sec": 0, 00:19:57.284 "w_mbytes_per_sec": 0 00:19:57.284 }, 00:19:57.284 "claimed": true, 00:19:57.284 "claim_type": "exclusive_write", 00:19:57.284 "zoned": false, 00:19:57.284 "supported_io_types": { 00:19:57.284 "read": true, 00:19:57.284 "write": true, 00:19:57.284 "unmap": true, 00:19:57.284 "flush": true, 00:19:57.284 "reset": true, 00:19:57.284 "nvme_admin": false, 00:19:57.284 "nvme_io": false, 00:19:57.284 "nvme_io_md": false, 00:19:57.284 "write_zeroes": true, 00:19:57.284 "zcopy": true, 00:19:57.284 "get_zone_info": false, 00:19:57.284 "zone_management": false, 00:19:57.284 "zone_append": false, 00:19:57.284 "compare": false, 00:19:57.284 "compare_and_write": false, 00:19:57.284 "abort": true, 00:19:57.284 "seek_hole": false, 00:19:57.284 "seek_data": false, 00:19:57.284 "copy": true, 00:19:57.284 "nvme_iov_md": false 00:19:57.284 }, 00:19:57.284 "memory_domains": [ 00:19:57.284 { 00:19:57.284 "dma_device_id": "system", 00:19:57.284 "dma_device_type": 1 00:19:57.284 }, 00:19:57.284 { 00:19:57.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.284 "dma_device_type": 2 00:19:57.284 } 00:19:57.284 ], 00:19:57.284 "driver_specific": {} 00:19:57.284 }' 00:19:57.284 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:57.284 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:57.284 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:57.284 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:57.543 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq 
.md_size 00:19:57.543 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:57.543 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:57.543 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:57.543 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:57.543 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:57.543 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:57.543 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:57.543 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:57.543 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:57.543 02:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:57.803 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:57.803 "name": "BaseBdev3", 00:19:57.803 "aliases": [ 00:19:57.803 "b54f437a-92d8-45bc-89d8-f291b149da4c" 00:19:57.803 ], 00:19:57.803 "product_name": "Malloc disk", 00:19:57.803 "block_size": 512, 00:19:57.803 "num_blocks": 65536, 00:19:57.803 "uuid": "b54f437a-92d8-45bc-89d8-f291b149da4c", 00:19:57.803 "assigned_rate_limits": { 00:19:57.803 "rw_ios_per_sec": 0, 00:19:57.803 "rw_mbytes_per_sec": 0, 00:19:57.803 "r_mbytes_per_sec": 0, 00:19:57.803 "w_mbytes_per_sec": 0 00:19:57.803 }, 00:19:57.803 "claimed": true, 00:19:57.803 "claim_type": "exclusive_write", 00:19:57.803 "zoned": false, 00:19:57.803 "supported_io_types": { 00:19:57.803 "read": true, 00:19:57.803 "write": true, 00:19:57.803 "unmap": true, 00:19:57.803 "flush": true, 00:19:57.803 "reset": true, 00:19:57.803 "nvme_admin": false, 00:19:57.803 "nvme_io": false, 00:19:57.803 "nvme_io_md": false, 00:19:57.803 "write_zeroes": true, 00:19:57.803 "zcopy": true, 00:19:57.803 "get_zone_info": false, 00:19:57.803 "zone_management": false, 00:19:57.803 "zone_append": false, 00:19:57.803 "compare": false, 00:19:57.803 "compare_and_write": false, 00:19:57.803 "abort": true, 00:19:57.803 "seek_hole": false, 00:19:57.803 "seek_data": false, 00:19:57.803 "copy": true, 00:19:57.803 "nvme_iov_md": false 00:19:57.803 }, 00:19:57.803 "memory_domains": [ 00:19:57.803 { 00:19:57.803 "dma_device_id": "system", 00:19:57.803 "dma_device_type": 1 00:19:57.803 }, 00:19:57.803 { 00:19:57.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.803 "dma_device_type": 2 00:19:57.803 } 00:19:57.803 ], 00:19:57.803 "driver_specific": {} 00:19:57.803 }' 00:19:57.803 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:58.061 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:58.061 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:58.061 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:58.061 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:58.061 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:58.061 
02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:58.061 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:58.061 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:58.061 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:58.320 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:58.320 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:58.320 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:58.578 [2024-07-11 02:25:48.763190] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:58.578 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:58.578 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:19:58.578 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:58.578 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:19:58.578 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:19:58.578 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:58.578 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:58.578 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:58.578 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:58.578 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:58.578 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:58.578 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:58.578 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:58.578 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:58.578 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:58.578 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.578 02:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:58.837 02:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:58.837 "name": "Existed_Raid", 00:19:58.837 "uuid": "92a4baef-931f-4436-8d01-d7377cd781ff", 00:19:58.837 "strip_size_kb": 0, 00:19:58.837 "state": "online", 00:19:58.837 "raid_level": "raid1", 00:19:58.837 "superblock": true, 00:19:58.837 "num_base_bdevs": 3, 00:19:58.837 "num_base_bdevs_discovered": 2, 00:19:58.837 "num_base_bdevs_operational": 2, 00:19:58.837 "base_bdevs_list": [ 00:19:58.837 { 00:19:58.837 "name": null, 00:19:58.837 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:58.837 "is_configured": false, 00:19:58.837 "data_offset": 2048, 00:19:58.837 "data_size": 63488 00:19:58.837 }, 00:19:58.837 { 00:19:58.837 "name": "BaseBdev2", 00:19:58.837 "uuid": "9bba95a5-7261-45da-93dc-d9ce18b912ed", 00:19:58.837 "is_configured": true, 00:19:58.837 "data_offset": 2048, 00:19:58.837 "data_size": 63488 00:19:58.837 }, 00:19:58.837 { 00:19:58.837 "name": "BaseBdev3", 00:19:58.837 "uuid": "b54f437a-92d8-45bc-89d8-f291b149da4c", 00:19:58.837 "is_configured": true, 00:19:58.837 "data_offset": 2048, 00:19:58.837 "data_size": 63488 00:19:58.837 } 00:19:58.837 ] 00:19:58.837 }' 00:19:58.837 02:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:58.837 02:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.403 02:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:59.403 02:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:59.403 02:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.403 02:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:59.661 02:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:59.661 02:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:59.661 02:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:59.919 [2024-07-11 02:25:50.273319] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:59.919 02:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:59.919 02:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:59.919 02:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.919 02:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:00.177 02:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:00.177 02:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:00.177 02:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:00.436 [2024-07-11 02:25:50.778561] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:00.436 [2024-07-11 02:25:50.778642] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:00.436 [2024-07-11 02:25:50.790886] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:00.436 [2024-07-11 02:25:50.790921] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:00.436 [2024-07-11 02:25:50.790932] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1071cf0 
name Existed_Raid, state offline 00:20:00.436 02:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:00.436 02:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:00.436 02:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.436 02:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:00.695 02:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:00.695 02:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:00.695 02:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:20:00.695 02:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:00.695 02:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:00.695 02:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:01.263 BaseBdev2 00:20:01.263 02:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:01.263 02:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:01.263 02:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:01.263 02:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:01.263 02:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:01.263 02:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:01.263 02:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:01.829 02:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:02.397 [ 00:20:02.397 { 00:20:02.397 "name": "BaseBdev2", 00:20:02.397 "aliases": [ 00:20:02.397 "b8f49e9c-f107-4768-a2fc-99b87e5117a6" 00:20:02.397 ], 00:20:02.397 "product_name": "Malloc disk", 00:20:02.397 "block_size": 512, 00:20:02.397 "num_blocks": 65536, 00:20:02.397 "uuid": "b8f49e9c-f107-4768-a2fc-99b87e5117a6", 00:20:02.397 "assigned_rate_limits": { 00:20:02.397 "rw_ios_per_sec": 0, 00:20:02.397 "rw_mbytes_per_sec": 0, 00:20:02.397 "r_mbytes_per_sec": 0, 00:20:02.397 "w_mbytes_per_sec": 0 00:20:02.397 }, 00:20:02.397 "claimed": false, 00:20:02.397 "zoned": false, 00:20:02.397 "supported_io_types": { 00:20:02.397 "read": true, 00:20:02.397 "write": true, 00:20:02.397 "unmap": true, 00:20:02.397 "flush": true, 00:20:02.397 "reset": true, 00:20:02.397 "nvme_admin": false, 00:20:02.397 "nvme_io": false, 00:20:02.397 "nvme_io_md": false, 00:20:02.397 "write_zeroes": true, 00:20:02.397 "zcopy": true, 00:20:02.397 "get_zone_info": false, 00:20:02.397 "zone_management": false, 00:20:02.397 "zone_append": false, 00:20:02.397 "compare": false, 00:20:02.397 
"compare_and_write": false, 00:20:02.397 "abort": true, 00:20:02.397 "seek_hole": false, 00:20:02.397 "seek_data": false, 00:20:02.397 "copy": true, 00:20:02.397 "nvme_iov_md": false 00:20:02.397 }, 00:20:02.397 "memory_domains": [ 00:20:02.397 { 00:20:02.397 "dma_device_id": "system", 00:20:02.397 "dma_device_type": 1 00:20:02.397 }, 00:20:02.397 { 00:20:02.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.397 "dma_device_type": 2 00:20:02.397 } 00:20:02.397 ], 00:20:02.397 "driver_specific": {} 00:20:02.397 } 00:20:02.397 ] 00:20:02.397 02:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:02.397 02:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:02.397 02:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:02.397 02:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:02.966 BaseBdev3 00:20:02.966 02:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:02.966 02:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:02.966 02:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:02.966 02:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:02.966 02:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:02.966 02:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:02.966 02:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:03.225 02:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:03.792 [ 00:20:03.792 { 00:20:03.792 "name": "BaseBdev3", 00:20:03.792 "aliases": [ 00:20:03.792 "d2e6c9cc-3f8a-445a-ba08-6f70fee32dea" 00:20:03.792 ], 00:20:03.792 "product_name": "Malloc disk", 00:20:03.792 "block_size": 512, 00:20:03.792 "num_blocks": 65536, 00:20:03.792 "uuid": "d2e6c9cc-3f8a-445a-ba08-6f70fee32dea", 00:20:03.792 "assigned_rate_limits": { 00:20:03.792 "rw_ios_per_sec": 0, 00:20:03.792 "rw_mbytes_per_sec": 0, 00:20:03.792 "r_mbytes_per_sec": 0, 00:20:03.792 "w_mbytes_per_sec": 0 00:20:03.792 }, 00:20:03.792 "claimed": false, 00:20:03.792 "zoned": false, 00:20:03.792 "supported_io_types": { 00:20:03.792 "read": true, 00:20:03.792 "write": true, 00:20:03.792 "unmap": true, 00:20:03.792 "flush": true, 00:20:03.792 "reset": true, 00:20:03.792 "nvme_admin": false, 00:20:03.792 "nvme_io": false, 00:20:03.792 "nvme_io_md": false, 00:20:03.792 "write_zeroes": true, 00:20:03.792 "zcopy": true, 00:20:03.792 "get_zone_info": false, 00:20:03.792 "zone_management": false, 00:20:03.792 "zone_append": false, 00:20:03.792 "compare": false, 00:20:03.792 "compare_and_write": false, 00:20:03.792 "abort": true, 00:20:03.792 "seek_hole": false, 00:20:03.792 "seek_data": false, 00:20:03.792 "copy": true, 00:20:03.792 "nvme_iov_md": false 00:20:03.792 }, 00:20:03.792 "memory_domains": [ 00:20:03.792 { 
00:20:03.792 "dma_device_id": "system", 00:20:03.792 "dma_device_type": 1 00:20:03.792 }, 00:20:03.792 { 00:20:03.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.792 "dma_device_type": 2 00:20:03.792 } 00:20:03.792 ], 00:20:03.792 "driver_specific": {} 00:20:03.792 } 00:20:03.792 ] 00:20:03.792 02:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:03.792 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:03.792 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:03.792 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:04.360 [2024-07-11 02:25:54.637113] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:04.360 [2024-07-11 02:25:54.637151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:04.360 [2024-07-11 02:25:54.637170] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:04.360 [2024-07-11 02:25:54.638490] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:04.360 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:04.360 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:04.360 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:04.360 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:04.360 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:04.360 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:04.360 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:04.360 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:04.360 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:04.360 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:04.360 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.360 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.929 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:04.929 "name": "Existed_Raid", 00:20:04.929 "uuid": "6de1325a-dffe-458b-8651-f80ff1f34dc3", 00:20:04.929 "strip_size_kb": 0, 00:20:04.929 "state": "configuring", 00:20:04.929 "raid_level": "raid1", 00:20:04.929 "superblock": true, 00:20:04.929 "num_base_bdevs": 3, 00:20:04.929 "num_base_bdevs_discovered": 2, 00:20:04.929 "num_base_bdevs_operational": 3, 00:20:04.929 "base_bdevs_list": [ 00:20:04.929 { 00:20:04.929 "name": "BaseBdev1", 00:20:04.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.929 "is_configured": 
false, 00:20:04.929 "data_offset": 0, 00:20:04.929 "data_size": 0 00:20:04.929 }, 00:20:04.929 { 00:20:04.929 "name": "BaseBdev2", 00:20:04.929 "uuid": "b8f49e9c-f107-4768-a2fc-99b87e5117a6", 00:20:04.929 "is_configured": true, 00:20:04.929 "data_offset": 2048, 00:20:04.929 "data_size": 63488 00:20:04.929 }, 00:20:04.929 { 00:20:04.929 "name": "BaseBdev3", 00:20:04.929 "uuid": "d2e6c9cc-3f8a-445a-ba08-6f70fee32dea", 00:20:04.929 "is_configured": true, 00:20:04.929 "data_offset": 2048, 00:20:04.929 "data_size": 63488 00:20:04.929 } 00:20:04.929 ] 00:20:04.929 }' 00:20:04.929 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:04.929 02:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:05.497 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:06.067 [2024-07-11 02:25:56.273467] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:06.067 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:06.067 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:06.067 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:06.067 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:06.067 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:06.067 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:06.067 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:06.067 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:06.067 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:06.067 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:06.067 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.067 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:06.636 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:06.636 "name": "Existed_Raid", 00:20:06.636 "uuid": "6de1325a-dffe-458b-8651-f80ff1f34dc3", 00:20:06.636 "strip_size_kb": 0, 00:20:06.636 "state": "configuring", 00:20:06.636 "raid_level": "raid1", 00:20:06.636 "superblock": true, 00:20:06.636 "num_base_bdevs": 3, 00:20:06.636 "num_base_bdevs_discovered": 1, 00:20:06.636 "num_base_bdevs_operational": 3, 00:20:06.636 "base_bdevs_list": [ 00:20:06.636 { 00:20:06.636 "name": "BaseBdev1", 00:20:06.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.636 "is_configured": false, 00:20:06.636 "data_offset": 0, 00:20:06.636 "data_size": 0 00:20:06.636 }, 00:20:06.636 { 00:20:06.636 "name": null, 00:20:06.636 "uuid": "b8f49e9c-f107-4768-a2fc-99b87e5117a6", 00:20:06.636 "is_configured": false, 00:20:06.636 "data_offset": 2048, 00:20:06.636 "data_size": 
63488 00:20:06.636 }, 00:20:06.636 { 00:20:06.636 "name": "BaseBdev3", 00:20:06.636 "uuid": "d2e6c9cc-3f8a-445a-ba08-6f70fee32dea", 00:20:06.636 "is_configured": true, 00:20:06.636 "data_offset": 2048, 00:20:06.636 "data_size": 63488 00:20:06.636 } 00:20:06.636 ] 00:20:06.636 }' 00:20:06.636 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:06.636 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.575 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.575 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:07.575 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:07.575 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:07.835 [2024-07-11 02:25:58.203211] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:07.835 BaseBdev1 00:20:07.835 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:07.835 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:07.835 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:07.835 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:07.835 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:07.835 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:07.835 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:08.404 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:09.034 [ 00:20:09.034 { 00:20:09.034 "name": "BaseBdev1", 00:20:09.034 "aliases": [ 00:20:09.034 "bd1fcc20-edc7-4446-a491-9d7fec6f4560" 00:20:09.034 ], 00:20:09.034 "product_name": "Malloc disk", 00:20:09.034 "block_size": 512, 00:20:09.034 "num_blocks": 65536, 00:20:09.034 "uuid": "bd1fcc20-edc7-4446-a491-9d7fec6f4560", 00:20:09.034 "assigned_rate_limits": { 00:20:09.034 "rw_ios_per_sec": 0, 00:20:09.034 "rw_mbytes_per_sec": 0, 00:20:09.034 "r_mbytes_per_sec": 0, 00:20:09.034 "w_mbytes_per_sec": 0 00:20:09.034 }, 00:20:09.034 "claimed": true, 00:20:09.034 "claim_type": "exclusive_write", 00:20:09.034 "zoned": false, 00:20:09.034 "supported_io_types": { 00:20:09.034 "read": true, 00:20:09.034 "write": true, 00:20:09.034 "unmap": true, 00:20:09.034 "flush": true, 00:20:09.034 "reset": true, 00:20:09.034 "nvme_admin": false, 00:20:09.034 "nvme_io": false, 00:20:09.034 "nvme_io_md": false, 00:20:09.034 "write_zeroes": true, 00:20:09.034 "zcopy": true, 00:20:09.034 "get_zone_info": false, 00:20:09.034 "zone_management": false, 00:20:09.034 "zone_append": false, 00:20:09.034 "compare": false, 00:20:09.034 
"compare_and_write": false, 00:20:09.034 "abort": true, 00:20:09.034 "seek_hole": false, 00:20:09.034 "seek_data": false, 00:20:09.034 "copy": true, 00:20:09.034 "nvme_iov_md": false 00:20:09.034 }, 00:20:09.034 "memory_domains": [ 00:20:09.034 { 00:20:09.034 "dma_device_id": "system", 00:20:09.034 "dma_device_type": 1 00:20:09.034 }, 00:20:09.034 { 00:20:09.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.034 "dma_device_type": 2 00:20:09.034 } 00:20:09.034 ], 00:20:09.034 "driver_specific": {} 00:20:09.034 } 00:20:09.034 ] 00:20:09.034 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:09.034 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:09.034 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:09.034 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:09.034 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:09.034 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:09.034 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:09.034 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:09.034 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:09.034 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:09.034 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:09.034 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.034 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:09.603 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:09.603 "name": "Existed_Raid", 00:20:09.603 "uuid": "6de1325a-dffe-458b-8651-f80ff1f34dc3", 00:20:09.603 "strip_size_kb": 0, 00:20:09.603 "state": "configuring", 00:20:09.603 "raid_level": "raid1", 00:20:09.603 "superblock": true, 00:20:09.603 "num_base_bdevs": 3, 00:20:09.603 "num_base_bdevs_discovered": 2, 00:20:09.603 "num_base_bdevs_operational": 3, 00:20:09.603 "base_bdevs_list": [ 00:20:09.603 { 00:20:09.603 "name": "BaseBdev1", 00:20:09.603 "uuid": "bd1fcc20-edc7-4446-a491-9d7fec6f4560", 00:20:09.603 "is_configured": true, 00:20:09.603 "data_offset": 2048, 00:20:09.603 "data_size": 63488 00:20:09.603 }, 00:20:09.603 { 00:20:09.603 "name": null, 00:20:09.603 "uuid": "b8f49e9c-f107-4768-a2fc-99b87e5117a6", 00:20:09.603 "is_configured": false, 00:20:09.603 "data_offset": 2048, 00:20:09.603 "data_size": 63488 00:20:09.603 }, 00:20:09.603 { 00:20:09.603 "name": "BaseBdev3", 00:20:09.603 "uuid": "d2e6c9cc-3f8a-445a-ba08-6f70fee32dea", 00:20:09.603 "is_configured": true, 00:20:09.603 "data_offset": 2048, 00:20:09.603 "data_size": 63488 00:20:09.603 } 00:20:09.603 ] 00:20:09.603 }' 00:20:09.603 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:09.603 02:25:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:10.171 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.171 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:10.171 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:10.171 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:10.430 [2024-07-11 02:26:00.737937] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:10.430 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:10.430 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:10.430 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:10.430 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:10.430 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:10.430 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:10.430 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:10.430 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:10.430 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:10.430 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:10.430 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.430 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.688 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:10.688 "name": "Existed_Raid", 00:20:10.689 "uuid": "6de1325a-dffe-458b-8651-f80ff1f34dc3", 00:20:10.689 "strip_size_kb": 0, 00:20:10.689 "state": "configuring", 00:20:10.689 "raid_level": "raid1", 00:20:10.689 "superblock": true, 00:20:10.689 "num_base_bdevs": 3, 00:20:10.689 "num_base_bdevs_discovered": 1, 00:20:10.689 "num_base_bdevs_operational": 3, 00:20:10.689 "base_bdevs_list": [ 00:20:10.689 { 00:20:10.689 "name": "BaseBdev1", 00:20:10.689 "uuid": "bd1fcc20-edc7-4446-a491-9d7fec6f4560", 00:20:10.689 "is_configured": true, 00:20:10.689 "data_offset": 2048, 00:20:10.689 "data_size": 63488 00:20:10.689 }, 00:20:10.689 { 00:20:10.689 "name": null, 00:20:10.689 "uuid": "b8f49e9c-f107-4768-a2fc-99b87e5117a6", 00:20:10.689 "is_configured": false, 00:20:10.689 "data_offset": 2048, 00:20:10.689 "data_size": 63488 00:20:10.689 }, 00:20:10.689 { 00:20:10.689 "name": null, 00:20:10.689 "uuid": "d2e6c9cc-3f8a-445a-ba08-6f70fee32dea", 00:20:10.689 "is_configured": false, 00:20:10.689 "data_offset": 2048, 00:20:10.689 "data_size": 63488 00:20:10.689 } 00:20:10.689 ] 00:20:10.689 }' 
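For reference, the configuring -> online sequence traced above can be driven by hand against a running spdk_tgt. A minimal sketch, assuming the target was started with '-r /var/tmp/spdk-raid.sock' and that rpc.py sits at the path this job uses (both are assumptions about the environment, not guaranteed by the log); every RPC below appears verbatim in the trace:

# Assumed: spdk_tgt already listening on the raid test socket used by this job.
rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Create three 32 MiB malloc base bdevs with 512-byte blocks, as the test does.
for i in 1 2 3; do $rpc bdev_malloc_create 32 512 -b BaseBdev$i; done
# Assemble them into a raid1 bdev with an on-disk superblock (-s).
$rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
# Query raid state; the test expects "online" once all base bdevs are configured,
# and "configuring" while any base bdev is still missing.
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'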
00:20:10.689 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:10.689 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.257 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:11.257 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.516 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:11.516 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:11.516 [2024-07-11 02:26:01.860947] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:11.516 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:11.516 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:11.516 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:11.516 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:11.516 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:11.516 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:11.516 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:11.516 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:11.516 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:11.516 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:11.516 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.516 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.775 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:11.775 "name": "Existed_Raid", 00:20:11.775 "uuid": "6de1325a-dffe-458b-8651-f80ff1f34dc3", 00:20:11.775 "strip_size_kb": 0, 00:20:11.775 "state": "configuring", 00:20:11.775 "raid_level": "raid1", 00:20:11.775 "superblock": true, 00:20:11.775 "num_base_bdevs": 3, 00:20:11.775 "num_base_bdevs_discovered": 2, 00:20:11.775 "num_base_bdevs_operational": 3, 00:20:11.775 "base_bdevs_list": [ 00:20:11.775 { 00:20:11.775 "name": "BaseBdev1", 00:20:11.775 "uuid": "bd1fcc20-edc7-4446-a491-9d7fec6f4560", 00:20:11.775 "is_configured": true, 00:20:11.775 "data_offset": 2048, 00:20:11.775 "data_size": 63488 00:20:11.775 }, 00:20:11.775 { 00:20:11.775 "name": null, 00:20:11.775 "uuid": "b8f49e9c-f107-4768-a2fc-99b87e5117a6", 00:20:11.775 "is_configured": false, 00:20:11.775 "data_offset": 2048, 00:20:11.775 "data_size": 63488 00:20:11.775 }, 00:20:11.775 { 00:20:11.775 "name": "BaseBdev3", 
00:20:11.775 "uuid": "d2e6c9cc-3f8a-445a-ba08-6f70fee32dea", 00:20:11.775 "is_configured": true, 00:20:11.775 "data_offset": 2048, 00:20:11.775 "data_size": 63488 00:20:11.775 } 00:20:11.775 ] 00:20:11.775 }' 00:20:11.775 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:11.775 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.343 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.343 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:12.602 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:12.602 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:12.861 [2024-07-11 02:26:03.156555] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:12.861 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:12.861 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:12.861 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:12.861 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:12.861 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:12.861 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:12.861 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:12.861 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:12.861 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:12.861 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:12.861 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.861 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:13.120 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:13.120 "name": "Existed_Raid", 00:20:13.120 "uuid": "6de1325a-dffe-458b-8651-f80ff1f34dc3", 00:20:13.120 "strip_size_kb": 0, 00:20:13.120 "state": "configuring", 00:20:13.120 "raid_level": "raid1", 00:20:13.120 "superblock": true, 00:20:13.120 "num_base_bdevs": 3, 00:20:13.120 "num_base_bdevs_discovered": 1, 00:20:13.120 "num_base_bdevs_operational": 3, 00:20:13.120 "base_bdevs_list": [ 00:20:13.120 { 00:20:13.120 "name": null, 00:20:13.120 "uuid": "bd1fcc20-edc7-4446-a491-9d7fec6f4560", 00:20:13.120 "is_configured": false, 00:20:13.120 "data_offset": 2048, 00:20:13.120 "data_size": 63488 00:20:13.120 }, 00:20:13.120 { 00:20:13.120 "name": null, 00:20:13.120 "uuid": "b8f49e9c-f107-4768-a2fc-99b87e5117a6", 00:20:13.120 
"is_configured": false, 00:20:13.120 "data_offset": 2048, 00:20:13.120 "data_size": 63488 00:20:13.120 }, 00:20:13.120 { 00:20:13.120 "name": "BaseBdev3", 00:20:13.120 "uuid": "d2e6c9cc-3f8a-445a-ba08-6f70fee32dea", 00:20:13.120 "is_configured": true, 00:20:13.120 "data_offset": 2048, 00:20:13.120 "data_size": 63488 00:20:13.120 } 00:20:13.120 ] 00:20:13.120 }' 00:20:13.120 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:13.120 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.688 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.688 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:13.947 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:13.947 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:14.207 [2024-07-11 02:26:04.466396] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:14.207 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:14.207 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:14.207 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:14.207 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:14.207 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:14.207 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:14.207 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:14.207 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:14.207 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:14.207 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:14.207 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.207 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.466 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:14.467 "name": "Existed_Raid", 00:20:14.467 "uuid": "6de1325a-dffe-458b-8651-f80ff1f34dc3", 00:20:14.467 "strip_size_kb": 0, 00:20:14.467 "state": "configuring", 00:20:14.467 "raid_level": "raid1", 00:20:14.467 "superblock": true, 00:20:14.467 "num_base_bdevs": 3, 00:20:14.467 "num_base_bdevs_discovered": 2, 00:20:14.467 "num_base_bdevs_operational": 3, 00:20:14.467 "base_bdevs_list": [ 00:20:14.467 { 00:20:14.467 "name": null, 00:20:14.467 "uuid": "bd1fcc20-edc7-4446-a491-9d7fec6f4560", 00:20:14.467 "is_configured": false, 
00:20:14.467 "data_offset": 2048, 00:20:14.467 "data_size": 63488 00:20:14.467 }, 00:20:14.467 { 00:20:14.467 "name": "BaseBdev2", 00:20:14.467 "uuid": "b8f49e9c-f107-4768-a2fc-99b87e5117a6", 00:20:14.467 "is_configured": true, 00:20:14.467 "data_offset": 2048, 00:20:14.467 "data_size": 63488 00:20:14.467 }, 00:20:14.467 { 00:20:14.467 "name": "BaseBdev3", 00:20:14.467 "uuid": "d2e6c9cc-3f8a-445a-ba08-6f70fee32dea", 00:20:14.467 "is_configured": true, 00:20:14.467 "data_offset": 2048, 00:20:14.467 "data_size": 63488 00:20:14.467 } 00:20:14.467 ] 00:20:14.467 }' 00:20:14.467 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:14.467 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.034 02:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.034 02:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:15.293 02:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:15.293 02:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.293 02:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:15.552 02:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u bd1fcc20-edc7-4446-a491-9d7fec6f4560 00:20:15.811 [2024-07-11 02:26:06.119305] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:15.811 [2024-07-11 02:26:06.119465] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xebfbc0 00:20:15.811 [2024-07-11 02:26:06.119479] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:15.811 [2024-07-11 02:26:06.119653] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xebfee0 00:20:15.811 [2024-07-11 02:26:06.119796] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xebfbc0 00:20:15.811 [2024-07-11 02:26:06.119807] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xebfbc0 00:20:15.811 [2024-07-11 02:26:06.119903] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:15.811 NewBaseBdev 00:20:15.811 02:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:15.811 02:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:20:15.811 02:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:15.811 02:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:15.811 02:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:15.811 02:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:15.811 02:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:16.071 02:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:16.330 [ 00:20:16.330 { 00:20:16.330 "name": "NewBaseBdev", 00:20:16.330 "aliases": [ 00:20:16.330 "bd1fcc20-edc7-4446-a491-9d7fec6f4560" 00:20:16.330 ], 00:20:16.330 "product_name": "Malloc disk", 00:20:16.330 "block_size": 512, 00:20:16.330 "num_blocks": 65536, 00:20:16.330 "uuid": "bd1fcc20-edc7-4446-a491-9d7fec6f4560", 00:20:16.330 "assigned_rate_limits": { 00:20:16.330 "rw_ios_per_sec": 0, 00:20:16.330 "rw_mbytes_per_sec": 0, 00:20:16.330 "r_mbytes_per_sec": 0, 00:20:16.330 "w_mbytes_per_sec": 0 00:20:16.330 }, 00:20:16.330 "claimed": true, 00:20:16.330 "claim_type": "exclusive_write", 00:20:16.330 "zoned": false, 00:20:16.330 "supported_io_types": { 00:20:16.330 "read": true, 00:20:16.330 "write": true, 00:20:16.330 "unmap": true, 00:20:16.330 "flush": true, 00:20:16.330 "reset": true, 00:20:16.330 "nvme_admin": false, 00:20:16.330 "nvme_io": false, 00:20:16.330 "nvme_io_md": false, 00:20:16.330 "write_zeroes": true, 00:20:16.330 "zcopy": true, 00:20:16.330 "get_zone_info": false, 00:20:16.330 "zone_management": false, 00:20:16.330 "zone_append": false, 00:20:16.330 "compare": false, 00:20:16.330 "compare_and_write": false, 00:20:16.330 "abort": true, 00:20:16.330 "seek_hole": false, 00:20:16.330 "seek_data": false, 00:20:16.330 "copy": true, 00:20:16.330 "nvme_iov_md": false 00:20:16.330 }, 00:20:16.330 "memory_domains": [ 00:20:16.330 { 00:20:16.330 "dma_device_id": "system", 00:20:16.330 "dma_device_type": 1 00:20:16.330 }, 00:20:16.330 { 00:20:16.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.330 "dma_device_type": 2 00:20:16.330 } 00:20:16.330 ], 00:20:16.330 "driver_specific": {} 00:20:16.330 } 00:20:16.330 ] 00:20:16.330 02:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:16.330 02:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:16.330 02:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:16.330 02:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:16.330 02:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:16.330 02:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:16.330 02:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:16.330 02:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:16.330 02:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:16.330 02:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:16.330 02:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:16.330 02:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.330 02:26:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.590 02:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:16.590 "name": "Existed_Raid", 00:20:16.590 "uuid": "6de1325a-dffe-458b-8651-f80ff1f34dc3", 00:20:16.590 "strip_size_kb": 0, 00:20:16.590 "state": "online", 00:20:16.590 "raid_level": "raid1", 00:20:16.590 "superblock": true, 00:20:16.590 "num_base_bdevs": 3, 00:20:16.590 "num_base_bdevs_discovered": 3, 00:20:16.590 "num_base_bdevs_operational": 3, 00:20:16.590 "base_bdevs_list": [ 00:20:16.590 { 00:20:16.590 "name": "NewBaseBdev", 00:20:16.590 "uuid": "bd1fcc20-edc7-4446-a491-9d7fec6f4560", 00:20:16.590 "is_configured": true, 00:20:16.590 "data_offset": 2048, 00:20:16.590 "data_size": 63488 00:20:16.590 }, 00:20:16.590 { 00:20:16.590 "name": "BaseBdev2", 00:20:16.590 "uuid": "b8f49e9c-f107-4768-a2fc-99b87e5117a6", 00:20:16.590 "is_configured": true, 00:20:16.590 "data_offset": 2048, 00:20:16.590 "data_size": 63488 00:20:16.590 }, 00:20:16.590 { 00:20:16.590 "name": "BaseBdev3", 00:20:16.590 "uuid": "d2e6c9cc-3f8a-445a-ba08-6f70fee32dea", 00:20:16.590 "is_configured": true, 00:20:16.590 "data_offset": 2048, 00:20:16.590 "data_size": 63488 00:20:16.590 } 00:20:16.590 ] 00:20:16.590 }' 00:20:16.590 02:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:16.590 02:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.273 02:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:17.274 02:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:17.274 02:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:17.274 02:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:17.274 02:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:17.274 02:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:17.274 02:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:17.274 02:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:17.274 [2024-07-11 02:26:07.663721] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:17.274 02:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:17.274 "name": "Existed_Raid", 00:20:17.274 "aliases": [ 00:20:17.274 "6de1325a-dffe-458b-8651-f80ff1f34dc3" 00:20:17.274 ], 00:20:17.274 "product_name": "Raid Volume", 00:20:17.274 "block_size": 512, 00:20:17.274 "num_blocks": 63488, 00:20:17.274 "uuid": "6de1325a-dffe-458b-8651-f80ff1f34dc3", 00:20:17.274 "assigned_rate_limits": { 00:20:17.274 "rw_ios_per_sec": 0, 00:20:17.274 "rw_mbytes_per_sec": 0, 00:20:17.274 "r_mbytes_per_sec": 0, 00:20:17.274 "w_mbytes_per_sec": 0 00:20:17.274 }, 00:20:17.274 "claimed": false, 00:20:17.274 "zoned": false, 00:20:17.274 "supported_io_types": { 00:20:17.274 "read": true, 00:20:17.274 "write": true, 00:20:17.274 "unmap": false, 00:20:17.274 "flush": false, 00:20:17.274 "reset": true, 00:20:17.274 "nvme_admin": false, 00:20:17.274 "nvme_io": false, 00:20:17.274 "nvme_io_md": 
false, 00:20:17.274 "write_zeroes": true, 00:20:17.274 "zcopy": false, 00:20:17.274 "get_zone_info": false, 00:20:17.274 "zone_management": false, 00:20:17.274 "zone_append": false, 00:20:17.274 "compare": false, 00:20:17.274 "compare_and_write": false, 00:20:17.274 "abort": false, 00:20:17.274 "seek_hole": false, 00:20:17.274 "seek_data": false, 00:20:17.274 "copy": false, 00:20:17.274 "nvme_iov_md": false 00:20:17.274 }, 00:20:17.274 "memory_domains": [ 00:20:17.274 { 00:20:17.274 "dma_device_id": "system", 00:20:17.274 "dma_device_type": 1 00:20:17.274 }, 00:20:17.274 { 00:20:17.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.274 "dma_device_type": 2 00:20:17.274 }, 00:20:17.274 { 00:20:17.274 "dma_device_id": "system", 00:20:17.274 "dma_device_type": 1 00:20:17.274 }, 00:20:17.274 { 00:20:17.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.274 "dma_device_type": 2 00:20:17.274 }, 00:20:17.274 { 00:20:17.274 "dma_device_id": "system", 00:20:17.274 "dma_device_type": 1 00:20:17.274 }, 00:20:17.274 { 00:20:17.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.274 "dma_device_type": 2 00:20:17.274 } 00:20:17.274 ], 00:20:17.274 "driver_specific": { 00:20:17.274 "raid": { 00:20:17.274 "uuid": "6de1325a-dffe-458b-8651-f80ff1f34dc3", 00:20:17.274 "strip_size_kb": 0, 00:20:17.274 "state": "online", 00:20:17.274 "raid_level": "raid1", 00:20:17.274 "superblock": true, 00:20:17.274 "num_base_bdevs": 3, 00:20:17.274 "num_base_bdevs_discovered": 3, 00:20:17.274 "num_base_bdevs_operational": 3, 00:20:17.274 "base_bdevs_list": [ 00:20:17.274 { 00:20:17.274 "name": "NewBaseBdev", 00:20:17.274 "uuid": "bd1fcc20-edc7-4446-a491-9d7fec6f4560", 00:20:17.274 "is_configured": true, 00:20:17.274 "data_offset": 2048, 00:20:17.274 "data_size": 63488 00:20:17.274 }, 00:20:17.274 { 00:20:17.274 "name": "BaseBdev2", 00:20:17.274 "uuid": "b8f49e9c-f107-4768-a2fc-99b87e5117a6", 00:20:17.274 "is_configured": true, 00:20:17.274 "data_offset": 2048, 00:20:17.274 "data_size": 63488 00:20:17.274 }, 00:20:17.274 { 00:20:17.274 "name": "BaseBdev3", 00:20:17.274 "uuid": "d2e6c9cc-3f8a-445a-ba08-6f70fee32dea", 00:20:17.274 "is_configured": true, 00:20:17.274 "data_offset": 2048, 00:20:17.274 "data_size": 63488 00:20:17.274 } 00:20:17.274 ] 00:20:17.274 } 00:20:17.274 } 00:20:17.274 }' 00:20:17.274 02:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:17.537 02:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:17.537 BaseBdev2 00:20:17.537 BaseBdev3' 00:20:17.537 02:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:17.537 02:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:17.537 02:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:17.796 02:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:17.796 "name": "NewBaseBdev", 00:20:17.796 "aliases": [ 00:20:17.796 "bd1fcc20-edc7-4446-a491-9d7fec6f4560" 00:20:17.796 ], 00:20:17.796 "product_name": "Malloc disk", 00:20:17.796 "block_size": 512, 00:20:17.796 "num_blocks": 65536, 00:20:17.796 "uuid": "bd1fcc20-edc7-4446-a491-9d7fec6f4560", 00:20:17.796 "assigned_rate_limits": { 00:20:17.796 
"rw_ios_per_sec": 0, 00:20:17.796 "rw_mbytes_per_sec": 0, 00:20:17.796 "r_mbytes_per_sec": 0, 00:20:17.796 "w_mbytes_per_sec": 0 00:20:17.796 }, 00:20:17.796 "claimed": true, 00:20:17.796 "claim_type": "exclusive_write", 00:20:17.796 "zoned": false, 00:20:17.796 "supported_io_types": { 00:20:17.796 "read": true, 00:20:17.796 "write": true, 00:20:17.796 "unmap": true, 00:20:17.796 "flush": true, 00:20:17.796 "reset": true, 00:20:17.796 "nvme_admin": false, 00:20:17.796 "nvme_io": false, 00:20:17.796 "nvme_io_md": false, 00:20:17.796 "write_zeroes": true, 00:20:17.796 "zcopy": true, 00:20:17.796 "get_zone_info": false, 00:20:17.796 "zone_management": false, 00:20:17.796 "zone_append": false, 00:20:17.796 "compare": false, 00:20:17.796 "compare_and_write": false, 00:20:17.796 "abort": true, 00:20:17.796 "seek_hole": false, 00:20:17.796 "seek_data": false, 00:20:17.796 "copy": true, 00:20:17.796 "nvme_iov_md": false 00:20:17.796 }, 00:20:17.796 "memory_domains": [ 00:20:17.796 { 00:20:17.796 "dma_device_id": "system", 00:20:17.796 "dma_device_type": 1 00:20:17.797 }, 00:20:17.797 { 00:20:17.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.797 "dma_device_type": 2 00:20:17.797 } 00:20:17.797 ], 00:20:17.797 "driver_specific": {} 00:20:17.797 }' 00:20:17.797 02:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:17.797 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:17.797 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:17.797 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:17.797 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:17.797 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:17.797 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:18.056 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:18.056 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:18.056 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:18.056 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:18.056 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:18.056 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:18.056 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:18.056 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:18.316 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:18.316 "name": "BaseBdev2", 00:20:18.316 "aliases": [ 00:20:18.316 "b8f49e9c-f107-4768-a2fc-99b87e5117a6" 00:20:18.316 ], 00:20:18.316 "product_name": "Malloc disk", 00:20:18.316 "block_size": 512, 00:20:18.316 "num_blocks": 65536, 00:20:18.316 "uuid": "b8f49e9c-f107-4768-a2fc-99b87e5117a6", 00:20:18.316 "assigned_rate_limits": { 00:20:18.316 "rw_ios_per_sec": 0, 00:20:18.316 "rw_mbytes_per_sec": 0, 00:20:18.316 "r_mbytes_per_sec": 0, 00:20:18.316 "w_mbytes_per_sec": 0 
00:20:18.316 }, 00:20:18.316 "claimed": true, 00:20:18.316 "claim_type": "exclusive_write", 00:20:18.316 "zoned": false, 00:20:18.316 "supported_io_types": { 00:20:18.316 "read": true, 00:20:18.316 "write": true, 00:20:18.316 "unmap": true, 00:20:18.316 "flush": true, 00:20:18.316 "reset": true, 00:20:18.316 "nvme_admin": false, 00:20:18.316 "nvme_io": false, 00:20:18.316 "nvme_io_md": false, 00:20:18.316 "write_zeroes": true, 00:20:18.316 "zcopy": true, 00:20:18.316 "get_zone_info": false, 00:20:18.316 "zone_management": false, 00:20:18.316 "zone_append": false, 00:20:18.316 "compare": false, 00:20:18.316 "compare_and_write": false, 00:20:18.316 "abort": true, 00:20:18.316 "seek_hole": false, 00:20:18.316 "seek_data": false, 00:20:18.316 "copy": true, 00:20:18.316 "nvme_iov_md": false 00:20:18.316 }, 00:20:18.316 "memory_domains": [ 00:20:18.316 { 00:20:18.316 "dma_device_id": "system", 00:20:18.316 "dma_device_type": 1 00:20:18.316 }, 00:20:18.316 { 00:20:18.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.316 "dma_device_type": 2 00:20:18.316 } 00:20:18.316 ], 00:20:18.316 "driver_specific": {} 00:20:18.316 }' 00:20:18.316 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:18.316 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:18.316 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:18.316 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:18.576 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:18.576 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:18.576 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:18.576 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:18.576 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:18.576 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:18.576 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:18.576 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:18.576 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:18.576 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:18.576 02:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:18.836 02:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:18.836 "name": "BaseBdev3", 00:20:18.836 "aliases": [ 00:20:18.836 "d2e6c9cc-3f8a-445a-ba08-6f70fee32dea" 00:20:18.836 ], 00:20:18.836 "product_name": "Malloc disk", 00:20:18.836 "block_size": 512, 00:20:18.836 "num_blocks": 65536, 00:20:18.836 "uuid": "d2e6c9cc-3f8a-445a-ba08-6f70fee32dea", 00:20:18.836 "assigned_rate_limits": { 00:20:18.836 "rw_ios_per_sec": 0, 00:20:18.836 "rw_mbytes_per_sec": 0, 00:20:18.836 "r_mbytes_per_sec": 0, 00:20:18.836 "w_mbytes_per_sec": 0 00:20:18.836 }, 00:20:18.836 "claimed": true, 00:20:18.836 "claim_type": "exclusive_write", 00:20:18.836 "zoned": false, 00:20:18.836 
"supported_io_types": { 00:20:18.836 "read": true, 00:20:18.836 "write": true, 00:20:18.836 "unmap": true, 00:20:18.836 "flush": true, 00:20:18.836 "reset": true, 00:20:18.836 "nvme_admin": false, 00:20:18.836 "nvme_io": false, 00:20:18.836 "nvme_io_md": false, 00:20:18.836 "write_zeroes": true, 00:20:18.836 "zcopy": true, 00:20:18.836 "get_zone_info": false, 00:20:18.836 "zone_management": false, 00:20:18.836 "zone_append": false, 00:20:18.836 "compare": false, 00:20:18.836 "compare_and_write": false, 00:20:18.836 "abort": true, 00:20:18.836 "seek_hole": false, 00:20:18.836 "seek_data": false, 00:20:18.836 "copy": true, 00:20:18.836 "nvme_iov_md": false 00:20:18.836 }, 00:20:18.836 "memory_domains": [ 00:20:18.836 { 00:20:18.836 "dma_device_id": "system", 00:20:18.836 "dma_device_type": 1 00:20:18.836 }, 00:20:18.836 { 00:20:18.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.836 "dma_device_type": 2 00:20:18.836 } 00:20:18.836 ], 00:20:18.836 "driver_specific": {} 00:20:18.836 }' 00:20:18.836 02:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:18.836 02:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:18.836 02:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:18.836 02:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:19.096 02:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:19.096 02:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:19.096 02:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:19.096 02:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:19.096 02:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:19.096 02:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:19.096 02:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:19.096 02:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:19.096 02:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:19.356 [2024-07-11 02:26:09.704907] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:19.356 [2024-07-11 02:26:09.704942] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:19.356 [2024-07-11 02:26:09.704996] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:19.356 [2024-07-11 02:26:09.705276] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:19.356 [2024-07-11 02:26:09.705290] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xebfbc0 name Existed_Raid, state offline 00:20:19.356 02:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1944813 00:20:19.356 02:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1944813 ']' 00:20:19.356 02:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 1944813 00:20:19.356 02:26:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:20:19.356 02:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:19.356 02:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1944813 00:20:19.356 02:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:19.356 02:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:19.356 02:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1944813' 00:20:19.356 killing process with pid 1944813 00:20:19.356 02:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 1944813 00:20:19.356 [2024-07-11 02:26:09.769953] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:19.356 02:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 1944813 00:20:19.615 [2024-07-11 02:26:09.829495] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:19.876 02:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:20:19.876 00:20:19.876 real 0m32.416s 00:20:19.876 user 0m59.326s 00:20:19.876 sys 0m5.752s 00:20:19.876 02:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:19.876 02:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.876 ************************************ 00:20:19.876 END TEST raid_state_function_test_sb 00:20:19.876 ************************************ 00:20:19.876 02:26:10 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:19.876 02:26:10 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:20:19.876 02:26:10 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:19.876 02:26:10 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:19.876 02:26:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:19.876 ************************************ 00:20:19.876 START TEST raid_superblock_test 00:20:19.876 ************************************ 00:20:19.876 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 3 00:20:19.876 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:20:19.876 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:20:19.876 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:20:19.876 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:20:19.876 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:20:19.876 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:20:19.876 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:20:19.876 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:20:19.876 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:20:19.876 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:20:19.876 02:26:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:20:19.876 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:20:19.876 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:20:19.876 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:20:19.876 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:20:19.876 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1949587 00:20:19.876 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1949587 /var/tmp/spdk-raid.sock 00:20:19.876 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:19.876 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 1949587 ']' 00:20:19.876 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:19.876 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:19.876 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:19.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:19.876 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:19.876 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.136 [2024-07-11 02:26:10.343540] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
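The trace above shows raid_superblock_test bringing up a dedicated SPDK target before any RAID work starts: it launches the bare bdev_svc app on a private RPC socket with bdev_raid debug logging enabled, then blocks in waitforlisten until that socket answers. A minimal sketch of that bring-up, assuming the workspace paths and the waitforlisten helper from autotest_common.sh seen in the trace (the backgrounding and variable names here are illustrative, not the script verbatim):

    rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk

    # Start a bare bdev service on a private RPC socket, with raid debug logs.
    $rootdir/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!

    # Block until the UNIX-domain socket accepts JSON-RPC requests.
    waitforlisten $raid_pid /var/tmp/spdk-raid.sock

    # Every later RPC in the test targets this socket explicitly:
    $rootdir/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1

Once the socket is live, the test proceeds to build its base bdevs, as the trace below continues.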
00:20:20.136 [2024-07-11 02:26:10.343612] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1949587 ] 00:20:20.136 [2024-07-11 02:26:10.483511] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.136 [2024-07-11 02:26:10.536364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.396 [2024-07-11 02:26:10.594395] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:20.396 [2024-07-11 02:26:10.594421] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:20.964 02:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:20.964 02:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:20:20.964 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:20:20.964 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:20.964 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:20:20.964 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:20:20.964 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:20.964 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:20.964 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:20.964 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:20.964 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:21.223 malloc1 00:20:21.223 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:21.223 [2024-07-11 02:26:11.565994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:21.223 [2024-07-11 02:26:11.566042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.223 [2024-07-11 02:26:11.566060] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2490de0 00:20:21.223 [2024-07-11 02:26:11.566072] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.223 [2024-07-11 02:26:11.567693] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.223 [2024-07-11 02:26:11.567726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:21.223 pt1 00:20:21.223 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:21.223 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:21.223 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:20:21.223 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:20:21.223 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # 
local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:21.223 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:21.224 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:21.224 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:21.224 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:21.483 malloc2 00:20:21.483 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:21.742 [2024-07-11 02:26:11.935658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:21.742 [2024-07-11 02:26:11.935701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.742 [2024-07-11 02:26:11.935718] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2488380 00:20:21.742 [2024-07-11 02:26:11.935729] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.742 [2024-07-11 02:26:11.937052] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.742 [2024-07-11 02:26:11.937078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:21.742 pt2 00:20:21.742 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:21.742 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:21.742 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:20:21.742 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:20:21.742 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:21.742 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:21.742 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:21.742 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:21.742 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:22.001 malloc3 00:20:22.001 02:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:22.263 [2024-07-11 02:26:12.449715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:22.263 [2024-07-11 02:26:12.449768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.263 [2024-07-11 02:26:12.449786] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x248afb0 00:20:22.263 [2024-07-11 02:26:12.449798] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.263 [2024-07-11 02:26:12.451247] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.263 [2024-07-11 02:26:12.451275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:22.263 pt3 00:20:22.263 02:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:22.263 02:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:22.263 02:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:20:22.263 [2024-07-11 02:26:12.638231] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:22.263 [2024-07-11 02:26:12.639411] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:22.263 [2024-07-11 02:26:12.639462] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:22.263 [2024-07-11 02:26:12.639610] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x248d2d0 00:20:22.263 [2024-07-11 02:26:12.639621] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:22.263 [2024-07-11 02:26:12.639815] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2488d40 00:20:22.263 [2024-07-11 02:26:12.639958] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x248d2d0 00:20:22.263 [2024-07-11 02:26:12.639968] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x248d2d0 00:20:22.263 [2024-07-11 02:26:12.640062] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:22.263 02:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:22.263 02:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:22.263 02:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:22.263 02:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:22.263 02:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:22.263 02:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:22.263 02:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:22.263 02:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:22.263 02:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:22.264 02:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:22.264 02:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.264 02:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.522 02:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:22.522 "name": "raid_bdev1", 00:20:22.522 "uuid": "112753e1-041a-49cd-b692-ed16696af937", 00:20:22.522 "strip_size_kb": 0, 00:20:22.522 "state": "online", 00:20:22.522 "raid_level": "raid1", 00:20:22.522 "superblock": true, 00:20:22.522 "num_base_bdevs": 3, 00:20:22.522 
"num_base_bdevs_discovered": 3, 00:20:22.522 "num_base_bdevs_operational": 3, 00:20:22.522 "base_bdevs_list": [ 00:20:22.522 { 00:20:22.522 "name": "pt1", 00:20:22.522 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:22.522 "is_configured": true, 00:20:22.522 "data_offset": 2048, 00:20:22.522 "data_size": 63488 00:20:22.522 }, 00:20:22.522 { 00:20:22.522 "name": "pt2", 00:20:22.522 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:22.522 "is_configured": true, 00:20:22.522 "data_offset": 2048, 00:20:22.522 "data_size": 63488 00:20:22.522 }, 00:20:22.522 { 00:20:22.522 "name": "pt3", 00:20:22.522 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:22.522 "is_configured": true, 00:20:22.522 "data_offset": 2048, 00:20:22.522 "data_size": 63488 00:20:22.522 } 00:20:22.522 ] 00:20:22.522 }' 00:20:22.522 02:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:22.522 02:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.091 02:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:20:23.091 02:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:23.091 02:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:23.091 02:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:23.091 02:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:23.091 02:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:23.091 02:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:23.091 02:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:23.350 [2024-07-11 02:26:13.637133] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:23.350 02:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:23.350 "name": "raid_bdev1", 00:20:23.350 "aliases": [ 00:20:23.350 "112753e1-041a-49cd-b692-ed16696af937" 00:20:23.350 ], 00:20:23.350 "product_name": "Raid Volume", 00:20:23.350 "block_size": 512, 00:20:23.350 "num_blocks": 63488, 00:20:23.350 "uuid": "112753e1-041a-49cd-b692-ed16696af937", 00:20:23.350 "assigned_rate_limits": { 00:20:23.350 "rw_ios_per_sec": 0, 00:20:23.350 "rw_mbytes_per_sec": 0, 00:20:23.350 "r_mbytes_per_sec": 0, 00:20:23.350 "w_mbytes_per_sec": 0 00:20:23.350 }, 00:20:23.350 "claimed": false, 00:20:23.350 "zoned": false, 00:20:23.350 "supported_io_types": { 00:20:23.350 "read": true, 00:20:23.350 "write": true, 00:20:23.350 "unmap": false, 00:20:23.350 "flush": false, 00:20:23.350 "reset": true, 00:20:23.350 "nvme_admin": false, 00:20:23.350 "nvme_io": false, 00:20:23.350 "nvme_io_md": false, 00:20:23.350 "write_zeroes": true, 00:20:23.350 "zcopy": false, 00:20:23.350 "get_zone_info": false, 00:20:23.350 "zone_management": false, 00:20:23.350 "zone_append": false, 00:20:23.350 "compare": false, 00:20:23.350 "compare_and_write": false, 00:20:23.350 "abort": false, 00:20:23.350 "seek_hole": false, 00:20:23.350 "seek_data": false, 00:20:23.350 "copy": false, 00:20:23.350 "nvme_iov_md": false 00:20:23.350 }, 00:20:23.350 "memory_domains": [ 00:20:23.350 { 00:20:23.350 "dma_device_id": "system", 00:20:23.350 "dma_device_type": 1 00:20:23.350 }, 
00:20:23.350 { 00:20:23.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.350 "dma_device_type": 2 00:20:23.350 }, 00:20:23.350 { 00:20:23.350 "dma_device_id": "system", 00:20:23.350 "dma_device_type": 1 00:20:23.350 }, 00:20:23.350 { 00:20:23.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.350 "dma_device_type": 2 00:20:23.350 }, 00:20:23.350 { 00:20:23.350 "dma_device_id": "system", 00:20:23.350 "dma_device_type": 1 00:20:23.350 }, 00:20:23.350 { 00:20:23.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.350 "dma_device_type": 2 00:20:23.350 } 00:20:23.350 ], 00:20:23.350 "driver_specific": { 00:20:23.350 "raid": { 00:20:23.350 "uuid": "112753e1-041a-49cd-b692-ed16696af937", 00:20:23.350 "strip_size_kb": 0, 00:20:23.350 "state": "online", 00:20:23.350 "raid_level": "raid1", 00:20:23.350 "superblock": true, 00:20:23.350 "num_base_bdevs": 3, 00:20:23.350 "num_base_bdevs_discovered": 3, 00:20:23.350 "num_base_bdevs_operational": 3, 00:20:23.350 "base_bdevs_list": [ 00:20:23.350 { 00:20:23.350 "name": "pt1", 00:20:23.350 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:23.350 "is_configured": true, 00:20:23.350 "data_offset": 2048, 00:20:23.351 "data_size": 63488 00:20:23.351 }, 00:20:23.351 { 00:20:23.351 "name": "pt2", 00:20:23.351 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:23.351 "is_configured": true, 00:20:23.351 "data_offset": 2048, 00:20:23.351 "data_size": 63488 00:20:23.351 }, 00:20:23.351 { 00:20:23.351 "name": "pt3", 00:20:23.351 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:23.351 "is_configured": true, 00:20:23.351 "data_offset": 2048, 00:20:23.351 "data_size": 63488 00:20:23.351 } 00:20:23.351 ] 00:20:23.351 } 00:20:23.351 } 00:20:23.351 }' 00:20:23.351 02:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:23.351 02:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:23.351 pt2 00:20:23.351 pt3' 00:20:23.351 02:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:23.351 02:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:23.351 02:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:23.609 02:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:23.609 "name": "pt1", 00:20:23.609 "aliases": [ 00:20:23.609 "00000000-0000-0000-0000-000000000001" 00:20:23.609 ], 00:20:23.609 "product_name": "passthru", 00:20:23.609 "block_size": 512, 00:20:23.609 "num_blocks": 65536, 00:20:23.609 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:23.609 "assigned_rate_limits": { 00:20:23.609 "rw_ios_per_sec": 0, 00:20:23.609 "rw_mbytes_per_sec": 0, 00:20:23.609 "r_mbytes_per_sec": 0, 00:20:23.609 "w_mbytes_per_sec": 0 00:20:23.609 }, 00:20:23.609 "claimed": true, 00:20:23.609 "claim_type": "exclusive_write", 00:20:23.609 "zoned": false, 00:20:23.609 "supported_io_types": { 00:20:23.609 "read": true, 00:20:23.609 "write": true, 00:20:23.609 "unmap": true, 00:20:23.609 "flush": true, 00:20:23.609 "reset": true, 00:20:23.609 "nvme_admin": false, 00:20:23.609 "nvme_io": false, 00:20:23.609 "nvme_io_md": false, 00:20:23.609 "write_zeroes": true, 00:20:23.609 "zcopy": true, 00:20:23.609 "get_zone_info": false, 00:20:23.609 "zone_management": false, 00:20:23.609 
"zone_append": false, 00:20:23.609 "compare": false, 00:20:23.609 "compare_and_write": false, 00:20:23.609 "abort": true, 00:20:23.609 "seek_hole": false, 00:20:23.609 "seek_data": false, 00:20:23.609 "copy": true, 00:20:23.609 "nvme_iov_md": false 00:20:23.609 }, 00:20:23.609 "memory_domains": [ 00:20:23.609 { 00:20:23.609 "dma_device_id": "system", 00:20:23.609 "dma_device_type": 1 00:20:23.610 }, 00:20:23.610 { 00:20:23.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.610 "dma_device_type": 2 00:20:23.610 } 00:20:23.610 ], 00:20:23.610 "driver_specific": { 00:20:23.610 "passthru": { 00:20:23.610 "name": "pt1", 00:20:23.610 "base_bdev_name": "malloc1" 00:20:23.610 } 00:20:23.610 } 00:20:23.610 }' 00:20:23.610 02:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:23.610 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:23.869 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:23.869 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:23.869 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:23.869 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:23.869 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:23.869 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:23.869 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:23.869 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:23.869 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:24.128 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:24.128 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:24.128 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:24.128 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:24.128 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:24.128 "name": "pt2", 00:20:24.128 "aliases": [ 00:20:24.128 "00000000-0000-0000-0000-000000000002" 00:20:24.128 ], 00:20:24.128 "product_name": "passthru", 00:20:24.128 "block_size": 512, 00:20:24.128 "num_blocks": 65536, 00:20:24.128 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:24.128 "assigned_rate_limits": { 00:20:24.128 "rw_ios_per_sec": 0, 00:20:24.128 "rw_mbytes_per_sec": 0, 00:20:24.128 "r_mbytes_per_sec": 0, 00:20:24.128 "w_mbytes_per_sec": 0 00:20:24.128 }, 00:20:24.128 "claimed": true, 00:20:24.128 "claim_type": "exclusive_write", 00:20:24.128 "zoned": false, 00:20:24.128 "supported_io_types": { 00:20:24.128 "read": true, 00:20:24.128 "write": true, 00:20:24.128 "unmap": true, 00:20:24.128 "flush": true, 00:20:24.128 "reset": true, 00:20:24.128 "nvme_admin": false, 00:20:24.128 "nvme_io": false, 00:20:24.128 "nvme_io_md": false, 00:20:24.128 "write_zeroes": true, 00:20:24.128 "zcopy": true, 00:20:24.128 "get_zone_info": false, 00:20:24.128 "zone_management": false, 00:20:24.128 "zone_append": false, 00:20:24.128 "compare": false, 00:20:24.128 "compare_and_write": false, 00:20:24.128 "abort": true, 00:20:24.128 
"seek_hole": false, 00:20:24.128 "seek_data": false, 00:20:24.128 "copy": true, 00:20:24.128 "nvme_iov_md": false 00:20:24.128 }, 00:20:24.128 "memory_domains": [ 00:20:24.128 { 00:20:24.128 "dma_device_id": "system", 00:20:24.128 "dma_device_type": 1 00:20:24.128 }, 00:20:24.128 { 00:20:24.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.128 "dma_device_type": 2 00:20:24.128 } 00:20:24.128 ], 00:20:24.128 "driver_specific": { 00:20:24.128 "passthru": { 00:20:24.128 "name": "pt2", 00:20:24.128 "base_bdev_name": "malloc2" 00:20:24.128 } 00:20:24.128 } 00:20:24.128 }' 00:20:24.128 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:24.387 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:24.387 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:24.387 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:24.387 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:24.387 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:24.387 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:24.387 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:24.646 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:24.646 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:24.646 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:24.646 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:24.646 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:24.646 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:24.646 02:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:24.906 02:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:24.906 "name": "pt3", 00:20:24.906 "aliases": [ 00:20:24.906 "00000000-0000-0000-0000-000000000003" 00:20:24.906 ], 00:20:24.906 "product_name": "passthru", 00:20:24.906 "block_size": 512, 00:20:24.906 "num_blocks": 65536, 00:20:24.906 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:24.906 "assigned_rate_limits": { 00:20:24.906 "rw_ios_per_sec": 0, 00:20:24.906 "rw_mbytes_per_sec": 0, 00:20:24.906 "r_mbytes_per_sec": 0, 00:20:24.906 "w_mbytes_per_sec": 0 00:20:24.906 }, 00:20:24.906 "claimed": true, 00:20:24.906 "claim_type": "exclusive_write", 00:20:24.906 "zoned": false, 00:20:24.906 "supported_io_types": { 00:20:24.906 "read": true, 00:20:24.906 "write": true, 00:20:24.906 "unmap": true, 00:20:24.906 "flush": true, 00:20:24.906 "reset": true, 00:20:24.906 "nvme_admin": false, 00:20:24.906 "nvme_io": false, 00:20:24.906 "nvme_io_md": false, 00:20:24.906 "write_zeroes": true, 00:20:24.906 "zcopy": true, 00:20:24.906 "get_zone_info": false, 00:20:24.906 "zone_management": false, 00:20:24.906 "zone_append": false, 00:20:24.906 "compare": false, 00:20:24.906 "compare_and_write": false, 00:20:24.906 "abort": true, 00:20:24.906 "seek_hole": false, 00:20:24.906 "seek_data": false, 00:20:24.906 "copy": true, 00:20:24.906 "nvme_iov_md": false 00:20:24.906 }, 
00:20:24.906 "memory_domains": [ 00:20:24.906 { 00:20:24.906 "dma_device_id": "system", 00:20:24.906 "dma_device_type": 1 00:20:24.906 }, 00:20:24.906 { 00:20:24.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.906 "dma_device_type": 2 00:20:24.906 } 00:20:24.906 ], 00:20:24.906 "driver_specific": { 00:20:24.906 "passthru": { 00:20:24.906 "name": "pt3", 00:20:24.906 "base_bdev_name": "malloc3" 00:20:24.906 } 00:20:24.906 } 00:20:24.906 }' 00:20:24.906 02:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:24.906 02:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:24.906 02:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:24.906 02:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:25.165 02:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:25.165 02:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:25.165 02:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:25.165 02:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:25.165 02:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:25.165 02:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:25.165 02:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:25.165 02:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:25.165 02:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:25.165 02:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:20:25.424 [2024-07-11 02:26:15.815058] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:25.424 02:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=112753e1-041a-49cd-b692-ed16696af937 00:20:25.424 02:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 112753e1-041a-49cd-b692-ed16696af937 ']' 00:20:25.424 02:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:25.682 [2024-07-11 02:26:16.067446] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:25.682 [2024-07-11 02:26:16.067462] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:25.682 [2024-07-11 02:26:16.067511] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:25.682 [2024-07-11 02:26:16.067578] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:25.682 [2024-07-11 02:26:16.067590] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x248d2d0 name raid_bdev1, state offline 00:20:25.682 02:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.682 02:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:20:25.942 02:26:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@441 -- # raid_bdev= 00:20:25.942 02:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:20:25.942 02:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:25.942 02:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:26.201 02:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:26.201 02:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:26.461 02:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:26.461 02:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:26.721 02:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:26.721 02:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:26.981 02:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:20:26.982 02:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:26.982 02:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:20:26.982 02:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:26.982 02:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:26.982 02:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:26.982 02:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:26.982 02:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:26.982 02:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:26.982 02:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:26.982 02:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:26.982 02:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:20:26.982 02:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:27.241 
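Note: the @456 step above runs bdev_raid_create under the suite's NOT wrapper (common/autotest_common.sh, the @636-@651 trace): the command is expected to fail, and the wrapper inverts its exit status so the failure counts as a pass. A minimal sketch of the pattern, assuming plain bash; the body below is illustrative, not the suite's exact implementation:

    NOT() {
        local es=0
        "$@" || es=$?    # run the wrapped command and capture its exit status
        (( es != 0 ))    # succeed only if the wrapped command failed
    }
    # Usage, matching the trace above:
    #   NOT rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
    #       -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1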
[2024-07-11 02:26:17.555317] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:27.241 [2024-07-11 02:26:17.556680] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:27.241 [2024-07-11 02:26:17.556723] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:27.241 [2024-07-11 02:26:17.556780] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:27.241 [2024-07-11 02:26:17.556821] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:27.241 [2024-07-11 02:26:17.556844] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:27.241 [2024-07-11 02:26:17.556862] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:27.241 [2024-07-11 02:26:17.556872] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2491f10 name raid_bdev1, state configuring 00:20:27.241 request: 00:20:27.241 { 00:20:27.241 "name": "raid_bdev1", 00:20:27.241 "raid_level": "raid1", 00:20:27.241 "base_bdevs": [ 00:20:27.241 "malloc1", 00:20:27.241 "malloc2", 00:20:27.241 "malloc3" 00:20:27.241 ], 00:20:27.241 "superblock": false, 00:20:27.241 "method": "bdev_raid_create", 00:20:27.241 "req_id": 1 00:20:27.241 } 00:20:27.241 Got JSON-RPC error response 00:20:27.241 response: 00:20:27.241 { 00:20:27.241 "code": -17, 00:20:27.241 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:27.241 } 00:20:27.241 02:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:20:27.241 02:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:27.241 02:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:27.241 02:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:27.241 02:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.241 02:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:20:27.500 02:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:20:27.500 02:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:20:27.500 02:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:27.759 [2024-07-11 02:26:18.036686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:27.759 [2024-07-11 02:26:18.036729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.759 [2024-07-11 02:26:18.036746] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2491120 00:20:27.759 [2024-07-11 02:26:18.036764] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.759 [2024-07-11 02:26:18.038370] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.759 [2024-07-11 02:26:18.038399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:27.759 
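Note: the create above fails with JSON-RPC error -17 (File exists) because each malloc bdev still carries the superblock of the raid_bdev1 deleted earlier, and the superblock check at bdev_raid.c:3106 refuses to claim them into a new array. Reproducing the expected failure by hand, with the same socket and flags used throughout this run:

    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    if "$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_create \
            -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1; then
        echo "unexpected: create succeeded on bdevs with stale superblocks" >&2
        exit 1
    fi
    # A non-zero exit here (rpc.py surfacing the -17 error) is the expected path.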
[2024-07-11 02:26:18.038463] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:27.759 [2024-07-11 02:26:18.038489] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:27.759 pt1 00:20:27.759 02:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:27.759 02:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:27.759 02:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:27.759 02:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:27.759 02:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:27.759 02:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:27.759 02:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:27.759 02:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:27.759 02:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:27.759 02:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:27.759 02:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.759 02:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.041 02:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:28.041 "name": "raid_bdev1", 00:20:28.041 "uuid": "112753e1-041a-49cd-b692-ed16696af937", 00:20:28.041 "strip_size_kb": 0, 00:20:28.041 "state": "configuring", 00:20:28.041 "raid_level": "raid1", 00:20:28.041 "superblock": true, 00:20:28.041 "num_base_bdevs": 3, 00:20:28.041 "num_base_bdevs_discovered": 1, 00:20:28.041 "num_base_bdevs_operational": 3, 00:20:28.041 "base_bdevs_list": [ 00:20:28.041 { 00:20:28.041 "name": "pt1", 00:20:28.041 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:28.041 "is_configured": true, 00:20:28.041 "data_offset": 2048, 00:20:28.041 "data_size": 63488 00:20:28.041 }, 00:20:28.041 { 00:20:28.041 "name": null, 00:20:28.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:28.041 "is_configured": false, 00:20:28.041 "data_offset": 2048, 00:20:28.041 "data_size": 63488 00:20:28.041 }, 00:20:28.041 { 00:20:28.041 "name": null, 00:20:28.041 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:28.041 "is_configured": false, 00:20:28.041 "data_offset": 2048, 00:20:28.041 "data_size": 63488 00:20:28.041 } 00:20:28.041 ] 00:20:28.041 }' 00:20:28.041 02:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:28.041 02:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.611 02:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:20:28.611 02:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:28.870 [2024-07-11 02:26:19.055396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:28.870 
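Note: verify_raid_bdev_state (@116-@128 above) re-reads the raid bdev and compares fields against the expected values -- here state "configuring" with 1 of 3 base bdevs discovered, since only pt1 has been re-created so far. A condensed sketch of that readback, assuming jq is available; the field names match the raid_bdev_info JSON printed above, and the per-field checks are a simplification of the helper:

    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    info=$("$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
           | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r .state <<< "$info") == configuring ]]             # expected state
    [[ $(jq -r .num_base_bdevs_discovered <<< "$info") -eq 1 ]]  # only pt1 so far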
[2024-07-11 02:26:19.055446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.870 [2024-07-11 02:26:19.055464] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x248a820 00:20:28.870 [2024-07-11 02:26:19.055476] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.870 [2024-07-11 02:26:19.055818] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:28.870 [2024-07-11 02:26:19.055836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:28.870 [2024-07-11 02:26:19.055896] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:28.870 [2024-07-11 02:26:19.055915] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:28.870 pt2 00:20:28.870 02:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:28.870 [2024-07-11 02:26:19.223853] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:28.870 02:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:28.870 02:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:28.870 02:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:28.870 02:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:28.870 02:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:28.870 02:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:28.870 02:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:28.870 02:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:28.870 02:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:28.870 02:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:28.870 02:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.870 02:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.129 02:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:29.129 "name": "raid_bdev1", 00:20:29.129 "uuid": "112753e1-041a-49cd-b692-ed16696af937", 00:20:29.129 "strip_size_kb": 0, 00:20:29.129 "state": "configuring", 00:20:29.129 "raid_level": "raid1", 00:20:29.129 "superblock": true, 00:20:29.129 "num_base_bdevs": 3, 00:20:29.129 "num_base_bdevs_discovered": 1, 00:20:29.129 "num_base_bdevs_operational": 3, 00:20:29.129 "base_bdevs_list": [ 00:20:29.129 { 00:20:29.129 "name": "pt1", 00:20:29.129 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:29.129 "is_configured": true, 00:20:29.129 "data_offset": 2048, 00:20:29.129 "data_size": 63488 00:20:29.129 }, 00:20:29.129 { 00:20:29.129 "name": null, 00:20:29.129 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:29.129 "is_configured": false, 00:20:29.129 "data_offset": 2048, 00:20:29.129 "data_size": 63488 00:20:29.129 }, 00:20:29.129 { 00:20:29.129 
"name": null, 00:20:29.129 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:29.129 "is_configured": false, 00:20:29.129 "data_offset": 2048, 00:20:29.129 "data_size": 63488 00:20:29.129 } 00:20:29.129 ] 00:20:29.129 }' 00:20:29.129 02:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:29.129 02:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.697 02:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:20:29.697 02:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:29.697 02:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:29.956 [2024-07-11 02:26:20.230527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:29.956 [2024-07-11 02:26:20.230585] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:29.956 [2024-07-11 02:26:20.230603] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22dff10 00:20:29.956 [2024-07-11 02:26:20.230615] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:29.956 [2024-07-11 02:26:20.230958] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:29.956 [2024-07-11 02:26:20.230975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:29.956 [2024-07-11 02:26:20.231037] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:29.956 [2024-07-11 02:26:20.231055] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:29.956 pt2 00:20:29.956 02:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:29.956 02:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:29.956 02:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:30.214 [2024-07-11 02:26:20.475177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:30.214 [2024-07-11 02:26:20.475211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.214 [2024-07-11 02:26:20.475226] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22e0160 00:20:30.214 [2024-07-11 02:26:20.475237] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.214 [2024-07-11 02:26:20.475513] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.214 [2024-07-11 02:26:20.475529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:30.214 [2024-07-11 02:26:20.475575] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:30.214 [2024-07-11 02:26:20.475592] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:30.214 [2024-07-11 02:26:20.475695] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x22ddaa0 00:20:30.214 [2024-07-11 02:26:20.475705] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:30.214 [2024-07-11 
02:26:20.475873] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2488d40 00:20:30.214 [2024-07-11 02:26:20.475999] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x22ddaa0 00:20:30.214 [2024-07-11 02:26:20.476014] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x22ddaa0 00:20:30.214 [2024-07-11 02:26:20.476107] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:30.214 pt3 00:20:30.214 02:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:30.214 02:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:30.215 02:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:30.215 02:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:30.215 02:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:30.215 02:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:30.215 02:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:30.215 02:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:30.215 02:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:30.215 02:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:30.215 02:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:30.215 02:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:30.215 02:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.215 02:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.473 02:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:30.473 "name": "raid_bdev1", 00:20:30.473 "uuid": "112753e1-041a-49cd-b692-ed16696af937", 00:20:30.473 "strip_size_kb": 0, 00:20:30.473 "state": "online", 00:20:30.473 "raid_level": "raid1", 00:20:30.473 "superblock": true, 00:20:30.473 "num_base_bdevs": 3, 00:20:30.473 "num_base_bdevs_discovered": 3, 00:20:30.473 "num_base_bdevs_operational": 3, 00:20:30.473 "base_bdevs_list": [ 00:20:30.473 { 00:20:30.473 "name": "pt1", 00:20:30.473 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:30.473 "is_configured": true, 00:20:30.473 "data_offset": 2048, 00:20:30.473 "data_size": 63488 00:20:30.473 }, 00:20:30.473 { 00:20:30.473 "name": "pt2", 00:20:30.473 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:30.473 "is_configured": true, 00:20:30.473 "data_offset": 2048, 00:20:30.473 "data_size": 63488 00:20:30.473 }, 00:20:30.473 { 00:20:30.473 "name": "pt3", 00:20:30.473 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:30.473 "is_configured": true, 00:20:30.473 "data_offset": 2048, 00:20:30.473 "data_size": 63488 00:20:30.473 } 00:20:30.473 ] 00:20:30.473 }' 00:20:30.473 02:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:30.473 02:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.040 02:26:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:20:31.040 02:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:31.040 02:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:31.040 02:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:31.040 02:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:31.040 02:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:31.040 02:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:31.040 02:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:31.040 [2024-07-11 02:26:21.377878] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:31.040 02:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:31.040 "name": "raid_bdev1", 00:20:31.040 "aliases": [ 00:20:31.040 "112753e1-041a-49cd-b692-ed16696af937" 00:20:31.040 ], 00:20:31.040 "product_name": "Raid Volume", 00:20:31.040 "block_size": 512, 00:20:31.040 "num_blocks": 63488, 00:20:31.040 "uuid": "112753e1-041a-49cd-b692-ed16696af937", 00:20:31.040 "assigned_rate_limits": { 00:20:31.040 "rw_ios_per_sec": 0, 00:20:31.040 "rw_mbytes_per_sec": 0, 00:20:31.040 "r_mbytes_per_sec": 0, 00:20:31.040 "w_mbytes_per_sec": 0 00:20:31.040 }, 00:20:31.040 "claimed": false, 00:20:31.040 "zoned": false, 00:20:31.040 "supported_io_types": { 00:20:31.040 "read": true, 00:20:31.040 "write": true, 00:20:31.040 "unmap": false, 00:20:31.040 "flush": false, 00:20:31.040 "reset": true, 00:20:31.040 "nvme_admin": false, 00:20:31.040 "nvme_io": false, 00:20:31.040 "nvme_io_md": false, 00:20:31.040 "write_zeroes": true, 00:20:31.040 "zcopy": false, 00:20:31.040 "get_zone_info": false, 00:20:31.040 "zone_management": false, 00:20:31.040 "zone_append": false, 00:20:31.040 "compare": false, 00:20:31.040 "compare_and_write": false, 00:20:31.040 "abort": false, 00:20:31.040 "seek_hole": false, 00:20:31.040 "seek_data": false, 00:20:31.040 "copy": false, 00:20:31.040 "nvme_iov_md": false 00:20:31.040 }, 00:20:31.040 "memory_domains": [ 00:20:31.040 { 00:20:31.040 "dma_device_id": "system", 00:20:31.040 "dma_device_type": 1 00:20:31.040 }, 00:20:31.040 { 00:20:31.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.040 "dma_device_type": 2 00:20:31.040 }, 00:20:31.040 { 00:20:31.040 "dma_device_id": "system", 00:20:31.040 "dma_device_type": 1 00:20:31.040 }, 00:20:31.040 { 00:20:31.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.040 "dma_device_type": 2 00:20:31.040 }, 00:20:31.040 { 00:20:31.040 "dma_device_id": "system", 00:20:31.040 "dma_device_type": 1 00:20:31.040 }, 00:20:31.040 { 00:20:31.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.040 "dma_device_type": 2 00:20:31.040 } 00:20:31.040 ], 00:20:31.040 "driver_specific": { 00:20:31.040 "raid": { 00:20:31.040 "uuid": "112753e1-041a-49cd-b692-ed16696af937", 00:20:31.040 "strip_size_kb": 0, 00:20:31.040 "state": "online", 00:20:31.040 "raid_level": "raid1", 00:20:31.040 "superblock": true, 00:20:31.040 "num_base_bdevs": 3, 00:20:31.040 "num_base_bdevs_discovered": 3, 00:20:31.040 "num_base_bdevs_operational": 3, 00:20:31.040 "base_bdevs_list": [ 00:20:31.040 { 00:20:31.040 
"name": "pt1", 00:20:31.040 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:31.040 "is_configured": true, 00:20:31.040 "data_offset": 2048, 00:20:31.040 "data_size": 63488 00:20:31.040 }, 00:20:31.040 { 00:20:31.040 "name": "pt2", 00:20:31.040 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:31.040 "is_configured": true, 00:20:31.040 "data_offset": 2048, 00:20:31.040 "data_size": 63488 00:20:31.040 }, 00:20:31.040 { 00:20:31.040 "name": "pt3", 00:20:31.040 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:31.040 "is_configured": true, 00:20:31.040 "data_offset": 2048, 00:20:31.040 "data_size": 63488 00:20:31.040 } 00:20:31.040 ] 00:20:31.040 } 00:20:31.040 } 00:20:31.040 }' 00:20:31.040 02:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:31.040 02:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:31.040 pt2 00:20:31.040 pt3' 00:20:31.040 02:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:31.040 02:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:31.040 02:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:31.299 02:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:31.299 "name": "pt1", 00:20:31.299 "aliases": [ 00:20:31.299 "00000000-0000-0000-0000-000000000001" 00:20:31.299 ], 00:20:31.299 "product_name": "passthru", 00:20:31.299 "block_size": 512, 00:20:31.299 "num_blocks": 65536, 00:20:31.299 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:31.299 "assigned_rate_limits": { 00:20:31.299 "rw_ios_per_sec": 0, 00:20:31.299 "rw_mbytes_per_sec": 0, 00:20:31.299 "r_mbytes_per_sec": 0, 00:20:31.299 "w_mbytes_per_sec": 0 00:20:31.299 }, 00:20:31.299 "claimed": true, 00:20:31.299 "claim_type": "exclusive_write", 00:20:31.299 "zoned": false, 00:20:31.299 "supported_io_types": { 00:20:31.299 "read": true, 00:20:31.299 "write": true, 00:20:31.299 "unmap": true, 00:20:31.299 "flush": true, 00:20:31.299 "reset": true, 00:20:31.299 "nvme_admin": false, 00:20:31.299 "nvme_io": false, 00:20:31.299 "nvme_io_md": false, 00:20:31.299 "write_zeroes": true, 00:20:31.299 "zcopy": true, 00:20:31.299 "get_zone_info": false, 00:20:31.299 "zone_management": false, 00:20:31.299 "zone_append": false, 00:20:31.299 "compare": false, 00:20:31.299 "compare_and_write": false, 00:20:31.299 "abort": true, 00:20:31.299 "seek_hole": false, 00:20:31.299 "seek_data": false, 00:20:31.299 "copy": true, 00:20:31.299 "nvme_iov_md": false 00:20:31.299 }, 00:20:31.299 "memory_domains": [ 00:20:31.299 { 00:20:31.299 "dma_device_id": "system", 00:20:31.299 "dma_device_type": 1 00:20:31.299 }, 00:20:31.299 { 00:20:31.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.299 "dma_device_type": 2 00:20:31.299 } 00:20:31.299 ], 00:20:31.299 "driver_specific": { 00:20:31.299 "passthru": { 00:20:31.299 "name": "pt1", 00:20:31.299 "base_bdev_name": "malloc1" 00:20:31.299 } 00:20:31.299 } 00:20:31.299 }' 00:20:31.299 02:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:31.558 02:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:31.558 02:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:31.558 
02:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:31.558 02:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:31.558 02:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:31.558 02:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:31.558 02:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:31.558 02:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:31.558 02:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:31.816 02:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:31.816 02:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:31.816 02:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:31.816 02:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:31.816 02:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:32.075 02:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:32.075 "name": "pt2", 00:20:32.075 "aliases": [ 00:20:32.075 "00000000-0000-0000-0000-000000000002" 00:20:32.075 ], 00:20:32.075 "product_name": "passthru", 00:20:32.075 "block_size": 512, 00:20:32.075 "num_blocks": 65536, 00:20:32.075 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:32.075 "assigned_rate_limits": { 00:20:32.075 "rw_ios_per_sec": 0, 00:20:32.075 "rw_mbytes_per_sec": 0, 00:20:32.075 "r_mbytes_per_sec": 0, 00:20:32.075 "w_mbytes_per_sec": 0 00:20:32.075 }, 00:20:32.075 "claimed": true, 00:20:32.075 "claim_type": "exclusive_write", 00:20:32.075 "zoned": false, 00:20:32.075 "supported_io_types": { 00:20:32.075 "read": true, 00:20:32.075 "write": true, 00:20:32.075 "unmap": true, 00:20:32.075 "flush": true, 00:20:32.075 "reset": true, 00:20:32.075 "nvme_admin": false, 00:20:32.075 "nvme_io": false, 00:20:32.075 "nvme_io_md": false, 00:20:32.075 "write_zeroes": true, 00:20:32.075 "zcopy": true, 00:20:32.075 "get_zone_info": false, 00:20:32.075 "zone_management": false, 00:20:32.075 "zone_append": false, 00:20:32.075 "compare": false, 00:20:32.075 "compare_and_write": false, 00:20:32.075 "abort": true, 00:20:32.075 "seek_hole": false, 00:20:32.075 "seek_data": false, 00:20:32.075 "copy": true, 00:20:32.075 "nvme_iov_md": false 00:20:32.075 }, 00:20:32.075 "memory_domains": [ 00:20:32.075 { 00:20:32.075 "dma_device_id": "system", 00:20:32.075 "dma_device_type": 1 00:20:32.075 }, 00:20:32.075 { 00:20:32.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.075 "dma_device_type": 2 00:20:32.075 } 00:20:32.075 ], 00:20:32.075 "driver_specific": { 00:20:32.075 "passthru": { 00:20:32.075 "name": "pt2", 00:20:32.075 "base_bdev_name": "malloc2" 00:20:32.075 } 00:20:32.075 } 00:20:32.075 }' 00:20:32.075 02:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:32.075 02:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:32.075 02:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:32.075 02:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:32.075 02:26:22 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:32.076 02:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:32.076 02:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:32.334 02:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:32.334 02:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:32.334 02:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:32.334 02:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:32.334 02:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:32.335 02:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:32.335 02:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:32.335 02:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:32.593 02:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:32.593 "name": "pt3", 00:20:32.593 "aliases": [ 00:20:32.593 "00000000-0000-0000-0000-000000000003" 00:20:32.594 ], 00:20:32.594 "product_name": "passthru", 00:20:32.594 "block_size": 512, 00:20:32.594 "num_blocks": 65536, 00:20:32.594 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:32.594 "assigned_rate_limits": { 00:20:32.594 "rw_ios_per_sec": 0, 00:20:32.594 "rw_mbytes_per_sec": 0, 00:20:32.594 "r_mbytes_per_sec": 0, 00:20:32.594 "w_mbytes_per_sec": 0 00:20:32.594 }, 00:20:32.594 "claimed": true, 00:20:32.594 "claim_type": "exclusive_write", 00:20:32.594 "zoned": false, 00:20:32.594 "supported_io_types": { 00:20:32.594 "read": true, 00:20:32.594 "write": true, 00:20:32.594 "unmap": true, 00:20:32.594 "flush": true, 00:20:32.594 "reset": true, 00:20:32.594 "nvme_admin": false, 00:20:32.594 "nvme_io": false, 00:20:32.594 "nvme_io_md": false, 00:20:32.594 "write_zeroes": true, 00:20:32.594 "zcopy": true, 00:20:32.594 "get_zone_info": false, 00:20:32.594 "zone_management": false, 00:20:32.594 "zone_append": false, 00:20:32.594 "compare": false, 00:20:32.594 "compare_and_write": false, 00:20:32.594 "abort": true, 00:20:32.594 "seek_hole": false, 00:20:32.594 "seek_data": false, 00:20:32.594 "copy": true, 00:20:32.594 "nvme_iov_md": false 00:20:32.594 }, 00:20:32.594 "memory_domains": [ 00:20:32.594 { 00:20:32.594 "dma_device_id": "system", 00:20:32.594 "dma_device_type": 1 00:20:32.594 }, 00:20:32.594 { 00:20:32.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.594 "dma_device_type": 2 00:20:32.594 } 00:20:32.594 ], 00:20:32.594 "driver_specific": { 00:20:32.594 "passthru": { 00:20:32.594 "name": "pt3", 00:20:32.594 "base_bdev_name": "malloc3" 00:20:32.594 } 00:20:32.594 } 00:20:32.594 }' 00:20:32.594 02:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:32.594 02:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:32.594 02:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:32.594 02:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:32.853 02:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:32.853 02:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == 
null ]] 00:20:32.853 02:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:32.853 02:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:32.853 02:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:32.853 02:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:32.853 02:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:33.112 02:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:33.112 02:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:33.112 02:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:20:33.112 [2024-07-11 02:26:23.515685] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:33.379 02:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 112753e1-041a-49cd-b692-ed16696af937 '!=' 112753e1-041a-49cd-b692-ed16696af937 ']' 00:20:33.379 02:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:20:33.379 02:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:33.379 02:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:20:33.379 02:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:33.379 [2024-07-11 02:26:23.764105] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:33.379 02:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:33.379 02:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:33.379 02:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:33.379 02:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:33.379 02:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:33.379 02:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:33.379 02:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:33.379 02:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:33.379 02:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:33.379 02:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:33.379 02:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.379 02:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.640 02:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:33.640 "name": "raid_bdev1", 00:20:33.640 "uuid": "112753e1-041a-49cd-b692-ed16696af937", 00:20:33.640 "strip_size_kb": 0, 00:20:33.640 "state": "online", 00:20:33.640 "raid_level": "raid1", 00:20:33.640 "superblock": true, 
00:20:33.640 "num_base_bdevs": 3, 00:20:33.640 "num_base_bdevs_discovered": 2, 00:20:33.640 "num_base_bdevs_operational": 2, 00:20:33.640 "base_bdevs_list": [ 00:20:33.640 { 00:20:33.640 "name": null, 00:20:33.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.640 "is_configured": false, 00:20:33.640 "data_offset": 2048, 00:20:33.640 "data_size": 63488 00:20:33.640 }, 00:20:33.640 { 00:20:33.640 "name": "pt2", 00:20:33.640 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:33.640 "is_configured": true, 00:20:33.640 "data_offset": 2048, 00:20:33.640 "data_size": 63488 00:20:33.640 }, 00:20:33.640 { 00:20:33.640 "name": "pt3", 00:20:33.640 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:33.640 "is_configured": true, 00:20:33.640 "data_offset": 2048, 00:20:33.640 "data_size": 63488 00:20:33.640 } 00:20:33.640 ] 00:20:33.640 }' 00:20:33.640 02:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:33.640 02:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.575 02:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:34.575 [2024-07-11 02:26:24.863018] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:34.575 [2024-07-11 02:26:24.863048] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:34.575 [2024-07-11 02:26:24.863107] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:34.575 [2024-07-11 02:26:24.863161] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:34.575 [2024-07-11 02:26:24.863173] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22ddaa0 name raid_bdev1, state offline 00:20:34.575 02:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.575 02:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:20:34.834 02:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:20:34.834 02:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:20:34.834 02:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:20:34.834 02:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:20:34.834 02:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:35.094 02:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:20:35.094 02:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:20:35.094 02:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:35.353 02:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:20:35.353 02:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:20:35.353 02:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:20:35.353 02:26:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:20:35.353 02:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:35.612 [2024-07-11 02:26:25.853770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:35.612 [2024-07-11 02:26:25.853820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:35.612 [2024-07-11 02:26:25.853836] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22e0160 00:20:35.612 [2024-07-11 02:26:25.853849] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:35.612 [2024-07-11 02:26:25.855476] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:35.612 [2024-07-11 02:26:25.855506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:35.612 [2024-07-11 02:26:25.855575] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:35.612 [2024-07-11 02:26:25.855602] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:35.612 pt2 00:20:35.612 02:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:35.612 02:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:35.612 02:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:35.612 02:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:35.612 02:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:35.612 02:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:35.612 02:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:35.612 02:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:35.612 02:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:35.612 02:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:35.612 02:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.612 02:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.871 02:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:35.871 "name": "raid_bdev1", 00:20:35.871 "uuid": "112753e1-041a-49cd-b692-ed16696af937", 00:20:35.871 "strip_size_kb": 0, 00:20:35.871 "state": "configuring", 00:20:35.871 "raid_level": "raid1", 00:20:35.871 "superblock": true, 00:20:35.871 "num_base_bdevs": 3, 00:20:35.871 "num_base_bdevs_discovered": 1, 00:20:35.871 "num_base_bdevs_operational": 2, 00:20:35.871 "base_bdevs_list": [ 00:20:35.871 { 00:20:35.871 "name": null, 00:20:35.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.871 "is_configured": false, 00:20:35.871 "data_offset": 2048, 00:20:35.871 "data_size": 63488 00:20:35.871 }, 00:20:35.871 { 00:20:35.871 "name": "pt2", 00:20:35.871 "uuid": "00000000-0000-0000-0000-000000000002", 
00:20:35.871 "is_configured": true, 00:20:35.871 "data_offset": 2048, 00:20:35.871 "data_size": 63488 00:20:35.871 }, 00:20:35.871 { 00:20:35.871 "name": null, 00:20:35.871 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:35.871 "is_configured": false, 00:20:35.871 "data_offset": 2048, 00:20:35.871 "data_size": 63488 00:20:35.871 } 00:20:35.871 ] 00:20:35.872 }' 00:20:35.872 02:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:35.872 02:26:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.439 02:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:20:36.439 02:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:20:36.439 02:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:20:36.439 02:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:36.699 [2024-07-11 02:26:26.936650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:36.699 [2024-07-11 02:26:26.936702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:36.699 [2024-07-11 02:26:26.936724] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22df940 00:20:36.699 [2024-07-11 02:26:26.936736] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:36.699 [2024-07-11 02:26:26.937087] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:36.699 [2024-07-11 02:26:26.937105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:36.699 [2024-07-11 02:26:26.937169] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:36.699 [2024-07-11 02:26:26.937187] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:36.699 [2024-07-11 02:26:26.937290] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x248e6c0 00:20:36.699 [2024-07-11 02:26:26.937300] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:36.699 [2024-07-11 02:26:26.937473] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x248f400 00:20:36.699 [2024-07-11 02:26:26.937597] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x248e6c0 00:20:36.699 [2024-07-11 02:26:26.937606] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x248e6c0 00:20:36.699 [2024-07-11 02:26:26.937700] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:36.699 pt3 00:20:36.699 02:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:36.699 02:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:36.699 02:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:36.699 02:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:36.699 02:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:36.699 02:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:36.699 02:26:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:36.699 02:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:36.699 02:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:36.699 02:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:36.699 02:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.699 02:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.959 02:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:36.959 "name": "raid_bdev1", 00:20:36.959 "uuid": "112753e1-041a-49cd-b692-ed16696af937", 00:20:36.959 "strip_size_kb": 0, 00:20:36.959 "state": "online", 00:20:36.959 "raid_level": "raid1", 00:20:36.959 "superblock": true, 00:20:36.959 "num_base_bdevs": 3, 00:20:36.959 "num_base_bdevs_discovered": 2, 00:20:36.959 "num_base_bdevs_operational": 2, 00:20:36.959 "base_bdevs_list": [ 00:20:36.959 { 00:20:36.959 "name": null, 00:20:36.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.959 "is_configured": false, 00:20:36.959 "data_offset": 2048, 00:20:36.959 "data_size": 63488 00:20:36.959 }, 00:20:36.959 { 00:20:36.960 "name": "pt2", 00:20:36.960 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:36.960 "is_configured": true, 00:20:36.960 "data_offset": 2048, 00:20:36.960 "data_size": 63488 00:20:36.960 }, 00:20:36.960 { 00:20:36.960 "name": "pt3", 00:20:36.960 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:36.960 "is_configured": true, 00:20:36.960 "data_offset": 2048, 00:20:36.960 "data_size": 63488 00:20:36.960 } 00:20:36.960 ] 00:20:36.960 }' 00:20:36.960 02:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:36.960 02:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.527 02:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:37.786 [2024-07-11 02:26:28.031692] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:37.786 [2024-07-11 02:26:28.031718] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:37.786 [2024-07-11 02:26:28.031777] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:37.786 [2024-07-11 02:26:28.031834] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:37.786 [2024-07-11 02:26:28.031851] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x248e6c0 name raid_bdev1, state offline 00:20:37.786 02:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:20:37.786 02:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.046 02:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:20:38.046 02:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:20:38.046 02:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 
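Note: after bdev_raid_delete (@525 above), the suite proves the array is really gone by listing all raid bdevs and testing the capture for emptiness; the `'[' -n '' ']'` at @527 is that test evaluated against an empty string. Spelled out, under the same socket and tooling:

    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
    raid_bdev=$("$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[]')
    if [ -n "$raid_bdev" ]; then
        echo "raid bdev still present after delete: $raid_bdev" >&2
        exit 1
    fi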
00:20:38.046 02:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:20:38.046 02:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:38.305 02:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:38.565 [2024-07-11 02:26:28.781635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:38.565 [2024-07-11 02:26:28.781675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:38.565 [2024-07-11 02:26:28.781691] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24885b0 00:20:38.565 [2024-07-11 02:26:28.781702] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:38.565 [2024-07-11 02:26:28.783258] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:38.565 [2024-07-11 02:26:28.783286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:38.565 [2024-07-11 02:26:28.783348] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:38.565 [2024-07-11 02:26:28.783374] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:38.565 [2024-07-11 02:26:28.783470] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:38.565 [2024-07-11 02:26:28.783483] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:38.565 [2024-07-11 02:26:28.783498] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x248ef80 name raid_bdev1, state configuring 00:20:38.565 [2024-07-11 02:26:28.783520] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:38.565 pt1 00:20:38.565 02:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:20:38.565 02:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:38.565 02:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:38.565 02:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:38.565 02:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:38.565 02:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:38.565 02:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:38.565 02:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:38.565 02:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:38.565 02:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:38.565 02:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:38.565 02:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.565 02:26:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.824 02:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:38.824 "name": "raid_bdev1", 00:20:38.824 "uuid": "112753e1-041a-49cd-b692-ed16696af937", 00:20:38.824 "strip_size_kb": 0, 00:20:38.824 "state": "configuring", 00:20:38.824 "raid_level": "raid1", 00:20:38.824 "superblock": true, 00:20:38.824 "num_base_bdevs": 3, 00:20:38.824 "num_base_bdevs_discovered": 1, 00:20:38.824 "num_base_bdevs_operational": 2, 00:20:38.824 "base_bdevs_list": [ 00:20:38.824 { 00:20:38.824 "name": null, 00:20:38.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.824 "is_configured": false, 00:20:38.824 "data_offset": 2048, 00:20:38.824 "data_size": 63488 00:20:38.824 }, 00:20:38.824 { 00:20:38.824 "name": "pt2", 00:20:38.824 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:38.824 "is_configured": true, 00:20:38.824 "data_offset": 2048, 00:20:38.824 "data_size": 63488 00:20:38.824 }, 00:20:38.824 { 00:20:38.824 "name": null, 00:20:38.824 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:38.824 "is_configured": false, 00:20:38.824 "data_offset": 2048, 00:20:38.824 "data_size": 63488 00:20:38.824 } 00:20:38.824 ] 00:20:38.824 }' 00:20:38.824 02:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:38.824 02:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.391 02:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:20:39.391 02:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:39.650 02:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:20:39.650 02:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:39.910 [2024-07-11 02:26:30.133230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:39.910 [2024-07-11 02:26:30.133285] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:39.910 [2024-07-11 02:26:30.133305] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24899f0 00:20:39.910 [2024-07-11 02:26:30.133317] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:39.910 [2024-07-11 02:26:30.133661] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:39.910 [2024-07-11 02:26:30.133678] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:39.910 [2024-07-11 02:26:30.133741] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:39.910 [2024-07-11 02:26:30.133768] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:39.910 [2024-07-11 02:26:30.133869] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x248ec60 00:20:39.910 [2024-07-11 02:26:30.133879] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:39.910 [2024-07-11 02:26:30.134048] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2489c80 00:20:39.910 [2024-07-11 02:26:30.134173] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x248ec60 00:20:39.910 [2024-07-11 02:26:30.134183] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x248ec60 00:20:39.910 [2024-07-11 02:26:30.134277] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:39.910 pt3 00:20:39.910 02:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:39.910 02:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:39.910 02:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:39.910 02:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:39.910 02:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:39.910 02:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:39.910 02:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:39.910 02:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:39.910 02:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:39.910 02:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:39.910 02:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.910 02:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.169 02:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:40.169 "name": "raid_bdev1", 00:20:40.169 "uuid": "112753e1-041a-49cd-b692-ed16696af937", 00:20:40.169 "strip_size_kb": 0, 00:20:40.169 "state": "online", 00:20:40.169 "raid_level": "raid1", 00:20:40.169 "superblock": true, 00:20:40.169 "num_base_bdevs": 3, 00:20:40.169 "num_base_bdevs_discovered": 2, 00:20:40.169 "num_base_bdevs_operational": 2, 00:20:40.169 "base_bdevs_list": [ 00:20:40.169 { 00:20:40.169 "name": null, 00:20:40.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.169 "is_configured": false, 00:20:40.169 "data_offset": 2048, 00:20:40.169 "data_size": 63488 00:20:40.169 }, 00:20:40.169 { 00:20:40.169 "name": "pt2", 00:20:40.169 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:40.169 "is_configured": true, 00:20:40.169 "data_offset": 2048, 00:20:40.169 "data_size": 63488 00:20:40.169 }, 00:20:40.169 { 00:20:40.169 "name": "pt3", 00:20:40.169 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:40.169 "is_configured": true, 00:20:40.169 "data_offset": 2048, 00:20:40.169 "data_size": 63488 00:20:40.169 } 00:20:40.169 ] 00:20:40.169 }' 00:20:40.169 02:26:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:40.169 02:26:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.737 02:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:20:40.737 02:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:40.995 02:26:31 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:20:40.995 02:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:40.995 02:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:20:41.256 [2024-07-11 02:26:31.489131] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:41.256 02:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 112753e1-041a-49cd-b692-ed16696af937 '!=' 112753e1-041a-49cd-b692-ed16696af937 ']' 00:20:41.256 02:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1949587 00:20:41.256 02:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 1949587 ']' 00:20:41.256 02:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 1949587 00:20:41.256 02:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:20:41.256 02:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:41.256 02:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1949587 00:20:41.256 02:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:41.256 02:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:41.256 02:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1949587' 00:20:41.256 killing process with pid 1949587 00:20:41.256 02:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 1949587 00:20:41.256 [2024-07-11 02:26:31.559197] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:41.256 [2024-07-11 02:26:31.559253] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:41.256 [2024-07-11 02:26:31.559309] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:41.256 [2024-07-11 02:26:31.559320] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x248ec60 name raid_bdev1, state offline 00:20:41.256 02:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 1949587 00:20:41.256 [2024-07-11 02:26:31.588762] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:41.515 02:26:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:20:41.515 00:20:41.515 real 0m21.517s 00:20:41.515 user 0m39.148s 00:20:41.515 sys 0m4.074s 00:20:41.515 02:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:41.515 02:26:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.515 ************************************ 00:20:41.515 END TEST raid_superblock_test 00:20:41.515 ************************************ 00:20:41.515 02:26:31 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:41.515 02:26:31 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:20:41.515 02:26:31 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:41.515 02:26:31 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:41.515 02:26:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:41.515 
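The raid_superblock_test above leans on one verification idiom throughout: dump the raid bdev with the bdev_raid_get_bdevs RPC, select it with jq, and compare the reported fields against the expected state. A minimal sketch of that idiom, using the socket path, RPC name, and jq filter exactly as traced above; the variable names and the final comparison are illustrative:

    # Fetch raid_bdev1's descriptor from the running target (RPC and jq filter as traced above)
    info=$(/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
             bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # Compare individual fields, mirroring what verify_raid_bdev_state checks
    state=$(echo "$info" | jq -r '.state')
    discovered=$(echo "$info" | jq -r '.num_base_bdevs_discovered')
    [ "$state" = "online" ] && [ "$discovered" -eq 2 ] || echo "unexpected raid state: $state ($discovered discovered)"
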
************************************ 00:20:41.515 START TEST raid_read_error_test 00:20:41.515 ************************************ 00:20:41.515 02:26:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 read 00:20:41.515 02:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:20:41.515 02:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:20:41.515 02:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:20:41.515 02:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:20:41.515 02:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.WcYwkOEV0h 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1952853 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1952853 /var/tmp/spdk-raid.sock 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 1952853 ']' 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:41.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:41.516 02:26:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.775 [2024-07-11 02:26:31.963872] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:20:41.775 [2024-07-11 02:26:31.963940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1952853 ] 00:20:41.775 [2024-07-11 02:26:32.099726] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.775 [2024-07-11 02:26:32.148213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.034 [2024-07-11 02:26:32.207210] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:42.034 [2024-07-11 02:26:32.207240] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:42.603 02:26:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:42.603 02:26:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:20:42.603 02:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:42.603 02:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:42.862 BaseBdev1_malloc 00:20:42.862 02:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:42.862 true 00:20:42.862 02:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:43.120 [2024-07-11 02:26:33.411131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:43.120 [2024-07-11 02:26:33.411175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:43.120 [2024-07-11 02:26:33.411196] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1645330 00:20:43.120 [2024-07-11 02:26:33.411209] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.120 [2024-07-11 02:26:33.412891] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.120 [2024-07-11 02:26:33.412921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:43.120 BaseBdev1 00:20:43.120 02:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:43.120 02:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:43.377 BaseBdev2_malloc 00:20:43.377 02:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:43.377 true 00:20:43.377 02:26:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:43.636 [2024-07-11 02:26:34.057221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:43.636 [2024-07-11 02:26:34.057267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:43.636 [2024-07-11 02:26:34.057288] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x163eb40 00:20:43.636 [2024-07-11 02:26:34.057300] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.636 [2024-07-11 02:26:34.058893] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.636 [2024-07-11 02:26:34.058921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:43.894 BaseBdev2 00:20:43.894 02:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:43.894 02:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:43.894 BaseBdev3_malloc 00:20:44.153 02:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:44.153 true 00:20:44.153 02:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:44.411 [2024-07-11 02:26:34.659200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:44.411 [2024-07-11 02:26:34.659240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:44.411 [2024-07-11 02:26:34.659258] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16420f0 00:20:44.411 [2024-07-11 02:26:34.659270] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:44.411 [2024-07-11 02:26:34.660608] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:44.411 [2024-07-11 02:26:34.660634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:44.411 BaseBdev3 00:20:44.411 02:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:20:44.411 [2024-07-11 02:26:34.831683] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:44.411 [2024-07-11 02:26:34.832829] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:44.411 [2024-07-11 02:26:34.832894] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:44.411 [2024-07-11 
02:26:34.833088] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1493870 00:20:44.411 [2024-07-11 02:26:34.833100] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:44.411 [2024-07-11 02:26:34.833269] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14914f0 00:20:44.411 [2024-07-11 02:26:34.833411] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1493870 00:20:44.411 [2024-07-11 02:26:34.833420] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1493870 00:20:44.411 [2024-07-11 02:26:34.833516] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:44.670 02:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:44.670 02:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:44.670 02:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:44.670 02:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:44.670 02:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:44.670 02:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:44.670 02:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:44.670 02:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:44.670 02:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:44.670 02:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:44.670 02:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.670 02:26:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.930 02:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:44.930 "name": "raid_bdev1", 00:20:44.930 "uuid": "68e5bdac-2d06-4d08-a245-7e1774502a74", 00:20:44.930 "strip_size_kb": 0, 00:20:44.930 "state": "online", 00:20:44.930 "raid_level": "raid1", 00:20:44.930 "superblock": true, 00:20:44.930 "num_base_bdevs": 3, 00:20:44.930 "num_base_bdevs_discovered": 3, 00:20:44.930 "num_base_bdevs_operational": 3, 00:20:44.930 "base_bdevs_list": [ 00:20:44.930 { 00:20:44.930 "name": "BaseBdev1", 00:20:44.930 "uuid": "be865c02-8718-54d3-aa76-848bb55bffdc", 00:20:44.930 "is_configured": true, 00:20:44.930 "data_offset": 2048, 00:20:44.930 "data_size": 63488 00:20:44.930 }, 00:20:44.930 { 00:20:44.930 "name": "BaseBdev2", 00:20:44.930 "uuid": "0113f872-b241-585a-9752-93c9b79b42dd", 00:20:44.930 "is_configured": true, 00:20:44.930 "data_offset": 2048, 00:20:44.930 "data_size": 63488 00:20:44.930 }, 00:20:44.930 { 00:20:44.930 "name": "BaseBdev3", 00:20:44.930 "uuid": "40e22525-5b89-52f2-bbdb-40ddd475e1e9", 00:20:44.930 "is_configured": true, 00:20:44.930 "data_offset": 2048, 00:20:44.930 "data_size": 63488 00:20:44.930 } 00:20:44.930 ] 00:20:44.930 }' 00:20:44.930 02:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:44.930 02:26:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set 
+x 00:20:45.497 02:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:20:45.497 02:26:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:45.497 [2024-07-11 02:26:35.886765] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1490e30 00:20:46.434 02:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:20:46.693 02:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:20:46.693 02:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:20:46.693 02:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:20:46.693 02:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:20:46.693 02:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:46.693 02:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:46.693 02:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:46.693 02:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:46.693 02:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:46.693 02:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:46.693 02:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:46.694 02:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:46.694 02:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:46.694 02:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:46.694 02:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.694 02:26:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.953 02:26:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:46.953 "name": "raid_bdev1", 00:20:46.953 "uuid": "68e5bdac-2d06-4d08-a245-7e1774502a74", 00:20:46.953 "strip_size_kb": 0, 00:20:46.953 "state": "online", 00:20:46.953 "raid_level": "raid1", 00:20:46.953 "superblock": true, 00:20:46.953 "num_base_bdevs": 3, 00:20:46.953 "num_base_bdevs_discovered": 3, 00:20:46.953 "num_base_bdevs_operational": 3, 00:20:46.953 "base_bdevs_list": [ 00:20:46.953 { 00:20:46.953 "name": "BaseBdev1", 00:20:46.953 "uuid": "be865c02-8718-54d3-aa76-848bb55bffdc", 00:20:46.953 "is_configured": true, 00:20:46.953 "data_offset": 2048, 00:20:46.953 "data_size": 63488 00:20:46.953 }, 00:20:46.953 { 00:20:46.953 "name": "BaseBdev2", 00:20:46.953 "uuid": "0113f872-b241-585a-9752-93c9b79b42dd", 00:20:46.953 "is_configured": true, 00:20:46.953 "data_offset": 2048, 00:20:46.953 "data_size": 63488 00:20:46.953 }, 00:20:46.953 { 00:20:46.953 "name": "BaseBdev3", 00:20:46.953 "uuid": 
"40e22525-5b89-52f2-bbdb-40ddd475e1e9", 00:20:46.953 "is_configured": true, 00:20:46.953 "data_offset": 2048, 00:20:46.953 "data_size": 63488 00:20:46.953 } 00:20:46.953 ] 00:20:46.953 }' 00:20:46.953 02:26:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:46.953 02:26:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.521 02:26:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:47.780 [2024-07-11 02:26:37.984980] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:47.780 [2024-07-11 02:26:37.985021] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:47.780 [2024-07-11 02:26:37.988163] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:47.780 [2024-07-11 02:26:37.988199] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:47.780 [2024-07-11 02:26:37.988295] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:47.780 [2024-07-11 02:26:37.988307] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1493870 name raid_bdev1, state offline 00:20:47.780 0 00:20:47.780 02:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1952853 00:20:47.780 02:26:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 1952853 ']' 00:20:47.780 02:26:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 1952853 00:20:47.780 02:26:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:20:47.780 02:26:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:47.780 02:26:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1952853 00:20:47.780 02:26:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:47.780 02:26:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:47.780 02:26:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1952853' 00:20:47.780 killing process with pid 1952853 00:20:47.780 02:26:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 1952853 00:20:47.780 [2024-07-11 02:26:38.068469] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:47.780 02:26:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 1952853 00:20:47.780 [2024-07-11 02:26:38.089952] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:48.040 02:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.WcYwkOEV0h 00:20:48.040 02:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:20:48.040 02:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:20:48.040 02:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:20:48.040 02:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:20:48.040 02:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:48.040 02:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- 
# return 0 00:20:48.040 02:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:20:48.040 00:20:48.040 real 0m6.425s 00:20:48.040 user 0m10.049s 00:20:48.040 sys 0m1.157s 00:20:48.040 02:26:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:48.040 02:26:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.040 ************************************ 00:20:48.040 END TEST raid_read_error_test 00:20:48.040 ************************************ 00:20:48.040 02:26:38 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:48.040 02:26:38 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:20:48.040 02:26:38 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:48.040 02:26:38 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:48.040 02:26:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:48.040 ************************************ 00:20:48.040 START TEST raid_write_error_test 00:20:48.040 ************************************ 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 write 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.XrIcxWczYS 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1953831 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1953831 /var/tmp/spdk-raid.sock 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 1953831 ']' 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:48.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:48.040 02:26:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.300 [2024-07-11 02:26:38.481039] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
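Before bdevperf can run I/O, raid_write_error_test assembles the same three-layer stack per base bdev that the read test above used: a malloc bdev, an error bdev wrapped around it, and a passthru bdev on top, all joined into a raid1 with a superblock. A sketch of that sequence, with the RPC arguments taken from the surrounding trace; the loop is illustrative shorthand for the three traced call sites:

    RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3; do
        $RPC bdev_malloc_create 32 512 -b BaseBdev${i}_malloc            # 32 MB backing store, 512 B blocks
        $RPC bdev_error_create BaseBdev${i}_malloc                       # exposes EE_BaseBdev${i}_malloc
        $RPC bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
    done
    $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
    # Later, the test trips the error layer so one base bdev fails its writes:
    $RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure
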
00:20:48.300 [2024-07-11 02:26:38.481109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1953831 ] 00:20:48.300 [2024-07-11 02:26:38.618936] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.300 [2024-07-11 02:26:38.667275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.559 [2024-07-11 02:26:38.724402] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:48.559 [2024-07-11 02:26:38.724427] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:49.127 02:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:49.127 02:26:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:20:49.127 02:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:49.127 02:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:49.387 BaseBdev1_malloc 00:20:49.387 02:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:49.646 true 00:20:49.646 02:26:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:49.905 [2024-07-11 02:26:40.092586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:49.905 [2024-07-11 02:26:40.092636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:49.905 [2024-07-11 02:26:40.092656] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1184330 00:20:49.905 [2024-07-11 02:26:40.092669] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:49.905 [2024-07-11 02:26:40.094414] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:49.905 [2024-07-11 02:26:40.094444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:49.905 BaseBdev1 00:20:49.905 02:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:49.905 02:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:50.167 BaseBdev2_malloc 00:20:50.167 02:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:50.167 true 00:20:50.167 02:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:50.541 [2024-07-11 02:26:40.686625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:50.541 [2024-07-11 02:26:40.686669] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:50.541 [2024-07-11 02:26:40.686687] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x117db40 00:20:50.541 [2024-07-11 02:26:40.686700] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:50.541 [2024-07-11 02:26:40.688079] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:50.541 [2024-07-11 02:26:40.688106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:50.541 BaseBdev2 00:20:50.541 02:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:50.541 02:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:50.800 BaseBdev3_malloc 00:20:50.800 02:26:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:50.800 true 00:20:50.800 02:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:51.059 [2024-07-11 02:26:41.449146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:51.059 [2024-07-11 02:26:41.449195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:51.059 [2024-07-11 02:26:41.449215] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11810f0 00:20:51.059 [2024-07-11 02:26:41.449227] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:51.059 [2024-07-11 02:26:41.450572] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:51.059 [2024-07-11 02:26:41.450598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:51.059 BaseBdev3 00:20:51.059 02:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:20:51.318 [2024-07-11 02:26:41.617616] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:51.318 [2024-07-11 02:26:41.618745] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:51.318 [2024-07-11 02:26:41.618816] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:51.318 [2024-07-11 02:26:41.619014] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xfd2870 00:20:51.318 [2024-07-11 02:26:41.619025] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:51.318 [2024-07-11 02:26:41.619195] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfd04f0 00:20:51.318 [2024-07-11 02:26:41.619338] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xfd2870 00:20:51.318 [2024-07-11 02:26:41.619348] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xfd2870 00:20:51.318 [2024-07-11 02:26:41.619442] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:51.318 
02:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:51.318 02:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:51.318 02:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:51.318 02:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:51.318 02:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:51.318 02:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:51.318 02:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:51.318 02:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:51.318 02:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:51.318 02:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:51.318 02:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.318 02:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.579 02:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:51.579 "name": "raid_bdev1", 00:20:51.579 "uuid": "5d16a1a4-3c93-4dbe-b614-690ca2a42d4e", 00:20:51.579 "strip_size_kb": 0, 00:20:51.579 "state": "online", 00:20:51.579 "raid_level": "raid1", 00:20:51.579 "superblock": true, 00:20:51.579 "num_base_bdevs": 3, 00:20:51.579 "num_base_bdevs_discovered": 3, 00:20:51.579 "num_base_bdevs_operational": 3, 00:20:51.579 "base_bdevs_list": [ 00:20:51.579 { 00:20:51.579 "name": "BaseBdev1", 00:20:51.579 "uuid": "f61378f2-5977-5c85-992c-fe294202e427", 00:20:51.579 "is_configured": true, 00:20:51.579 "data_offset": 2048, 00:20:51.579 "data_size": 63488 00:20:51.579 }, 00:20:51.579 { 00:20:51.579 "name": "BaseBdev2", 00:20:51.579 "uuid": "a5393f46-df1c-5807-82b0-d05f37b1e125", 00:20:51.579 "is_configured": true, 00:20:51.579 "data_offset": 2048, 00:20:51.579 "data_size": 63488 00:20:51.579 }, 00:20:51.579 { 00:20:51.579 "name": "BaseBdev3", 00:20:51.579 "uuid": "d15dd5c8-9b23-5b15-9ec3-bd2a2dfe3ec6", 00:20:51.579 "is_configured": true, 00:20:51.579 "data_offset": 2048, 00:20:51.579 "data_size": 63488 00:20:51.579 } 00:20:51.579 ] 00:20:51.579 }' 00:20:51.579 02:26:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:51.579 02:26:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.146 02:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:20:52.146 02:26:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:52.405 [2024-07-11 02:26:42.608575] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfcfe30 00:20:53.342 02:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:20:53.342 [2024-07-11 02:26:43.733576] 
bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:20:53.342 [2024-07-11 02:26:43.733630] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:53.342 [2024-07-11 02:26:43.733833] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0xfcfe30 00:20:53.342 02:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:20:53.342 02:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:20:53.342 02:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:20:53.342 02:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=2 00:20:53.342 02:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:53.342 02:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:53.342 02:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:53.342 02:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:53.342 02:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:53.342 02:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:53.342 02:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:53.342 02:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:53.342 02:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:53.342 02:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:53.342 02:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.342 02:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.602 02:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:53.602 "name": "raid_bdev1", 00:20:53.602 "uuid": "5d16a1a4-3c93-4dbe-b614-690ca2a42d4e", 00:20:53.602 "strip_size_kb": 0, 00:20:53.602 "state": "online", 00:20:53.602 "raid_level": "raid1", 00:20:53.602 "superblock": true, 00:20:53.602 "num_base_bdevs": 3, 00:20:53.602 "num_base_bdevs_discovered": 2, 00:20:53.602 "num_base_bdevs_operational": 2, 00:20:53.602 "base_bdevs_list": [ 00:20:53.602 { 00:20:53.602 "name": null, 00:20:53.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.602 "is_configured": false, 00:20:53.602 "data_offset": 2048, 00:20:53.602 "data_size": 63488 00:20:53.602 }, 00:20:53.602 { 00:20:53.602 "name": "BaseBdev2", 00:20:53.602 "uuid": "a5393f46-df1c-5807-82b0-d05f37b1e125", 00:20:53.602 "is_configured": true, 00:20:53.602 "data_offset": 2048, 00:20:53.602 "data_size": 63488 00:20:53.602 }, 00:20:53.602 { 00:20:53.602 "name": "BaseBdev3", 00:20:53.602 "uuid": "d15dd5c8-9b23-5b15-9ec3-bd2a2dfe3ec6", 00:20:53.602 "is_configured": true, 00:20:53.602 "data_offset": 2048, 00:20:53.602 "data_size": 63488 00:20:53.602 } 00:20:53.602 ] 00:20:53.602 }' 00:20:53.602 02:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:53.602 
02:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.540 02:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:54.540 [2024-07-11 02:26:44.764260] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:54.540 [2024-07-11 02:26:44.764301] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:54.540 [2024-07-11 02:26:44.767427] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:54.540 [2024-07-11 02:26:44.767462] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:54.540 [2024-07-11 02:26:44.767534] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:54.540 [2024-07-11 02:26:44.767546] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xfd2870 name raid_bdev1, state offline 00:20:54.540 0 00:20:54.540 02:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1953831 00:20:54.540 02:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 1953831 ']' 00:20:54.540 02:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 1953831 00:20:54.540 02:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:20:54.540 02:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:54.540 02:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1953831 00:20:54.540 02:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:54.540 02:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:54.540 02:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1953831' 00:20:54.540 killing process with pid 1953831 00:20:54.540 02:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 1953831 00:20:54.540 [2024-07-11 02:26:44.847789] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:54.540 02:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 1953831 00:20:54.540 [2024-07-11 02:26:44.868847] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:54.800 02:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.XrIcxWczYS 00:20:54.800 02:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:20:54.800 02:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:20:54.800 02:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:20:54.800 02:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:20:54.800 02:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:54.800 02:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:20:54.800 02:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:20:54.800 00:20:54.800 real 0m6.689s 00:20:54.800 user 0m10.534s 00:20:54.800 sys 0m1.202s 00:20:54.800 02:26:45 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:20:54.800 02:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.800 ************************************ 00:20:54.800 END TEST raid_write_error_test 00:20:54.800 ************************************ 00:20:54.800 02:26:45 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:54.800 02:26:45 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:20:54.800 02:26:45 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:20:54.800 02:26:45 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:20:54.800 02:26:45 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:54.800 02:26:45 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:54.800 02:26:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:54.800 ************************************ 00:20:54.800 START TEST raid_state_function_test 00:20:54.800 ************************************ 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 false 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@226 -- # local strip_size 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1954810 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1954810' 00:20:54.800 Process raid pid: 1954810 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1954810 /var/tmp/spdk-raid.sock 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 1954810 ']' 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:54.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:54.800 02:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.059 [2024-07-11 02:26:45.249545] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:20:55.059 [2024-07-11 02:26:45.249623] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.059 [2024-07-11 02:26:45.402352] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.059 [2024-07-11 02:26:45.453916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.318 [2024-07-11 02:26:45.517808] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:55.318 [2024-07-11 02:26:45.517841] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:55.886 02:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:55.886 02:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:20:55.886 02:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:56.147 [2024-07-11 02:26:46.407158] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:56.148 [2024-07-11 02:26:46.407200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:56.148 [2024-07-11 02:26:46.407211] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:56.148 [2024-07-11 02:26:46.407222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:56.148 [2024-07-11 02:26:46.407231] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:56.148 [2024-07-11 02:26:46.407242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:56.148 [2024-07-11 02:26:46.407250] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:56.148 [2024-07-11 02:26:46.407261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:56.148 02:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:56.148 02:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:56.148 02:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:56.148 02:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:56.148 02:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:56.148 02:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:56.148 02:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:56.148 02:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:56.148 02:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:56.148 02:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:56.148 02:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.148 02:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:56.408 02:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:56.408 "name": "Existed_Raid", 00:20:56.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.408 "strip_size_kb": 64, 00:20:56.408 "state": "configuring", 00:20:56.408 "raid_level": "raid0", 00:20:56.408 "superblock": false, 00:20:56.408 "num_base_bdevs": 4, 00:20:56.408 "num_base_bdevs_discovered": 0, 00:20:56.408 "num_base_bdevs_operational": 4, 00:20:56.408 "base_bdevs_list": [ 00:20:56.408 { 00:20:56.408 "name": "BaseBdev1", 00:20:56.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.408 "is_configured": false, 00:20:56.408 "data_offset": 0, 00:20:56.408 "data_size": 0 00:20:56.408 }, 00:20:56.408 { 00:20:56.408 "name": "BaseBdev2", 00:20:56.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.408 "is_configured": false, 00:20:56.408 "data_offset": 0, 00:20:56.408 "data_size": 0 00:20:56.408 }, 00:20:56.408 { 00:20:56.408 "name": "BaseBdev3", 00:20:56.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.408 "is_configured": false, 00:20:56.408 "data_offset": 0, 00:20:56.408 "data_size": 0 00:20:56.408 }, 00:20:56.408 { 00:20:56.408 "name": "BaseBdev4", 00:20:56.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.408 "is_configured": false, 00:20:56.408 "data_offset": 0, 00:20:56.408 "data_size": 0 00:20:56.408 } 00:20:56.408 ] 00:20:56.408 }' 00:20:56.408 02:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:56.408 02:26:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.976 02:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:57.235 [2024-07-11 02:26:47.497903] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:57.235 [2024-07-11 02:26:47.497934] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xcb05a0 name Existed_Raid, state configuring 00:20:57.235 02:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:57.495 [2024-07-11 02:26:47.738562] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:57.495 [2024-07-11 02:26:47.738592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:57.495 [2024-07-11 02:26:47.738602] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:57.495 [2024-07-11 02:26:47.738613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:57.495 [2024-07-11 02:26:47.738622] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:57.495 [2024-07-11 02:26:47.738633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:57.495 [2024-07-11 02:26:47.738641] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:57.495 
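As the bdev_open_ext notices above show, bdev_raid_create deliberately accepts base bdevs that do not exist yet: the array is registered in the "configuring" state with num_base_bdevs_discovered 0 and waits for its members to appear. The check the script performs can be reproduced with the same two RPCs seen in the trace (socket and $SPDK_DIR as in the sketch above; narrowing the jq selection to .state is an illustrative shortcut, the script itself captures the whole object):

# Create the array before any base bdev exists; it must sit in "configuring".
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-raid.sock bdev_raid_create \
    -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

# Read back the state field the way verify_raid_bdev_state selects the bdev.
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid").state'   # expected: configuring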
[2024-07-11 02:26:47.738652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:57.495 02:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:57.754 [2024-07-11 02:26:47.994182] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:57.754 BaseBdev1 00:20:57.754 02:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:57.754 02:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:57.755 02:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:57.755 02:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:57.755 02:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:57.755 02:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:57.755 02:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:58.014 02:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:58.273 [ 00:20:58.273 { 00:20:58.273 "name": "BaseBdev1", 00:20:58.273 "aliases": [ 00:20:58.273 "b58f9a45-a474-4769-8a2e-712a3f127892" 00:20:58.273 ], 00:20:58.273 "product_name": "Malloc disk", 00:20:58.273 "block_size": 512, 00:20:58.273 "num_blocks": 65536, 00:20:58.273 "uuid": "b58f9a45-a474-4769-8a2e-712a3f127892", 00:20:58.273 "assigned_rate_limits": { 00:20:58.273 "rw_ios_per_sec": 0, 00:20:58.273 "rw_mbytes_per_sec": 0, 00:20:58.273 "r_mbytes_per_sec": 0, 00:20:58.273 "w_mbytes_per_sec": 0 00:20:58.273 }, 00:20:58.273 "claimed": true, 00:20:58.273 "claim_type": "exclusive_write", 00:20:58.273 "zoned": false, 00:20:58.273 "supported_io_types": { 00:20:58.273 "read": true, 00:20:58.273 "write": true, 00:20:58.273 "unmap": true, 00:20:58.273 "flush": true, 00:20:58.273 "reset": true, 00:20:58.273 "nvme_admin": false, 00:20:58.273 "nvme_io": false, 00:20:58.273 "nvme_io_md": false, 00:20:58.273 "write_zeroes": true, 00:20:58.273 "zcopy": true, 00:20:58.273 "get_zone_info": false, 00:20:58.273 "zone_management": false, 00:20:58.273 "zone_append": false, 00:20:58.273 "compare": false, 00:20:58.273 "compare_and_write": false, 00:20:58.273 "abort": true, 00:20:58.273 "seek_hole": false, 00:20:58.273 "seek_data": false, 00:20:58.273 "copy": true, 00:20:58.273 "nvme_iov_md": false 00:20:58.273 }, 00:20:58.273 "memory_domains": [ 00:20:58.273 { 00:20:58.273 "dma_device_id": "system", 00:20:58.273 "dma_device_type": 1 00:20:58.273 }, 00:20:58.273 { 00:20:58.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.273 "dma_device_type": 2 00:20:58.273 } 00:20:58.273 ], 00:20:58.273 "driver_specific": {} 00:20:58.273 } 00:20:58.273 ] 00:20:58.273 02:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:58.273 02:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:58.273 02:26:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:58.273 02:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:58.273 02:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:58.273 02:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:58.273 02:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:58.273 02:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:58.273 02:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:58.273 02:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:58.273 02:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:58.273 02:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.273 02:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:58.533 02:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:58.533 "name": "Existed_Raid", 00:20:58.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.533 "strip_size_kb": 64, 00:20:58.533 "state": "configuring", 00:20:58.533 "raid_level": "raid0", 00:20:58.533 "superblock": false, 00:20:58.533 "num_base_bdevs": 4, 00:20:58.533 "num_base_bdevs_discovered": 1, 00:20:58.533 "num_base_bdevs_operational": 4, 00:20:58.533 "base_bdevs_list": [ 00:20:58.533 { 00:20:58.533 "name": "BaseBdev1", 00:20:58.533 "uuid": "b58f9a45-a474-4769-8a2e-712a3f127892", 00:20:58.533 "is_configured": true, 00:20:58.533 "data_offset": 0, 00:20:58.533 "data_size": 65536 00:20:58.533 }, 00:20:58.533 { 00:20:58.533 "name": "BaseBdev2", 00:20:58.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.533 "is_configured": false, 00:20:58.533 "data_offset": 0, 00:20:58.533 "data_size": 0 00:20:58.533 }, 00:20:58.533 { 00:20:58.533 "name": "BaseBdev3", 00:20:58.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.533 "is_configured": false, 00:20:58.533 "data_offset": 0, 00:20:58.533 "data_size": 0 00:20:58.533 }, 00:20:58.533 { 00:20:58.533 "name": "BaseBdev4", 00:20:58.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.533 "is_configured": false, 00:20:58.533 "data_offset": 0, 00:20:58.533 "data_size": 0 00:20:58.533 } 00:20:58.533 ] 00:20:58.533 }' 00:20:58.533 02:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:58.533 02:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.101 02:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:59.101 [2024-07-11 02:26:49.510229] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:59.101 [2024-07-11 02:26:49.510267] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xcafed0 name Existed_Raid, state configuring 00:20:59.361 02:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:59.361 [2024-07-11 02:26:49.686729] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:59.361 [2024-07-11 02:26:49.688132] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:59.361 [2024-07-11 02:26:49.688168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:59.361 [2024-07-11 02:26:49.688178] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:59.361 [2024-07-11 02:26:49.688190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:59.361 [2024-07-11 02:26:49.688199] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:59.361 [2024-07-11 02:26:49.688209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:59.361 02:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:59.361 02:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:59.361 02:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:59.361 02:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:59.361 02:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:59.361 02:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:59.361 02:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:59.361 02:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:59.361 02:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:59.361 02:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:59.361 02:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:59.361 02:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:59.361 02:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.361 02:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:59.621 02:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:59.621 "name": "Existed_Raid", 00:20:59.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.621 "strip_size_kb": 64, 00:20:59.621 "state": "configuring", 00:20:59.621 "raid_level": "raid0", 00:20:59.621 "superblock": false, 00:20:59.621 "num_base_bdevs": 4, 00:20:59.621 "num_base_bdevs_discovered": 1, 00:20:59.621 "num_base_bdevs_operational": 4, 00:20:59.621 "base_bdevs_list": [ 00:20:59.621 { 00:20:59.621 "name": "BaseBdev1", 00:20:59.621 "uuid": "b58f9a45-a474-4769-8a2e-712a3f127892", 00:20:59.621 "is_configured": true, 00:20:59.621 "data_offset": 0, 00:20:59.621 "data_size": 65536 00:20:59.621 }, 00:20:59.621 { 00:20:59.621 "name": "BaseBdev2", 00:20:59.621 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:59.621 "is_configured": false, 00:20:59.621 "data_offset": 0, 00:20:59.621 "data_size": 0 00:20:59.621 }, 00:20:59.621 { 00:20:59.621 "name": "BaseBdev3", 00:20:59.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.621 "is_configured": false, 00:20:59.621 "data_offset": 0, 00:20:59.621 "data_size": 0 00:20:59.621 }, 00:20:59.621 { 00:20:59.621 "name": "BaseBdev4", 00:20:59.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.621 "is_configured": false, 00:20:59.621 "data_offset": 0, 00:20:59.621 "data_size": 0 00:20:59.621 } 00:20:59.621 ] 00:20:59.621 }' 00:20:59.621 02:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:59.621 02:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.188 02:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:00.448 [2024-07-11 02:26:50.708725] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:00.448 BaseBdev2 00:21:00.448 02:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:00.448 02:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:00.448 02:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:00.448 02:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:00.448 02:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:00.448 02:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:00.448 02:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:00.707 02:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:00.966 [ 00:21:00.966 { 00:21:00.966 "name": "BaseBdev2", 00:21:00.966 "aliases": [ 00:21:00.966 "b2cbd48e-0f76-41d1-910a-54ee5f04dae7" 00:21:00.966 ], 00:21:00.966 "product_name": "Malloc disk", 00:21:00.966 "block_size": 512, 00:21:00.966 "num_blocks": 65536, 00:21:00.966 "uuid": "b2cbd48e-0f76-41d1-910a-54ee5f04dae7", 00:21:00.966 "assigned_rate_limits": { 00:21:00.966 "rw_ios_per_sec": 0, 00:21:00.966 "rw_mbytes_per_sec": 0, 00:21:00.966 "r_mbytes_per_sec": 0, 00:21:00.966 "w_mbytes_per_sec": 0 00:21:00.966 }, 00:21:00.966 "claimed": true, 00:21:00.966 "claim_type": "exclusive_write", 00:21:00.966 "zoned": false, 00:21:00.966 "supported_io_types": { 00:21:00.966 "read": true, 00:21:00.966 "write": true, 00:21:00.966 "unmap": true, 00:21:00.966 "flush": true, 00:21:00.966 "reset": true, 00:21:00.966 "nvme_admin": false, 00:21:00.966 "nvme_io": false, 00:21:00.966 "nvme_io_md": false, 00:21:00.966 "write_zeroes": true, 00:21:00.966 "zcopy": true, 00:21:00.966 "get_zone_info": false, 00:21:00.966 "zone_management": false, 00:21:00.966 "zone_append": false, 00:21:00.966 "compare": false, 00:21:00.966 "compare_and_write": false, 00:21:00.966 "abort": true, 00:21:00.966 "seek_hole": false, 00:21:00.966 "seek_data": false, 00:21:00.966 
"copy": true, 00:21:00.966 "nvme_iov_md": false 00:21:00.966 }, 00:21:00.966 "memory_domains": [ 00:21:00.966 { 00:21:00.966 "dma_device_id": "system", 00:21:00.966 "dma_device_type": 1 00:21:00.966 }, 00:21:00.966 { 00:21:00.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:00.966 "dma_device_type": 2 00:21:00.966 } 00:21:00.966 ], 00:21:00.966 "driver_specific": {} 00:21:00.966 } 00:21:00.966 ] 00:21:00.966 02:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:00.966 02:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:00.966 02:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:00.966 02:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:00.966 02:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:00.966 02:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:00.966 02:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:00.966 02:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:00.966 02:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:00.966 02:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:00.966 02:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:00.966 02:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:00.966 02:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:00.966 02:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.966 02:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:01.226 02:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:01.226 "name": "Existed_Raid", 00:21:01.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.226 "strip_size_kb": 64, 00:21:01.226 "state": "configuring", 00:21:01.226 "raid_level": "raid0", 00:21:01.226 "superblock": false, 00:21:01.226 "num_base_bdevs": 4, 00:21:01.226 "num_base_bdevs_discovered": 2, 00:21:01.226 "num_base_bdevs_operational": 4, 00:21:01.226 "base_bdevs_list": [ 00:21:01.226 { 00:21:01.226 "name": "BaseBdev1", 00:21:01.226 "uuid": "b58f9a45-a474-4769-8a2e-712a3f127892", 00:21:01.226 "is_configured": true, 00:21:01.226 "data_offset": 0, 00:21:01.226 "data_size": 65536 00:21:01.226 }, 00:21:01.226 { 00:21:01.226 "name": "BaseBdev2", 00:21:01.226 "uuid": "b2cbd48e-0f76-41d1-910a-54ee5f04dae7", 00:21:01.226 "is_configured": true, 00:21:01.226 "data_offset": 0, 00:21:01.226 "data_size": 65536 00:21:01.226 }, 00:21:01.226 { 00:21:01.226 "name": "BaseBdev3", 00:21:01.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.226 "is_configured": false, 00:21:01.226 "data_offset": 0, 00:21:01.226 "data_size": 0 00:21:01.226 }, 00:21:01.226 { 00:21:01.226 "name": "BaseBdev4", 00:21:01.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.226 "is_configured": false, 00:21:01.226 
"data_offset": 0, 00:21:01.226 "data_size": 0 00:21:01.226 } 00:21:01.226 ] 00:21:01.226 }' 00:21:01.226 02:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:01.226 02:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.794 02:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:02.053 [2024-07-11 02:26:52.332297] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:02.053 BaseBdev3 00:21:02.053 02:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:02.053 02:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:02.053 02:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:02.053 02:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:02.053 02:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:02.053 02:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:02.053 02:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:02.311 02:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:02.571 [ 00:21:02.571 { 00:21:02.571 "name": "BaseBdev3", 00:21:02.571 "aliases": [ 00:21:02.571 "bed32fb9-63e0-4a4b-9f2e-937febfed6bf" 00:21:02.571 ], 00:21:02.571 "product_name": "Malloc disk", 00:21:02.571 "block_size": 512, 00:21:02.571 "num_blocks": 65536, 00:21:02.571 "uuid": "bed32fb9-63e0-4a4b-9f2e-937febfed6bf", 00:21:02.571 "assigned_rate_limits": { 00:21:02.571 "rw_ios_per_sec": 0, 00:21:02.571 "rw_mbytes_per_sec": 0, 00:21:02.571 "r_mbytes_per_sec": 0, 00:21:02.571 "w_mbytes_per_sec": 0 00:21:02.571 }, 00:21:02.571 "claimed": true, 00:21:02.571 "claim_type": "exclusive_write", 00:21:02.571 "zoned": false, 00:21:02.571 "supported_io_types": { 00:21:02.571 "read": true, 00:21:02.571 "write": true, 00:21:02.571 "unmap": true, 00:21:02.571 "flush": true, 00:21:02.571 "reset": true, 00:21:02.571 "nvme_admin": false, 00:21:02.571 "nvme_io": false, 00:21:02.571 "nvme_io_md": false, 00:21:02.571 "write_zeroes": true, 00:21:02.571 "zcopy": true, 00:21:02.571 "get_zone_info": false, 00:21:02.571 "zone_management": false, 00:21:02.571 "zone_append": false, 00:21:02.571 "compare": false, 00:21:02.571 "compare_and_write": false, 00:21:02.571 "abort": true, 00:21:02.571 "seek_hole": false, 00:21:02.571 "seek_data": false, 00:21:02.571 "copy": true, 00:21:02.571 "nvme_iov_md": false 00:21:02.571 }, 00:21:02.571 "memory_domains": [ 00:21:02.571 { 00:21:02.571 "dma_device_id": "system", 00:21:02.571 "dma_device_type": 1 00:21:02.571 }, 00:21:02.571 { 00:21:02.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.571 "dma_device_type": 2 00:21:02.571 } 00:21:02.571 ], 00:21:02.571 "driver_specific": {} 00:21:02.571 } 00:21:02.571 ] 00:21:02.571 02:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:02.571 02:26:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:02.571 02:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:02.571 02:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:02.571 02:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:02.571 02:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:02.571 02:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:02.571 02:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:02.571 02:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:02.571 02:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:02.571 02:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:02.571 02:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:02.571 02:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:02.571 02:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.571 02:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:02.830 02:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:02.830 "name": "Existed_Raid", 00:21:02.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.830 "strip_size_kb": 64, 00:21:02.830 "state": "configuring", 00:21:02.830 "raid_level": "raid0", 00:21:02.830 "superblock": false, 00:21:02.830 "num_base_bdevs": 4, 00:21:02.830 "num_base_bdevs_discovered": 3, 00:21:02.830 "num_base_bdevs_operational": 4, 00:21:02.830 "base_bdevs_list": [ 00:21:02.830 { 00:21:02.830 "name": "BaseBdev1", 00:21:02.830 "uuid": "b58f9a45-a474-4769-8a2e-712a3f127892", 00:21:02.830 "is_configured": true, 00:21:02.830 "data_offset": 0, 00:21:02.830 "data_size": 65536 00:21:02.830 }, 00:21:02.830 { 00:21:02.830 "name": "BaseBdev2", 00:21:02.830 "uuid": "b2cbd48e-0f76-41d1-910a-54ee5f04dae7", 00:21:02.830 "is_configured": true, 00:21:02.830 "data_offset": 0, 00:21:02.830 "data_size": 65536 00:21:02.830 }, 00:21:02.830 { 00:21:02.830 "name": "BaseBdev3", 00:21:02.830 "uuid": "bed32fb9-63e0-4a4b-9f2e-937febfed6bf", 00:21:02.830 "is_configured": true, 00:21:02.830 "data_offset": 0, 00:21:02.830 "data_size": 65536 00:21:02.830 }, 00:21:02.830 { 00:21:02.830 "name": "BaseBdev4", 00:21:02.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.830 "is_configured": false, 00:21:02.830 "data_offset": 0, 00:21:02.830 "data_size": 0 00:21:02.830 } 00:21:02.830 ] 00:21:02.830 }' 00:21:02.830 02:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:02.830 02:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.765 02:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:04.024 [2024-07-11 
02:26:54.232623] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:04.024 [2024-07-11 02:26:54.232666] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xe62d70 00:21:04.024 [2024-07-11 02:26:54.232674] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:04.024 [2024-07-11 02:26:54.232933] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xcb82a0 00:21:04.024 [2024-07-11 02:26:54.233053] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe62d70 00:21:04.024 [2024-07-11 02:26:54.233064] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xe62d70 00:21:04.024 [2024-07-11 02:26:54.233221] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:04.024 BaseBdev4 00:21:04.024 02:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:21:04.024 02:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:21:04.024 02:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:04.024 02:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:04.024 02:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:04.024 02:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:04.024 02:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:04.592 02:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:04.592 [ 00:21:04.592 { 00:21:04.592 "name": "BaseBdev4", 00:21:04.592 "aliases": [ 00:21:04.592 "9b5cb203-75ff-42c2-bc0c-205a25f1d2af" 00:21:04.592 ], 00:21:04.592 "product_name": "Malloc disk", 00:21:04.592 "block_size": 512, 00:21:04.592 "num_blocks": 65536, 00:21:04.592 "uuid": "9b5cb203-75ff-42c2-bc0c-205a25f1d2af", 00:21:04.592 "assigned_rate_limits": { 00:21:04.592 "rw_ios_per_sec": 0, 00:21:04.592 "rw_mbytes_per_sec": 0, 00:21:04.592 "r_mbytes_per_sec": 0, 00:21:04.592 "w_mbytes_per_sec": 0 00:21:04.592 }, 00:21:04.592 "claimed": true, 00:21:04.592 "claim_type": "exclusive_write", 00:21:04.592 "zoned": false, 00:21:04.592 "supported_io_types": { 00:21:04.592 "read": true, 00:21:04.592 "write": true, 00:21:04.592 "unmap": true, 00:21:04.592 "flush": true, 00:21:04.592 "reset": true, 00:21:04.592 "nvme_admin": false, 00:21:04.592 "nvme_io": false, 00:21:04.592 "nvme_io_md": false, 00:21:04.592 "write_zeroes": true, 00:21:04.592 "zcopy": true, 00:21:04.592 "get_zone_info": false, 00:21:04.592 "zone_management": false, 00:21:04.592 "zone_append": false, 00:21:04.592 "compare": false, 00:21:04.592 "compare_and_write": false, 00:21:04.593 "abort": true, 00:21:04.593 "seek_hole": false, 00:21:04.593 "seek_data": false, 00:21:04.593 "copy": true, 00:21:04.593 "nvme_iov_md": false 00:21:04.593 }, 00:21:04.593 "memory_domains": [ 00:21:04.593 { 00:21:04.593 "dma_device_id": "system", 00:21:04.593 "dma_device_type": 1 00:21:04.593 }, 00:21:04.593 { 00:21:04.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.593 "dma_device_type": 2 00:21:04.593 } 
00:21:04.593 ], 00:21:04.593 "driver_specific": {} 00:21:04.593 } 00:21:04.593 ] 00:21:04.852 02:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:04.852 02:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:04.852 02:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:04.852 02:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:21:04.852 02:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:04.852 02:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:04.852 02:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:04.852 02:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:04.852 02:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:04.852 02:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:04.852 02:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:04.852 02:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:04.852 02:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:04.852 02:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.852 02:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:05.112 02:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:05.112 "name": "Existed_Raid", 00:21:05.112 "uuid": "30ca793f-c173-4445-911a-e69bd2400235", 00:21:05.112 "strip_size_kb": 64, 00:21:05.112 "state": "online", 00:21:05.112 "raid_level": "raid0", 00:21:05.112 "superblock": false, 00:21:05.112 "num_base_bdevs": 4, 00:21:05.112 "num_base_bdevs_discovered": 4, 00:21:05.112 "num_base_bdevs_operational": 4, 00:21:05.112 "base_bdevs_list": [ 00:21:05.112 { 00:21:05.112 "name": "BaseBdev1", 00:21:05.112 "uuid": "b58f9a45-a474-4769-8a2e-712a3f127892", 00:21:05.112 "is_configured": true, 00:21:05.112 "data_offset": 0, 00:21:05.112 "data_size": 65536 00:21:05.112 }, 00:21:05.112 { 00:21:05.112 "name": "BaseBdev2", 00:21:05.112 "uuid": "b2cbd48e-0f76-41d1-910a-54ee5f04dae7", 00:21:05.112 "is_configured": true, 00:21:05.112 "data_offset": 0, 00:21:05.112 "data_size": 65536 00:21:05.112 }, 00:21:05.112 { 00:21:05.112 "name": "BaseBdev3", 00:21:05.112 "uuid": "bed32fb9-63e0-4a4b-9f2e-937febfed6bf", 00:21:05.112 "is_configured": true, 00:21:05.112 "data_offset": 0, 00:21:05.112 "data_size": 65536 00:21:05.112 }, 00:21:05.112 { 00:21:05.112 "name": "BaseBdev4", 00:21:05.112 "uuid": "9b5cb203-75ff-42c2-bc0c-205a25f1d2af", 00:21:05.112 "is_configured": true, 00:21:05.112 "data_offset": 0, 00:21:05.112 "data_size": 65536 00:21:05.112 } 00:21:05.112 ] 00:21:05.112 }' 00:21:05.112 02:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:05.112 02:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.045 02:26:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:06.045 02:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:06.045 02:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:06.045 02:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:06.045 02:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:06.045 02:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:06.045 02:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:06.045 02:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:06.045 [2024-07-11 02:26:56.322472] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:06.045 02:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:06.045 "name": "Existed_Raid", 00:21:06.045 "aliases": [ 00:21:06.045 "30ca793f-c173-4445-911a-e69bd2400235" 00:21:06.045 ], 00:21:06.045 "product_name": "Raid Volume", 00:21:06.045 "block_size": 512, 00:21:06.045 "num_blocks": 262144, 00:21:06.045 "uuid": "30ca793f-c173-4445-911a-e69bd2400235", 00:21:06.045 "assigned_rate_limits": { 00:21:06.045 "rw_ios_per_sec": 0, 00:21:06.045 "rw_mbytes_per_sec": 0, 00:21:06.045 "r_mbytes_per_sec": 0, 00:21:06.045 "w_mbytes_per_sec": 0 00:21:06.045 }, 00:21:06.045 "claimed": false, 00:21:06.045 "zoned": false, 00:21:06.045 "supported_io_types": { 00:21:06.045 "read": true, 00:21:06.045 "write": true, 00:21:06.045 "unmap": true, 00:21:06.045 "flush": true, 00:21:06.045 "reset": true, 00:21:06.045 "nvme_admin": false, 00:21:06.045 "nvme_io": false, 00:21:06.045 "nvme_io_md": false, 00:21:06.045 "write_zeroes": true, 00:21:06.045 "zcopy": false, 00:21:06.045 "get_zone_info": false, 00:21:06.045 "zone_management": false, 00:21:06.045 "zone_append": false, 00:21:06.045 "compare": false, 00:21:06.045 "compare_and_write": false, 00:21:06.045 "abort": false, 00:21:06.045 "seek_hole": false, 00:21:06.045 "seek_data": false, 00:21:06.045 "copy": false, 00:21:06.045 "nvme_iov_md": false 00:21:06.045 }, 00:21:06.045 "memory_domains": [ 00:21:06.045 { 00:21:06.045 "dma_device_id": "system", 00:21:06.045 "dma_device_type": 1 00:21:06.045 }, 00:21:06.045 { 00:21:06.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.045 "dma_device_type": 2 00:21:06.045 }, 00:21:06.045 { 00:21:06.045 "dma_device_id": "system", 00:21:06.045 "dma_device_type": 1 00:21:06.045 }, 00:21:06.045 { 00:21:06.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.045 "dma_device_type": 2 00:21:06.045 }, 00:21:06.045 { 00:21:06.045 "dma_device_id": "system", 00:21:06.045 "dma_device_type": 1 00:21:06.045 }, 00:21:06.045 { 00:21:06.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.045 "dma_device_type": 2 00:21:06.045 }, 00:21:06.045 { 00:21:06.045 "dma_device_id": "system", 00:21:06.045 "dma_device_type": 1 00:21:06.045 }, 00:21:06.045 { 00:21:06.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.045 "dma_device_type": 2 00:21:06.045 } 00:21:06.045 ], 00:21:06.045 "driver_specific": { 00:21:06.045 "raid": { 00:21:06.045 "uuid": "30ca793f-c173-4445-911a-e69bd2400235", 00:21:06.045 "strip_size_kb": 64, 00:21:06.045 
"state": "online", 00:21:06.045 "raid_level": "raid0", 00:21:06.045 "superblock": false, 00:21:06.045 "num_base_bdevs": 4, 00:21:06.045 "num_base_bdevs_discovered": 4, 00:21:06.045 "num_base_bdevs_operational": 4, 00:21:06.045 "base_bdevs_list": [ 00:21:06.045 { 00:21:06.045 "name": "BaseBdev1", 00:21:06.045 "uuid": "b58f9a45-a474-4769-8a2e-712a3f127892", 00:21:06.045 "is_configured": true, 00:21:06.045 "data_offset": 0, 00:21:06.045 "data_size": 65536 00:21:06.045 }, 00:21:06.045 { 00:21:06.045 "name": "BaseBdev2", 00:21:06.045 "uuid": "b2cbd48e-0f76-41d1-910a-54ee5f04dae7", 00:21:06.045 "is_configured": true, 00:21:06.045 "data_offset": 0, 00:21:06.045 "data_size": 65536 00:21:06.045 }, 00:21:06.045 { 00:21:06.045 "name": "BaseBdev3", 00:21:06.045 "uuid": "bed32fb9-63e0-4a4b-9f2e-937febfed6bf", 00:21:06.045 "is_configured": true, 00:21:06.045 "data_offset": 0, 00:21:06.045 "data_size": 65536 00:21:06.045 }, 00:21:06.045 { 00:21:06.045 "name": "BaseBdev4", 00:21:06.045 "uuid": "9b5cb203-75ff-42c2-bc0c-205a25f1d2af", 00:21:06.045 "is_configured": true, 00:21:06.045 "data_offset": 0, 00:21:06.045 "data_size": 65536 00:21:06.045 } 00:21:06.045 ] 00:21:06.045 } 00:21:06.045 } 00:21:06.045 }' 00:21:06.045 02:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:06.045 02:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:06.045 BaseBdev2 00:21:06.045 BaseBdev3 00:21:06.045 BaseBdev4' 00:21:06.045 02:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:06.045 02:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:06.045 02:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:06.610 02:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:06.610 "name": "BaseBdev1", 00:21:06.610 "aliases": [ 00:21:06.610 "b58f9a45-a474-4769-8a2e-712a3f127892" 00:21:06.610 ], 00:21:06.610 "product_name": "Malloc disk", 00:21:06.610 "block_size": 512, 00:21:06.610 "num_blocks": 65536, 00:21:06.610 "uuid": "b58f9a45-a474-4769-8a2e-712a3f127892", 00:21:06.610 "assigned_rate_limits": { 00:21:06.610 "rw_ios_per_sec": 0, 00:21:06.610 "rw_mbytes_per_sec": 0, 00:21:06.610 "r_mbytes_per_sec": 0, 00:21:06.610 "w_mbytes_per_sec": 0 00:21:06.610 }, 00:21:06.610 "claimed": true, 00:21:06.610 "claim_type": "exclusive_write", 00:21:06.610 "zoned": false, 00:21:06.610 "supported_io_types": { 00:21:06.610 "read": true, 00:21:06.610 "write": true, 00:21:06.610 "unmap": true, 00:21:06.610 "flush": true, 00:21:06.610 "reset": true, 00:21:06.610 "nvme_admin": false, 00:21:06.610 "nvme_io": false, 00:21:06.610 "nvme_io_md": false, 00:21:06.610 "write_zeroes": true, 00:21:06.610 "zcopy": true, 00:21:06.610 "get_zone_info": false, 00:21:06.610 "zone_management": false, 00:21:06.610 "zone_append": false, 00:21:06.610 "compare": false, 00:21:06.610 "compare_and_write": false, 00:21:06.610 "abort": true, 00:21:06.610 "seek_hole": false, 00:21:06.610 "seek_data": false, 00:21:06.610 "copy": true, 00:21:06.610 "nvme_iov_md": false 00:21:06.610 }, 00:21:06.610 "memory_domains": [ 00:21:06.610 { 00:21:06.610 "dma_device_id": "system", 00:21:06.610 "dma_device_type": 1 00:21:06.610 }, 00:21:06.610 { 00:21:06.610 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.610 "dma_device_type": 2 00:21:06.610 } 00:21:06.610 ], 00:21:06.610 "driver_specific": {} 00:21:06.610 }' 00:21:06.611 02:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:06.611 02:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:06.611 02:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:06.611 02:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:06.889 02:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:06.889 02:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:06.889 02:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:06.889 02:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:07.147 02:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:07.147 02:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:07.147 02:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:07.147 02:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:07.147 02:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:07.147 02:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:07.147 02:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:07.712 02:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:07.712 "name": "BaseBdev2", 00:21:07.712 "aliases": [ 00:21:07.712 "b2cbd48e-0f76-41d1-910a-54ee5f04dae7" 00:21:07.712 ], 00:21:07.712 "product_name": "Malloc disk", 00:21:07.712 "block_size": 512, 00:21:07.712 "num_blocks": 65536, 00:21:07.712 "uuid": "b2cbd48e-0f76-41d1-910a-54ee5f04dae7", 00:21:07.712 "assigned_rate_limits": { 00:21:07.712 "rw_ios_per_sec": 0, 00:21:07.712 "rw_mbytes_per_sec": 0, 00:21:07.712 "r_mbytes_per_sec": 0, 00:21:07.712 "w_mbytes_per_sec": 0 00:21:07.712 }, 00:21:07.713 "claimed": true, 00:21:07.713 "claim_type": "exclusive_write", 00:21:07.713 "zoned": false, 00:21:07.713 "supported_io_types": { 00:21:07.713 "read": true, 00:21:07.713 "write": true, 00:21:07.713 "unmap": true, 00:21:07.713 "flush": true, 00:21:07.713 "reset": true, 00:21:07.713 "nvme_admin": false, 00:21:07.713 "nvme_io": false, 00:21:07.713 "nvme_io_md": false, 00:21:07.713 "write_zeroes": true, 00:21:07.713 "zcopy": true, 00:21:07.713 "get_zone_info": false, 00:21:07.713 "zone_management": false, 00:21:07.713 "zone_append": false, 00:21:07.713 "compare": false, 00:21:07.713 "compare_and_write": false, 00:21:07.713 "abort": true, 00:21:07.713 "seek_hole": false, 00:21:07.713 "seek_data": false, 00:21:07.713 "copy": true, 00:21:07.713 "nvme_iov_md": false 00:21:07.713 }, 00:21:07.713 "memory_domains": [ 00:21:07.713 { 00:21:07.713 "dma_device_id": "system", 00:21:07.713 "dma_device_type": 1 00:21:07.713 }, 00:21:07.713 { 00:21:07.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.713 "dma_device_type": 2 00:21:07.713 } 00:21:07.713 ], 00:21:07.713 "driver_specific": {} 00:21:07.713 }' 00:21:07.713 02:26:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:07.713 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:07.713 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:07.713 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:07.713 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:07.970 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:07.970 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:07.970 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:07.970 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:07.970 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:07.970 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:07.970 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:07.970 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:07.970 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:07.970 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:08.228 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:08.228 "name": "BaseBdev3", 00:21:08.228 "aliases": [ 00:21:08.228 "bed32fb9-63e0-4a4b-9f2e-937febfed6bf" 00:21:08.228 ], 00:21:08.228 "product_name": "Malloc disk", 00:21:08.228 "block_size": 512, 00:21:08.228 "num_blocks": 65536, 00:21:08.228 "uuid": "bed32fb9-63e0-4a4b-9f2e-937febfed6bf", 00:21:08.228 "assigned_rate_limits": { 00:21:08.228 "rw_ios_per_sec": 0, 00:21:08.228 "rw_mbytes_per_sec": 0, 00:21:08.228 "r_mbytes_per_sec": 0, 00:21:08.228 "w_mbytes_per_sec": 0 00:21:08.228 }, 00:21:08.228 "claimed": true, 00:21:08.228 "claim_type": "exclusive_write", 00:21:08.228 "zoned": false, 00:21:08.228 "supported_io_types": { 00:21:08.228 "read": true, 00:21:08.228 "write": true, 00:21:08.228 "unmap": true, 00:21:08.228 "flush": true, 00:21:08.228 "reset": true, 00:21:08.228 "nvme_admin": false, 00:21:08.228 "nvme_io": false, 00:21:08.228 "nvme_io_md": false, 00:21:08.228 "write_zeroes": true, 00:21:08.228 "zcopy": true, 00:21:08.228 "get_zone_info": false, 00:21:08.228 "zone_management": false, 00:21:08.228 "zone_append": false, 00:21:08.228 "compare": false, 00:21:08.228 "compare_and_write": false, 00:21:08.228 "abort": true, 00:21:08.228 "seek_hole": false, 00:21:08.228 "seek_data": false, 00:21:08.228 "copy": true, 00:21:08.228 "nvme_iov_md": false 00:21:08.228 }, 00:21:08.228 "memory_domains": [ 00:21:08.228 { 00:21:08.228 "dma_device_id": "system", 00:21:08.228 "dma_device_type": 1 00:21:08.228 }, 00:21:08.228 { 00:21:08.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:08.228 "dma_device_type": 2 00:21:08.228 } 00:21:08.228 ], 00:21:08.228 "driver_specific": {} 00:21:08.228 }' 00:21:08.228 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:08.228 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
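The alternating jq probes above are verify_raid_bdev_properties at work: for every base bdev reported as configured, the raid volume's block_size, md_size, md_interleave and dif_type must match the member's values (512 == 512 and null == null in this run). A condensed reconstruction of one pass; check_prop is a hypothetical helper written for illustration, while the RPCs and property names come from the trace:

# Compare one property of the raid volume against a member bdev.
check_prop() {
    local prop=$1 base=$2
    local raid_val base_val
    raid_val=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-raid.sock \
        bdev_get_bdevs -b Existed_Raid | jq ".[] | .$prop")
    base_val=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-raid.sock \
        bdev_get_bdevs -b "$base" | jq ".[] | .$prop")
    [[ "$raid_val" == "$base_val" ]] || echo "mismatch on $prop for $base"
}

for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    for p in block_size md_size md_interleave dif_type; do
        check_prop "$p" "$b"
    done
done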
00:21:08.487 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:08.487 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:08.487 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:08.487 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:08.487 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:08.487 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:08.487 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:08.487 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:08.487 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:08.746 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:08.746 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:08.746 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:08.746 02:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:08.746 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:08.746 "name": "BaseBdev4", 00:21:08.746 "aliases": [ 00:21:08.746 "9b5cb203-75ff-42c2-bc0c-205a25f1d2af" 00:21:08.746 ], 00:21:08.746 "product_name": "Malloc disk", 00:21:08.746 "block_size": 512, 00:21:08.746 "num_blocks": 65536, 00:21:08.746 "uuid": "9b5cb203-75ff-42c2-bc0c-205a25f1d2af", 00:21:08.746 "assigned_rate_limits": { 00:21:08.746 "rw_ios_per_sec": 0, 00:21:08.746 "rw_mbytes_per_sec": 0, 00:21:08.746 "r_mbytes_per_sec": 0, 00:21:08.746 "w_mbytes_per_sec": 0 00:21:08.746 }, 00:21:08.746 "claimed": true, 00:21:08.746 "claim_type": "exclusive_write", 00:21:08.746 "zoned": false, 00:21:08.746 "supported_io_types": { 00:21:08.746 "read": true, 00:21:08.746 "write": true, 00:21:08.746 "unmap": true, 00:21:08.746 "flush": true, 00:21:08.746 "reset": true, 00:21:08.746 "nvme_admin": false, 00:21:08.746 "nvme_io": false, 00:21:08.746 "nvme_io_md": false, 00:21:08.746 "write_zeroes": true, 00:21:08.746 "zcopy": true, 00:21:08.746 "get_zone_info": false, 00:21:08.746 "zone_management": false, 00:21:08.746 "zone_append": false, 00:21:08.746 "compare": false, 00:21:08.746 "compare_and_write": false, 00:21:08.746 "abort": true, 00:21:08.746 "seek_hole": false, 00:21:08.746 "seek_data": false, 00:21:08.746 "copy": true, 00:21:08.746 "nvme_iov_md": false 00:21:08.746 }, 00:21:08.746 "memory_domains": [ 00:21:08.746 { 00:21:08.746 "dma_device_id": "system", 00:21:08.746 "dma_device_type": 1 00:21:08.746 }, 00:21:08.746 { 00:21:08.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:08.746 "dma_device_type": 2 00:21:08.746 } 00:21:08.746 ], 00:21:08.746 "driver_specific": {} 00:21:08.746 }' 00:21:08.746 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:09.005 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:09.005 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:09.005 02:26:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:09.005 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:09.005 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:09.005 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:09.264 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:09.264 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:09.264 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:09.264 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:09.264 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:09.264 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:09.523 [2024-07-11 02:26:59.811495] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:09.523 [2024-07-11 02:26:59.811521] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:09.523 [2024-07-11 02:26:59.811567] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:09.523 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:09.523 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:21:09.523 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:09.523 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:09.523 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:21:09.523 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:21:09.523 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:09.523 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:21:09.523 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:09.523 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:09.523 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:09.523 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:09.523 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:09.523 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:09.523 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:09.523 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.523 02:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:09.781 02:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:09.781 
"name": "Existed_Raid", 00:21:09.781 "uuid": "30ca793f-c173-4445-911a-e69bd2400235", 00:21:09.781 "strip_size_kb": 64, 00:21:09.781 "state": "offline", 00:21:09.781 "raid_level": "raid0", 00:21:09.781 "superblock": false, 00:21:09.781 "num_base_bdevs": 4, 00:21:09.781 "num_base_bdevs_discovered": 3, 00:21:09.781 "num_base_bdevs_operational": 3, 00:21:09.781 "base_bdevs_list": [ 00:21:09.781 { 00:21:09.781 "name": null, 00:21:09.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.781 "is_configured": false, 00:21:09.781 "data_offset": 0, 00:21:09.781 "data_size": 65536 00:21:09.781 }, 00:21:09.781 { 00:21:09.781 "name": "BaseBdev2", 00:21:09.781 "uuid": "b2cbd48e-0f76-41d1-910a-54ee5f04dae7", 00:21:09.781 "is_configured": true, 00:21:09.781 "data_offset": 0, 00:21:09.781 "data_size": 65536 00:21:09.781 }, 00:21:09.781 { 00:21:09.781 "name": "BaseBdev3", 00:21:09.781 "uuid": "bed32fb9-63e0-4a4b-9f2e-937febfed6bf", 00:21:09.781 "is_configured": true, 00:21:09.781 "data_offset": 0, 00:21:09.781 "data_size": 65536 00:21:09.781 }, 00:21:09.781 { 00:21:09.781 "name": "BaseBdev4", 00:21:09.781 "uuid": "9b5cb203-75ff-42c2-bc0c-205a25f1d2af", 00:21:09.781 "is_configured": true, 00:21:09.781 "data_offset": 0, 00:21:09.781 "data_size": 65536 00:21:09.782 } 00:21:09.782 ] 00:21:09.782 }' 00:21:09.782 02:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:09.782 02:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.349 02:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:10.349 02:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:10.349 02:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.349 02:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:10.612 02:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:10.612 02:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:10.612 02:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:11.243 [2024-07-11 02:27:01.456923] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:11.243 02:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:11.243 02:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:11.243 02:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.243 02:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:11.810 02:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:11.810 02:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:11.810 02:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:12.378 
[2024-07-11 02:27:02.503812] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:12.378 02:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:12.378 02:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:12.378 02:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.378 02:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:12.378 02:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:12.378 02:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:12.378 02:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:12.947 [2024-07-11 02:27:03.073586] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:12.947 [2024-07-11 02:27:03.073627] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe62d70 name Existed_Raid, state offline 00:21:12.947 02:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:12.947 02:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:12.947 02:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.947 02:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:13.206 02:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:13.206 02:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:13.206 02:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:21:13.206 02:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:13.206 02:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:13.206 02:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:13.773 BaseBdev2 00:21:13.773 02:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:13.773 02:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:13.773 02:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:13.773 02:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:13.773 02:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:13.773 02:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:13.773 02:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:14.032 02:27:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:14.291 [ 00:21:14.291 { 00:21:14.291 "name": "BaseBdev2", 00:21:14.291 "aliases": [ 00:21:14.291 "1067a30e-a860-4d70-a9ec-bec1f9be040b" 00:21:14.291 ], 00:21:14.291 "product_name": "Malloc disk", 00:21:14.291 "block_size": 512, 00:21:14.291 "num_blocks": 65536, 00:21:14.291 "uuid": "1067a30e-a860-4d70-a9ec-bec1f9be040b", 00:21:14.291 "assigned_rate_limits": { 00:21:14.291 "rw_ios_per_sec": 0, 00:21:14.291 "rw_mbytes_per_sec": 0, 00:21:14.291 "r_mbytes_per_sec": 0, 00:21:14.291 "w_mbytes_per_sec": 0 00:21:14.291 }, 00:21:14.291 "claimed": false, 00:21:14.291 "zoned": false, 00:21:14.291 "supported_io_types": { 00:21:14.291 "read": true, 00:21:14.291 "write": true, 00:21:14.291 "unmap": true, 00:21:14.291 "flush": true, 00:21:14.291 "reset": true, 00:21:14.291 "nvme_admin": false, 00:21:14.291 "nvme_io": false, 00:21:14.291 "nvme_io_md": false, 00:21:14.291 "write_zeroes": true, 00:21:14.291 "zcopy": true, 00:21:14.291 "get_zone_info": false, 00:21:14.291 "zone_management": false, 00:21:14.291 "zone_append": false, 00:21:14.291 "compare": false, 00:21:14.291 "compare_and_write": false, 00:21:14.291 "abort": true, 00:21:14.291 "seek_hole": false, 00:21:14.291 "seek_data": false, 00:21:14.291 "copy": true, 00:21:14.291 "nvme_iov_md": false 00:21:14.291 }, 00:21:14.291 "memory_domains": [ 00:21:14.291 { 00:21:14.291 "dma_device_id": "system", 00:21:14.291 "dma_device_type": 1 00:21:14.291 }, 00:21:14.291 { 00:21:14.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.291 "dma_device_type": 2 00:21:14.291 } 00:21:14.291 ], 00:21:14.291 "driver_specific": {} 00:21:14.291 } 00:21:14.291 ] 00:21:14.291 02:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:14.291 02:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:14.291 02:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:14.291 02:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:14.859 BaseBdev3 00:21:14.859 02:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:14.859 02:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:14.859 02:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:14.859 02:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:14.859 02:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:14.859 02:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:14.859 02:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:15.426 02:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:15.685 [ 00:21:15.685 { 00:21:15.685 "name": "BaseBdev3", 00:21:15.685 "aliases": [ 00:21:15.685 
"1173657f-e59a-4548-a0a0-48680cac76ba" 00:21:15.685 ], 00:21:15.685 "product_name": "Malloc disk", 00:21:15.685 "block_size": 512, 00:21:15.685 "num_blocks": 65536, 00:21:15.685 "uuid": "1173657f-e59a-4548-a0a0-48680cac76ba", 00:21:15.685 "assigned_rate_limits": { 00:21:15.685 "rw_ios_per_sec": 0, 00:21:15.685 "rw_mbytes_per_sec": 0, 00:21:15.685 "r_mbytes_per_sec": 0, 00:21:15.685 "w_mbytes_per_sec": 0 00:21:15.685 }, 00:21:15.685 "claimed": false, 00:21:15.685 "zoned": false, 00:21:15.685 "supported_io_types": { 00:21:15.685 "read": true, 00:21:15.685 "write": true, 00:21:15.685 "unmap": true, 00:21:15.685 "flush": true, 00:21:15.685 "reset": true, 00:21:15.685 "nvme_admin": false, 00:21:15.685 "nvme_io": false, 00:21:15.685 "nvme_io_md": false, 00:21:15.685 "write_zeroes": true, 00:21:15.685 "zcopy": true, 00:21:15.685 "get_zone_info": false, 00:21:15.685 "zone_management": false, 00:21:15.685 "zone_append": false, 00:21:15.685 "compare": false, 00:21:15.685 "compare_and_write": false, 00:21:15.685 "abort": true, 00:21:15.685 "seek_hole": false, 00:21:15.685 "seek_data": false, 00:21:15.685 "copy": true, 00:21:15.685 "nvme_iov_md": false 00:21:15.685 }, 00:21:15.685 "memory_domains": [ 00:21:15.685 { 00:21:15.685 "dma_device_id": "system", 00:21:15.685 "dma_device_type": 1 00:21:15.685 }, 00:21:15.685 { 00:21:15.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.685 "dma_device_type": 2 00:21:15.685 } 00:21:15.685 ], 00:21:15.685 "driver_specific": {} 00:21:15.685 } 00:21:15.685 ] 00:21:15.685 02:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:15.685 02:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:15.685 02:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:15.685 02:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:15.976 BaseBdev4 00:21:15.976 02:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:21:15.976 02:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:21:15.976 02:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:15.976 02:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:15.976 02:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:15.976 02:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:15.976 02:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:16.542 02:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:17.111 [ 00:21:17.112 { 00:21:17.112 "name": "BaseBdev4", 00:21:17.112 "aliases": [ 00:21:17.112 "823b392e-7715-4e41-8fa3-7a2144897936" 00:21:17.112 ], 00:21:17.112 "product_name": "Malloc disk", 00:21:17.112 "block_size": 512, 00:21:17.112 "num_blocks": 65536, 00:21:17.112 "uuid": "823b392e-7715-4e41-8fa3-7a2144897936", 00:21:17.112 "assigned_rate_limits": { 00:21:17.112 
"rw_ios_per_sec": 0, 00:21:17.112 "rw_mbytes_per_sec": 0, 00:21:17.112 "r_mbytes_per_sec": 0, 00:21:17.112 "w_mbytes_per_sec": 0 00:21:17.112 }, 00:21:17.112 "claimed": false, 00:21:17.112 "zoned": false, 00:21:17.112 "supported_io_types": { 00:21:17.112 "read": true, 00:21:17.112 "write": true, 00:21:17.112 "unmap": true, 00:21:17.112 "flush": true, 00:21:17.112 "reset": true, 00:21:17.112 "nvme_admin": false, 00:21:17.112 "nvme_io": false, 00:21:17.112 "nvme_io_md": false, 00:21:17.112 "write_zeroes": true, 00:21:17.112 "zcopy": true, 00:21:17.112 "get_zone_info": false, 00:21:17.112 "zone_management": false, 00:21:17.112 "zone_append": false, 00:21:17.112 "compare": false, 00:21:17.112 "compare_and_write": false, 00:21:17.112 "abort": true, 00:21:17.112 "seek_hole": false, 00:21:17.112 "seek_data": false, 00:21:17.112 "copy": true, 00:21:17.112 "nvme_iov_md": false 00:21:17.112 }, 00:21:17.112 "memory_domains": [ 00:21:17.112 { 00:21:17.112 "dma_device_id": "system", 00:21:17.112 "dma_device_type": 1 00:21:17.112 }, 00:21:17.112 { 00:21:17.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.112 "dma_device_type": 2 00:21:17.112 } 00:21:17.112 ], 00:21:17.112 "driver_specific": {} 00:21:17.112 } 00:21:17.112 ] 00:21:17.112 02:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:17.112 02:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:17.112 02:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:17.112 02:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:17.371 [2024-07-11 02:27:07.767986] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:17.371 [2024-07-11 02:27:07.768026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:17.371 [2024-07-11 02:27:07.768046] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:17.371 [2024-07-11 02:27:07.769375] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:17.371 [2024-07-11 02:27:07.769415] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:17.630 02:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:17.630 02:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:17.630 02:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:17.630 02:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:17.630 02:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:17.630 02:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:17.630 02:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:17.630 02:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:17.630 02:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:17.630 02:27:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:17.630 02:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.630 02:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:18.197 02:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:18.197 "name": "Existed_Raid", 00:21:18.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.197 "strip_size_kb": 64, 00:21:18.197 "state": "configuring", 00:21:18.197 "raid_level": "raid0", 00:21:18.197 "superblock": false, 00:21:18.197 "num_base_bdevs": 4, 00:21:18.197 "num_base_bdevs_discovered": 3, 00:21:18.197 "num_base_bdevs_operational": 4, 00:21:18.197 "base_bdevs_list": [ 00:21:18.197 { 00:21:18.197 "name": "BaseBdev1", 00:21:18.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.197 "is_configured": false, 00:21:18.197 "data_offset": 0, 00:21:18.197 "data_size": 0 00:21:18.197 }, 00:21:18.197 { 00:21:18.197 "name": "BaseBdev2", 00:21:18.197 "uuid": "1067a30e-a860-4d70-a9ec-bec1f9be040b", 00:21:18.197 "is_configured": true, 00:21:18.197 "data_offset": 0, 00:21:18.197 "data_size": 65536 00:21:18.197 }, 00:21:18.197 { 00:21:18.197 "name": "BaseBdev3", 00:21:18.197 "uuid": "1173657f-e59a-4548-a0a0-48680cac76ba", 00:21:18.197 "is_configured": true, 00:21:18.197 "data_offset": 0, 00:21:18.197 "data_size": 65536 00:21:18.197 }, 00:21:18.197 { 00:21:18.197 "name": "BaseBdev4", 00:21:18.197 "uuid": "823b392e-7715-4e41-8fa3-7a2144897936", 00:21:18.197 "is_configured": true, 00:21:18.197 "data_offset": 0, 00:21:18.197 "data_size": 65536 00:21:18.197 } 00:21:18.197 ] 00:21:18.197 }' 00:21:18.197 02:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:18.197 02:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.764 02:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:19.022 [2024-07-11 02:27:09.232048] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:19.022 02:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:19.022 02:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:19.022 02:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:19.022 02:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:19.022 02:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:19.022 02:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:19.022 02:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:19.022 02:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:19.022 02:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:19.022 02:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:19.022 02:27:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.022 02:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:19.281 02:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:19.281 "name": "Existed_Raid", 00:21:19.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.281 "strip_size_kb": 64, 00:21:19.281 "state": "configuring", 00:21:19.281 "raid_level": "raid0", 00:21:19.281 "superblock": false, 00:21:19.281 "num_base_bdevs": 4, 00:21:19.281 "num_base_bdevs_discovered": 2, 00:21:19.281 "num_base_bdevs_operational": 4, 00:21:19.281 "base_bdevs_list": [ 00:21:19.281 { 00:21:19.281 "name": "BaseBdev1", 00:21:19.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.281 "is_configured": false, 00:21:19.281 "data_offset": 0, 00:21:19.281 "data_size": 0 00:21:19.281 }, 00:21:19.281 { 00:21:19.281 "name": null, 00:21:19.281 "uuid": "1067a30e-a860-4d70-a9ec-bec1f9be040b", 00:21:19.281 "is_configured": false, 00:21:19.281 "data_offset": 0, 00:21:19.281 "data_size": 65536 00:21:19.281 }, 00:21:19.281 { 00:21:19.281 "name": "BaseBdev3", 00:21:19.281 "uuid": "1173657f-e59a-4548-a0a0-48680cac76ba", 00:21:19.281 "is_configured": true, 00:21:19.281 "data_offset": 0, 00:21:19.281 "data_size": 65536 00:21:19.281 }, 00:21:19.281 { 00:21:19.281 "name": "BaseBdev4", 00:21:19.281 "uuid": "823b392e-7715-4e41-8fa3-7a2144897936", 00:21:19.281 "is_configured": true, 00:21:19.281 "data_offset": 0, 00:21:19.281 "data_size": 65536 00:21:19.281 } 00:21:19.281 ] 00:21:19.281 }' 00:21:19.281 02:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:19.281 02:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.217 02:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.217 02:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:20.217 02:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:20.217 02:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:20.476 [2024-07-11 02:27:10.860855] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:20.476 BaseBdev1 00:21:20.476 02:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:21:20.476 02:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:20.476 02:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:20.476 02:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:20.476 02:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:20.476 02:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:20.476 02:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:20.735 02:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:20.994 [ 00:21:20.994 { 00:21:20.994 "name": "BaseBdev1", 00:21:20.994 "aliases": [ 00:21:20.994 "2785f37e-b9ce-463b-93f9-6542f773bacd" 00:21:20.994 ], 00:21:20.994 "product_name": "Malloc disk", 00:21:20.994 "block_size": 512, 00:21:20.994 "num_blocks": 65536, 00:21:20.994 "uuid": "2785f37e-b9ce-463b-93f9-6542f773bacd", 00:21:20.994 "assigned_rate_limits": { 00:21:20.994 "rw_ios_per_sec": 0, 00:21:20.994 "rw_mbytes_per_sec": 0, 00:21:20.994 "r_mbytes_per_sec": 0, 00:21:20.994 "w_mbytes_per_sec": 0 00:21:20.994 }, 00:21:20.994 "claimed": true, 00:21:20.994 "claim_type": "exclusive_write", 00:21:20.994 "zoned": false, 00:21:20.994 "supported_io_types": { 00:21:20.994 "read": true, 00:21:20.994 "write": true, 00:21:20.994 "unmap": true, 00:21:20.994 "flush": true, 00:21:20.994 "reset": true, 00:21:20.994 "nvme_admin": false, 00:21:20.994 "nvme_io": false, 00:21:20.994 "nvme_io_md": false, 00:21:20.994 "write_zeroes": true, 00:21:20.994 "zcopy": true, 00:21:20.994 "get_zone_info": false, 00:21:20.994 "zone_management": false, 00:21:20.994 "zone_append": false, 00:21:20.994 "compare": false, 00:21:20.994 "compare_and_write": false, 00:21:20.994 "abort": true, 00:21:20.994 "seek_hole": false, 00:21:20.994 "seek_data": false, 00:21:20.994 "copy": true, 00:21:20.994 "nvme_iov_md": false 00:21:20.994 }, 00:21:20.994 "memory_domains": [ 00:21:20.994 { 00:21:20.994 "dma_device_id": "system", 00:21:20.994 "dma_device_type": 1 00:21:20.994 }, 00:21:20.994 { 00:21:20.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:20.994 "dma_device_type": 2 00:21:20.994 } 00:21:20.994 ], 00:21:20.994 "driver_specific": {} 00:21:20.994 } 00:21:20.994 ] 00:21:20.994 02:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:20.994 02:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:20.994 02:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:20.994 02:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:20.994 02:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:20.994 02:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:20.994 02:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:20.994 02:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:20.994 02:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:20.994 02:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:20.994 02:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:20.994 02:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.994 02:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:21:21.254 02:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:21.254 "name": "Existed_Raid", 00:21:21.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.254 "strip_size_kb": 64, 00:21:21.254 "state": "configuring", 00:21:21.254 "raid_level": "raid0", 00:21:21.254 "superblock": false, 00:21:21.254 "num_base_bdevs": 4, 00:21:21.254 "num_base_bdevs_discovered": 3, 00:21:21.254 "num_base_bdevs_operational": 4, 00:21:21.254 "base_bdevs_list": [ 00:21:21.254 { 00:21:21.254 "name": "BaseBdev1", 00:21:21.254 "uuid": "2785f37e-b9ce-463b-93f9-6542f773bacd", 00:21:21.254 "is_configured": true, 00:21:21.254 "data_offset": 0, 00:21:21.254 "data_size": 65536 00:21:21.254 }, 00:21:21.254 { 00:21:21.254 "name": null, 00:21:21.254 "uuid": "1067a30e-a860-4d70-a9ec-bec1f9be040b", 00:21:21.254 "is_configured": false, 00:21:21.254 "data_offset": 0, 00:21:21.254 "data_size": 65536 00:21:21.254 }, 00:21:21.254 { 00:21:21.254 "name": "BaseBdev3", 00:21:21.254 "uuid": "1173657f-e59a-4548-a0a0-48680cac76ba", 00:21:21.254 "is_configured": true, 00:21:21.254 "data_offset": 0, 00:21:21.254 "data_size": 65536 00:21:21.254 }, 00:21:21.254 { 00:21:21.254 "name": "BaseBdev4", 00:21:21.254 "uuid": "823b392e-7715-4e41-8fa3-7a2144897936", 00:21:21.254 "is_configured": true, 00:21:21.254 "data_offset": 0, 00:21:21.254 "data_size": 65536 00:21:21.254 } 00:21:21.254 ] 00:21:21.254 }' 00:21:21.254 02:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:21.254 02:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.191 02:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.191 02:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:22.450 02:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:22.450 02:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:22.709 [2024-07-11 02:27:12.966489] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:22.709 02:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:22.709 02:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:22.709 02:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:22.709 02:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:22.709 02:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:22.709 02:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:22.709 02:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:22.709 02:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:22.709 02:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:22.709 02:27:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:21:22.709 02:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.709 02:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:23.284 02:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:23.284 "name": "Existed_Raid", 00:21:23.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.284 "strip_size_kb": 64, 00:21:23.284 "state": "configuring", 00:21:23.284 "raid_level": "raid0", 00:21:23.284 "superblock": false, 00:21:23.284 "num_base_bdevs": 4, 00:21:23.284 "num_base_bdevs_discovered": 2, 00:21:23.284 "num_base_bdevs_operational": 4, 00:21:23.284 "base_bdevs_list": [ 00:21:23.284 { 00:21:23.284 "name": "BaseBdev1", 00:21:23.284 "uuid": "2785f37e-b9ce-463b-93f9-6542f773bacd", 00:21:23.284 "is_configured": true, 00:21:23.284 "data_offset": 0, 00:21:23.284 "data_size": 65536 00:21:23.284 }, 00:21:23.284 { 00:21:23.284 "name": null, 00:21:23.284 "uuid": "1067a30e-a860-4d70-a9ec-bec1f9be040b", 00:21:23.284 "is_configured": false, 00:21:23.284 "data_offset": 0, 00:21:23.284 "data_size": 65536 00:21:23.284 }, 00:21:23.284 { 00:21:23.284 "name": null, 00:21:23.284 "uuid": "1173657f-e59a-4548-a0a0-48680cac76ba", 00:21:23.284 "is_configured": false, 00:21:23.284 "data_offset": 0, 00:21:23.284 "data_size": 65536 00:21:23.284 }, 00:21:23.284 { 00:21:23.284 "name": "BaseBdev4", 00:21:23.284 "uuid": "823b392e-7715-4e41-8fa3-7a2144897936", 00:21:23.284 "is_configured": true, 00:21:23.284 "data_offset": 0, 00:21:23.284 "data_size": 65536 00:21:23.284 } 00:21:23.284 ] 00:21:23.284 }' 00:21:23.284 02:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:23.284 02:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.856 02:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.856 02:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:24.115 02:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:24.115 02:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:24.374 [2024-07-11 02:27:14.542684] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:24.374 02:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:24.374 02:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:24.374 02:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:24.374 02:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:24.374 02:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:24.374 02:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:24.374 02:27:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:24.374 02:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:24.374 02:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:24.374 02:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:24.374 02:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.374 02:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:24.633 02:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:24.633 "name": "Existed_Raid", 00:21:24.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.633 "strip_size_kb": 64, 00:21:24.633 "state": "configuring", 00:21:24.633 "raid_level": "raid0", 00:21:24.633 "superblock": false, 00:21:24.633 "num_base_bdevs": 4, 00:21:24.633 "num_base_bdevs_discovered": 3, 00:21:24.633 "num_base_bdevs_operational": 4, 00:21:24.633 "base_bdevs_list": [ 00:21:24.633 { 00:21:24.633 "name": "BaseBdev1", 00:21:24.633 "uuid": "2785f37e-b9ce-463b-93f9-6542f773bacd", 00:21:24.633 "is_configured": true, 00:21:24.633 "data_offset": 0, 00:21:24.633 "data_size": 65536 00:21:24.633 }, 00:21:24.633 { 00:21:24.633 "name": null, 00:21:24.633 "uuid": "1067a30e-a860-4d70-a9ec-bec1f9be040b", 00:21:24.633 "is_configured": false, 00:21:24.633 "data_offset": 0, 00:21:24.633 "data_size": 65536 00:21:24.633 }, 00:21:24.633 { 00:21:24.633 "name": "BaseBdev3", 00:21:24.633 "uuid": "1173657f-e59a-4548-a0a0-48680cac76ba", 00:21:24.633 "is_configured": true, 00:21:24.633 "data_offset": 0, 00:21:24.633 "data_size": 65536 00:21:24.633 }, 00:21:24.633 { 00:21:24.633 "name": "BaseBdev4", 00:21:24.633 "uuid": "823b392e-7715-4e41-8fa3-7a2144897936", 00:21:24.633 "is_configured": true, 00:21:24.633 "data_offset": 0, 00:21:24.633 "data_size": 65536 00:21:24.633 } 00:21:24.633 ] 00:21:24.633 }' 00:21:24.633 02:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:24.633 02:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.201 02:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.201 02:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:25.201 02:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:25.201 02:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:25.460 [2024-07-11 02:27:15.802028] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:25.460 02:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:25.460 02:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:25.460 02:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:25.460 02:27:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:25.460 02:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:25.460 02:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:25.460 02:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:25.460 02:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:25.460 02:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:25.460 02:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:25.460 02:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.460 02:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:25.720 02:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:25.720 "name": "Existed_Raid", 00:21:25.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.720 "strip_size_kb": 64, 00:21:25.720 "state": "configuring", 00:21:25.720 "raid_level": "raid0", 00:21:25.720 "superblock": false, 00:21:25.720 "num_base_bdevs": 4, 00:21:25.720 "num_base_bdevs_discovered": 2, 00:21:25.720 "num_base_bdevs_operational": 4, 00:21:25.720 "base_bdevs_list": [ 00:21:25.720 { 00:21:25.720 "name": null, 00:21:25.720 "uuid": "2785f37e-b9ce-463b-93f9-6542f773bacd", 00:21:25.720 "is_configured": false, 00:21:25.720 "data_offset": 0, 00:21:25.720 "data_size": 65536 00:21:25.720 }, 00:21:25.720 { 00:21:25.720 "name": null, 00:21:25.720 "uuid": "1067a30e-a860-4d70-a9ec-bec1f9be040b", 00:21:25.720 "is_configured": false, 00:21:25.720 "data_offset": 0, 00:21:25.720 "data_size": 65536 00:21:25.720 }, 00:21:25.720 { 00:21:25.720 "name": "BaseBdev3", 00:21:25.720 "uuid": "1173657f-e59a-4548-a0a0-48680cac76ba", 00:21:25.720 "is_configured": true, 00:21:25.720 "data_offset": 0, 00:21:25.720 "data_size": 65536 00:21:25.720 }, 00:21:25.720 { 00:21:25.720 "name": "BaseBdev4", 00:21:25.720 "uuid": "823b392e-7715-4e41-8fa3-7a2144897936", 00:21:25.720 "is_configured": true, 00:21:25.720 "data_offset": 0, 00:21:25.720 "data_size": 65536 00:21:25.720 } 00:21:25.720 ] 00:21:25.720 }' 00:21:25.720 02:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:25.720 02:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.289 02:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.289 02:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:26.548 02:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:26.548 02:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:26.808 [2024-07-11 02:27:17.176163] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:26.808 02:27:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:26.808 02:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:26.808 02:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:26.808 02:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:26.808 02:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:26.808 02:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:26.808 02:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:26.808 02:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:26.808 02:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:26.808 02:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:26.808 02:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.808 02:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:27.067 02:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:27.067 "name": "Existed_Raid", 00:21:27.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.067 "strip_size_kb": 64, 00:21:27.067 "state": "configuring", 00:21:27.067 "raid_level": "raid0", 00:21:27.067 "superblock": false, 00:21:27.067 "num_base_bdevs": 4, 00:21:27.067 "num_base_bdevs_discovered": 3, 00:21:27.067 "num_base_bdevs_operational": 4, 00:21:27.067 "base_bdevs_list": [ 00:21:27.067 { 00:21:27.067 "name": null, 00:21:27.067 "uuid": "2785f37e-b9ce-463b-93f9-6542f773bacd", 00:21:27.067 "is_configured": false, 00:21:27.067 "data_offset": 0, 00:21:27.067 "data_size": 65536 00:21:27.067 }, 00:21:27.067 { 00:21:27.067 "name": "BaseBdev2", 00:21:27.067 "uuid": "1067a30e-a860-4d70-a9ec-bec1f9be040b", 00:21:27.067 "is_configured": true, 00:21:27.067 "data_offset": 0, 00:21:27.067 "data_size": 65536 00:21:27.067 }, 00:21:27.067 { 00:21:27.067 "name": "BaseBdev3", 00:21:27.067 "uuid": "1173657f-e59a-4548-a0a0-48680cac76ba", 00:21:27.067 "is_configured": true, 00:21:27.067 "data_offset": 0, 00:21:27.067 "data_size": 65536 00:21:27.067 }, 00:21:27.067 { 00:21:27.067 "name": "BaseBdev4", 00:21:27.067 "uuid": "823b392e-7715-4e41-8fa3-7a2144897936", 00:21:27.067 "is_configured": true, 00:21:27.067 "data_offset": 0, 00:21:27.067 "data_size": 65536 00:21:27.067 } 00:21:27.067 ] 00:21:27.067 }' 00:21:27.067 02:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:27.067 02:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.013 02:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.013 02:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:28.013 02:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:21:28.013 
02:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.013 02:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:28.271 02:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 2785f37e-b9ce-463b-93f9-6542f773bacd 00:21:28.530 [2024-07-11 02:27:18.807882] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:28.530 [2024-07-11 02:27:18.807917] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xcaf880 00:21:28.530 [2024-07-11 02:27:18.807926] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:28.530 [2024-07-11 02:27:18.808114] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc9cb50 00:21:28.530 [2024-07-11 02:27:18.808227] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xcaf880 00:21:28.530 [2024-07-11 02:27:18.808237] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xcaf880 00:21:28.530 [2024-07-11 02:27:18.808391] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:28.530 NewBaseBdev 00:21:28.530 02:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:21:28.530 02:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:21:28.530 02:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:28.530 02:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:28.530 02:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:28.530 02:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:28.530 02:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:28.789 02:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:29.049 [ 00:21:29.049 { 00:21:29.049 "name": "NewBaseBdev", 00:21:29.049 "aliases": [ 00:21:29.049 "2785f37e-b9ce-463b-93f9-6542f773bacd" 00:21:29.049 ], 00:21:29.049 "product_name": "Malloc disk", 00:21:29.049 "block_size": 512, 00:21:29.049 "num_blocks": 65536, 00:21:29.049 "uuid": "2785f37e-b9ce-463b-93f9-6542f773bacd", 00:21:29.049 "assigned_rate_limits": { 00:21:29.049 "rw_ios_per_sec": 0, 00:21:29.049 "rw_mbytes_per_sec": 0, 00:21:29.049 "r_mbytes_per_sec": 0, 00:21:29.049 "w_mbytes_per_sec": 0 00:21:29.049 }, 00:21:29.049 "claimed": true, 00:21:29.049 "claim_type": "exclusive_write", 00:21:29.049 "zoned": false, 00:21:29.049 "supported_io_types": { 00:21:29.049 "read": true, 00:21:29.049 "write": true, 00:21:29.049 "unmap": true, 00:21:29.049 "flush": true, 00:21:29.049 "reset": true, 00:21:29.049 "nvme_admin": false, 00:21:29.049 "nvme_io": false, 00:21:29.049 "nvme_io_md": false, 00:21:29.049 "write_zeroes": true, 00:21:29.049 "zcopy": true, 
00:21:29.049 "get_zone_info": false, 00:21:29.049 "zone_management": false, 00:21:29.049 "zone_append": false, 00:21:29.049 "compare": false, 00:21:29.049 "compare_and_write": false, 00:21:29.049 "abort": true, 00:21:29.049 "seek_hole": false, 00:21:29.049 "seek_data": false, 00:21:29.050 "copy": true, 00:21:29.050 "nvme_iov_md": false 00:21:29.050 }, 00:21:29.050 "memory_domains": [ 00:21:29.050 { 00:21:29.050 "dma_device_id": "system", 00:21:29.050 "dma_device_type": 1 00:21:29.050 }, 00:21:29.050 { 00:21:29.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.050 "dma_device_type": 2 00:21:29.050 } 00:21:29.050 ], 00:21:29.050 "driver_specific": {} 00:21:29.050 } 00:21:29.050 ] 00:21:29.050 02:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:29.050 02:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:21:29.050 02:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:29.050 02:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:29.050 02:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:29.050 02:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:29.050 02:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:29.050 02:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:29.050 02:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:29.050 02:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:29.050 02:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:29.050 02:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.050 02:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:29.310 02:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:29.310 "name": "Existed_Raid", 00:21:29.310 "uuid": "3fc227cd-b95d-4800-bcf1-5ca51039d965", 00:21:29.310 "strip_size_kb": 64, 00:21:29.310 "state": "online", 00:21:29.310 "raid_level": "raid0", 00:21:29.310 "superblock": false, 00:21:29.310 "num_base_bdevs": 4, 00:21:29.310 "num_base_bdevs_discovered": 4, 00:21:29.310 "num_base_bdevs_operational": 4, 00:21:29.310 "base_bdevs_list": [ 00:21:29.310 { 00:21:29.310 "name": "NewBaseBdev", 00:21:29.310 "uuid": "2785f37e-b9ce-463b-93f9-6542f773bacd", 00:21:29.310 "is_configured": true, 00:21:29.310 "data_offset": 0, 00:21:29.310 "data_size": 65536 00:21:29.310 }, 00:21:29.310 { 00:21:29.310 "name": "BaseBdev2", 00:21:29.310 "uuid": "1067a30e-a860-4d70-a9ec-bec1f9be040b", 00:21:29.310 "is_configured": true, 00:21:29.310 "data_offset": 0, 00:21:29.310 "data_size": 65536 00:21:29.310 }, 00:21:29.310 { 00:21:29.310 "name": "BaseBdev3", 00:21:29.310 "uuid": "1173657f-e59a-4548-a0a0-48680cac76ba", 00:21:29.310 "is_configured": true, 00:21:29.310 "data_offset": 0, 00:21:29.310 "data_size": 65536 00:21:29.310 }, 00:21:29.310 { 00:21:29.310 "name": "BaseBdev4", 00:21:29.310 "uuid": 
"823b392e-7715-4e41-8fa3-7a2144897936", 00:21:29.310 "is_configured": true, 00:21:29.310 "data_offset": 0, 00:21:29.310 "data_size": 65536 00:21:29.310 } 00:21:29.310 ] 00:21:29.310 }' 00:21:29.310 02:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:29.310 02:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.887 02:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:21:29.887 02:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:29.887 02:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:29.887 02:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:29.887 02:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:29.887 02:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:29.887 02:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:29.888 02:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:30.147 [2024-07-11 02:27:20.412478] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:30.147 02:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:30.147 "name": "Existed_Raid", 00:21:30.147 "aliases": [ 00:21:30.147 "3fc227cd-b95d-4800-bcf1-5ca51039d965" 00:21:30.147 ], 00:21:30.147 "product_name": "Raid Volume", 00:21:30.147 "block_size": 512, 00:21:30.147 "num_blocks": 262144, 00:21:30.147 "uuid": "3fc227cd-b95d-4800-bcf1-5ca51039d965", 00:21:30.147 "assigned_rate_limits": { 00:21:30.147 "rw_ios_per_sec": 0, 00:21:30.147 "rw_mbytes_per_sec": 0, 00:21:30.147 "r_mbytes_per_sec": 0, 00:21:30.147 "w_mbytes_per_sec": 0 00:21:30.147 }, 00:21:30.147 "claimed": false, 00:21:30.147 "zoned": false, 00:21:30.147 "supported_io_types": { 00:21:30.147 "read": true, 00:21:30.147 "write": true, 00:21:30.147 "unmap": true, 00:21:30.147 "flush": true, 00:21:30.147 "reset": true, 00:21:30.147 "nvme_admin": false, 00:21:30.147 "nvme_io": false, 00:21:30.147 "nvme_io_md": false, 00:21:30.147 "write_zeroes": true, 00:21:30.147 "zcopy": false, 00:21:30.147 "get_zone_info": false, 00:21:30.147 "zone_management": false, 00:21:30.147 "zone_append": false, 00:21:30.147 "compare": false, 00:21:30.147 "compare_and_write": false, 00:21:30.147 "abort": false, 00:21:30.147 "seek_hole": false, 00:21:30.147 "seek_data": false, 00:21:30.147 "copy": false, 00:21:30.147 "nvme_iov_md": false 00:21:30.147 }, 00:21:30.147 "memory_domains": [ 00:21:30.147 { 00:21:30.147 "dma_device_id": "system", 00:21:30.147 "dma_device_type": 1 00:21:30.147 }, 00:21:30.147 { 00:21:30.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.147 "dma_device_type": 2 00:21:30.147 }, 00:21:30.147 { 00:21:30.147 "dma_device_id": "system", 00:21:30.147 "dma_device_type": 1 00:21:30.147 }, 00:21:30.147 { 00:21:30.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.147 "dma_device_type": 2 00:21:30.147 }, 00:21:30.147 { 00:21:30.147 "dma_device_id": "system", 00:21:30.147 "dma_device_type": 1 00:21:30.147 }, 00:21:30.147 { 00:21:30.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.147 "dma_device_type": 2 00:21:30.147 }, 
00:21:30.147 { 00:21:30.147 "dma_device_id": "system", 00:21:30.147 "dma_device_type": 1 00:21:30.147 }, 00:21:30.147 { 00:21:30.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.147 "dma_device_type": 2 00:21:30.147 } 00:21:30.147 ], 00:21:30.147 "driver_specific": { 00:21:30.147 "raid": { 00:21:30.147 "uuid": "3fc227cd-b95d-4800-bcf1-5ca51039d965", 00:21:30.147 "strip_size_kb": 64, 00:21:30.147 "state": "online", 00:21:30.147 "raid_level": "raid0", 00:21:30.147 "superblock": false, 00:21:30.147 "num_base_bdevs": 4, 00:21:30.147 "num_base_bdevs_discovered": 4, 00:21:30.147 "num_base_bdevs_operational": 4, 00:21:30.147 "base_bdevs_list": [ 00:21:30.147 { 00:21:30.147 "name": "NewBaseBdev", 00:21:30.147 "uuid": "2785f37e-b9ce-463b-93f9-6542f773bacd", 00:21:30.147 "is_configured": true, 00:21:30.147 "data_offset": 0, 00:21:30.148 "data_size": 65536 00:21:30.148 }, 00:21:30.148 { 00:21:30.148 "name": "BaseBdev2", 00:21:30.148 "uuid": "1067a30e-a860-4d70-a9ec-bec1f9be040b", 00:21:30.148 "is_configured": true, 00:21:30.148 "data_offset": 0, 00:21:30.148 "data_size": 65536 00:21:30.148 }, 00:21:30.148 { 00:21:30.148 "name": "BaseBdev3", 00:21:30.148 "uuid": "1173657f-e59a-4548-a0a0-48680cac76ba", 00:21:30.148 "is_configured": true, 00:21:30.148 "data_offset": 0, 00:21:30.148 "data_size": 65536 00:21:30.148 }, 00:21:30.148 { 00:21:30.148 "name": "BaseBdev4", 00:21:30.148 "uuid": "823b392e-7715-4e41-8fa3-7a2144897936", 00:21:30.148 "is_configured": true, 00:21:30.148 "data_offset": 0, 00:21:30.148 "data_size": 65536 00:21:30.148 } 00:21:30.148 ] 00:21:30.148 } 00:21:30.148 } 00:21:30.148 }' 00:21:30.148 02:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:30.148 02:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:21:30.148 BaseBdev2 00:21:30.148 BaseBdev3 00:21:30.148 BaseBdev4' 00:21:30.148 02:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:30.148 02:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:30.148 02:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:30.407 02:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:30.407 "name": "NewBaseBdev", 00:21:30.407 "aliases": [ 00:21:30.407 "2785f37e-b9ce-463b-93f9-6542f773bacd" 00:21:30.407 ], 00:21:30.407 "product_name": "Malloc disk", 00:21:30.407 "block_size": 512, 00:21:30.407 "num_blocks": 65536, 00:21:30.407 "uuid": "2785f37e-b9ce-463b-93f9-6542f773bacd", 00:21:30.407 "assigned_rate_limits": { 00:21:30.407 "rw_ios_per_sec": 0, 00:21:30.407 "rw_mbytes_per_sec": 0, 00:21:30.407 "r_mbytes_per_sec": 0, 00:21:30.407 "w_mbytes_per_sec": 0 00:21:30.407 }, 00:21:30.407 "claimed": true, 00:21:30.407 "claim_type": "exclusive_write", 00:21:30.407 "zoned": false, 00:21:30.407 "supported_io_types": { 00:21:30.407 "read": true, 00:21:30.407 "write": true, 00:21:30.407 "unmap": true, 00:21:30.407 "flush": true, 00:21:30.407 "reset": true, 00:21:30.407 "nvme_admin": false, 00:21:30.407 "nvme_io": false, 00:21:30.407 "nvme_io_md": false, 00:21:30.407 "write_zeroes": true, 00:21:30.407 "zcopy": true, 00:21:30.407 "get_zone_info": false, 00:21:30.407 "zone_management": false, 00:21:30.407 "zone_append": false, 
00:21:30.407 "compare": false, 00:21:30.407 "compare_and_write": false, 00:21:30.407 "abort": true, 00:21:30.407 "seek_hole": false, 00:21:30.407 "seek_data": false, 00:21:30.407 "copy": true, 00:21:30.407 "nvme_iov_md": false 00:21:30.407 }, 00:21:30.407 "memory_domains": [ 00:21:30.407 { 00:21:30.407 "dma_device_id": "system", 00:21:30.407 "dma_device_type": 1 00:21:30.407 }, 00:21:30.407 { 00:21:30.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.407 "dma_device_type": 2 00:21:30.407 } 00:21:30.407 ], 00:21:30.407 "driver_specific": {} 00:21:30.407 }' 00:21:30.407 02:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:30.407 02:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:30.407 02:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:30.407 02:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:30.666 02:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:30.666 02:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:30.666 02:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:30.666 02:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:30.666 02:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:30.666 02:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:30.666 02:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:30.666 02:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:30.666 02:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:30.666 02:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:30.666 02:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:30.926 02:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:30.926 "name": "BaseBdev2", 00:21:30.926 "aliases": [ 00:21:30.926 "1067a30e-a860-4d70-a9ec-bec1f9be040b" 00:21:30.926 ], 00:21:30.926 "product_name": "Malloc disk", 00:21:30.926 "block_size": 512, 00:21:30.926 "num_blocks": 65536, 00:21:30.926 "uuid": "1067a30e-a860-4d70-a9ec-bec1f9be040b", 00:21:30.926 "assigned_rate_limits": { 00:21:30.926 "rw_ios_per_sec": 0, 00:21:30.926 "rw_mbytes_per_sec": 0, 00:21:30.926 "r_mbytes_per_sec": 0, 00:21:30.926 "w_mbytes_per_sec": 0 00:21:30.926 }, 00:21:30.926 "claimed": true, 00:21:30.926 "claim_type": "exclusive_write", 00:21:30.926 "zoned": false, 00:21:30.926 "supported_io_types": { 00:21:30.926 "read": true, 00:21:30.926 "write": true, 00:21:30.926 "unmap": true, 00:21:30.926 "flush": true, 00:21:30.926 "reset": true, 00:21:30.926 "nvme_admin": false, 00:21:30.926 "nvme_io": false, 00:21:30.926 "nvme_io_md": false, 00:21:30.926 "write_zeroes": true, 00:21:30.926 "zcopy": true, 00:21:30.926 "get_zone_info": false, 00:21:30.926 "zone_management": false, 00:21:30.926 "zone_append": false, 00:21:30.926 "compare": false, 00:21:30.926 "compare_and_write": false, 00:21:30.926 "abort": true, 00:21:30.926 "seek_hole": false, 00:21:30.926 "seek_data": false, 00:21:30.926 
"copy": true, 00:21:30.926 "nvme_iov_md": false 00:21:30.926 }, 00:21:30.926 "memory_domains": [ 00:21:30.926 { 00:21:30.926 "dma_device_id": "system", 00:21:30.926 "dma_device_type": 1 00:21:30.926 }, 00:21:30.926 { 00:21:30.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.926 "dma_device_type": 2 00:21:30.926 } 00:21:30.926 ], 00:21:30.926 "driver_specific": {} 00:21:30.926 }' 00:21:30.926 02:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:31.185 02:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:31.185 02:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:31.185 02:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:31.185 02:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:31.185 02:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:31.185 02:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:31.185 02:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:31.185 02:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:31.185 02:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:31.444 02:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:31.444 02:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:31.444 02:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:31.444 02:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:31.444 02:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:31.704 02:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:31.704 "name": "BaseBdev3", 00:21:31.704 "aliases": [ 00:21:31.704 "1173657f-e59a-4548-a0a0-48680cac76ba" 00:21:31.704 ], 00:21:31.704 "product_name": "Malloc disk", 00:21:31.704 "block_size": 512, 00:21:31.704 "num_blocks": 65536, 00:21:31.704 "uuid": "1173657f-e59a-4548-a0a0-48680cac76ba", 00:21:31.704 "assigned_rate_limits": { 00:21:31.704 "rw_ios_per_sec": 0, 00:21:31.704 "rw_mbytes_per_sec": 0, 00:21:31.704 "r_mbytes_per_sec": 0, 00:21:31.704 "w_mbytes_per_sec": 0 00:21:31.704 }, 00:21:31.704 "claimed": true, 00:21:31.704 "claim_type": "exclusive_write", 00:21:31.704 "zoned": false, 00:21:31.704 "supported_io_types": { 00:21:31.704 "read": true, 00:21:31.704 "write": true, 00:21:31.704 "unmap": true, 00:21:31.704 "flush": true, 00:21:31.704 "reset": true, 00:21:31.704 "nvme_admin": false, 00:21:31.704 "nvme_io": false, 00:21:31.704 "nvme_io_md": false, 00:21:31.704 "write_zeroes": true, 00:21:31.704 "zcopy": true, 00:21:31.704 "get_zone_info": false, 00:21:31.704 "zone_management": false, 00:21:31.704 "zone_append": false, 00:21:31.704 "compare": false, 00:21:31.704 "compare_and_write": false, 00:21:31.704 "abort": true, 00:21:31.704 "seek_hole": false, 00:21:31.704 "seek_data": false, 00:21:31.704 "copy": true, 00:21:31.704 "nvme_iov_md": false 00:21:31.704 }, 00:21:31.704 "memory_domains": [ 00:21:31.704 { 00:21:31.704 "dma_device_id": "system", 00:21:31.704 
"dma_device_type": 1 00:21:31.704 }, 00:21:31.704 { 00:21:31.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.704 "dma_device_type": 2 00:21:31.704 } 00:21:31.704 ], 00:21:31.704 "driver_specific": {} 00:21:31.704 }' 00:21:31.704 02:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:31.704 02:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:31.704 02:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:31.704 02:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:31.704 02:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:31.704 02:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:31.704 02:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:31.964 02:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:31.964 02:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:31.964 02:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:31.964 02:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:31.964 02:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:31.964 02:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:31.964 02:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:31.964 02:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:32.223 02:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:32.223 "name": "BaseBdev4", 00:21:32.223 "aliases": [ 00:21:32.223 "823b392e-7715-4e41-8fa3-7a2144897936" 00:21:32.223 ], 00:21:32.223 "product_name": "Malloc disk", 00:21:32.223 "block_size": 512, 00:21:32.223 "num_blocks": 65536, 00:21:32.223 "uuid": "823b392e-7715-4e41-8fa3-7a2144897936", 00:21:32.223 "assigned_rate_limits": { 00:21:32.223 "rw_ios_per_sec": 0, 00:21:32.223 "rw_mbytes_per_sec": 0, 00:21:32.223 "r_mbytes_per_sec": 0, 00:21:32.223 "w_mbytes_per_sec": 0 00:21:32.223 }, 00:21:32.223 "claimed": true, 00:21:32.223 "claim_type": "exclusive_write", 00:21:32.223 "zoned": false, 00:21:32.223 "supported_io_types": { 00:21:32.223 "read": true, 00:21:32.223 "write": true, 00:21:32.223 "unmap": true, 00:21:32.223 "flush": true, 00:21:32.223 "reset": true, 00:21:32.223 "nvme_admin": false, 00:21:32.223 "nvme_io": false, 00:21:32.223 "nvme_io_md": false, 00:21:32.223 "write_zeroes": true, 00:21:32.223 "zcopy": true, 00:21:32.223 "get_zone_info": false, 00:21:32.223 "zone_management": false, 00:21:32.223 "zone_append": false, 00:21:32.223 "compare": false, 00:21:32.223 "compare_and_write": false, 00:21:32.223 "abort": true, 00:21:32.223 "seek_hole": false, 00:21:32.223 "seek_data": false, 00:21:32.223 "copy": true, 00:21:32.223 "nvme_iov_md": false 00:21:32.223 }, 00:21:32.223 "memory_domains": [ 00:21:32.223 { 00:21:32.223 "dma_device_id": "system", 00:21:32.223 "dma_device_type": 1 00:21:32.223 }, 00:21:32.223 { 00:21:32.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.223 "dma_device_type": 2 00:21:32.223 } 00:21:32.223 ], 
00:21:32.223 "driver_specific": {} 00:21:32.223 }' 00:21:32.223 02:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:32.223 02:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:32.223 02:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:32.223 02:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:32.482 02:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:32.482 02:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:32.482 02:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:32.482 02:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:32.482 02:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:32.482 02:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:32.482 02:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:32.482 02:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:32.482 02:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:32.742 [2024-07-11 02:27:23.111344] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:32.742 [2024-07-11 02:27:23.111372] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:32.742 [2024-07-11 02:27:23.111425] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:32.742 [2024-07-11 02:27:23.111484] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:32.742 [2024-07-11 02:27:23.111501] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xcaf880 name Existed_Raid, state offline 00:21:32.742 02:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1954810 00:21:32.742 02:27:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 1954810 ']' 00:21:32.742 02:27:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 1954810 00:21:32.742 02:27:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:21:32.742 02:27:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:32.742 02:27:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1954810 00:21:33.001 02:27:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:33.001 02:27:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:33.001 02:27:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1954810' 00:21:33.001 killing process with pid 1954810 00:21:33.001 02:27:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 1954810 00:21:33.001 [2024-07-11 02:27:23.179270] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:33.001 02:27:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@972 -- # wait 1954810 00:21:33.001 [2024-07-11 02:27:23.221958] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:33.261 02:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:21:33.261 00:21:33.261 real 0m38.253s 00:21:33.261 user 1m10.330s 00:21:33.261 sys 0m6.734s 00:21:33.261 02:27:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:33.261 02:27:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.261 ************************************ 00:21:33.261 END TEST raid_state_function_test 00:21:33.261 ************************************ 00:21:33.261 02:27:23 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:33.261 02:27:23 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:21:33.261 02:27:23 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:33.261 02:27:23 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:33.261 02:27:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:33.261 ************************************ 00:21:33.261 START TEST raid_state_function_test_sb 00:21:33.261 ************************************ 00:21:33.261 02:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 true 00:21:33.261 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:21:33.261 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:21:33.261 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 
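The xtrace above closes out raid_state_function_test (delete the raid bdev, killprocess the bdev_svc pid, print the real/user/sys timing) and run_test immediately relaunches the same scenario as raid_state_function_test_sb, i.e. with superblocks enabled. The "(( i = 1 )) ... echo BaseBdev4" entries are the helper building its base-bdev name list; a minimal sketch of that loop follows, with names and count taken from this trace rather than the exact script source:

    # Build the list of base bdev names the test will create and claim
    num_base_bdevs=4
    base_bdevs=()
    for (( i = 1; i <= num_base_bdevs; i++ )); do
        base_bdevs+=("BaseBdev$i")
    done
    # base_bdevs is now: BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4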
00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1960919 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1960919' 00:21:33.262 Process raid pid: 1960919 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1960919 /var/tmp/spdk-raid.sock 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1960919 ']' 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:33.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:33.262 02:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.262 [2024-07-11 02:27:23.596047] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:21:33.262 [2024-07-11 02:27:23.596119] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.520 [2024-07-11 02:27:23.735154] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.520 [2024-07-11 02:27:23.785077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.520 [2024-07-11 02:27:23.843743] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:33.520 [2024-07-11 02:27:23.843780] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:34.456 02:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:34.456 02:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:21:34.456 02:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:34.715 [2024-07-11 02:27:25.013915] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:34.715 [2024-07-11 02:27:25.013962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:34.715 [2024-07-11 02:27:25.013973] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:34.715 [2024-07-11 02:27:25.013985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:34.715 [2024-07-11 02:27:25.013994] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:34.715 [2024-07-11 02:27:25.014005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:34.715 [2024-07-11 02:27:25.014014] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:34.715 [2024-07-11 02:27:25.014026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:34.715 02:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:34.715 02:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:34.715 02:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:34.715 02:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:34.715 02:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:34.715 02:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:34.715 02:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:34.715 02:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:34.715 02:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:34.715 02:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:34.715 02:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
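At this point the freshly started bdev_svc app is listening on /var/tmp/spdk-raid.sock, and every subsequent step in the log drives it through rpc.py. The next traced call is the raid creation itself; reproduced below as a standalone command, with all paths, flags and names exactly as they appear in this log (-z 64 is the strip size in KB, -s requests a superblock); RPC is a shorthand variable introduced here for readability, not part of the script:

    # The bdev_raid_create call traced below, as a standalone command
    RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_raid_create -z 64 -s -r raid0 \
         -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid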
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.715 02:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:34.975 02:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:34.975 "name": "Existed_Raid", 00:21:34.975 "uuid": "d2889dd1-b42a-4c32-9a7b-c360b73b1faf", 00:21:34.975 "strip_size_kb": 64, 00:21:34.975 "state": "configuring", 00:21:34.975 "raid_level": "raid0", 00:21:34.975 "superblock": true, 00:21:34.975 "num_base_bdevs": 4, 00:21:34.975 "num_base_bdevs_discovered": 0, 00:21:34.975 "num_base_bdevs_operational": 4, 00:21:34.975 "base_bdevs_list": [ 00:21:34.975 { 00:21:34.975 "name": "BaseBdev1", 00:21:34.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.975 "is_configured": false, 00:21:34.975 "data_offset": 0, 00:21:34.975 "data_size": 0 00:21:34.975 }, 00:21:34.975 { 00:21:34.975 "name": "BaseBdev2", 00:21:34.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.975 "is_configured": false, 00:21:34.975 "data_offset": 0, 00:21:34.975 "data_size": 0 00:21:34.975 }, 00:21:34.975 { 00:21:34.975 "name": "BaseBdev3", 00:21:34.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.975 "is_configured": false, 00:21:34.975 "data_offset": 0, 00:21:34.975 "data_size": 0 00:21:34.975 }, 00:21:34.975 { 00:21:34.975 "name": "BaseBdev4", 00:21:34.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.975 "is_configured": false, 00:21:34.975 "data_offset": 0, 00:21:34.975 "data_size": 0 00:21:34.975 } 00:21:34.975 ] 00:21:34.975 }' 00:21:34.975 02:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:34.975 02:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.541 02:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:35.799 [2024-07-11 02:27:26.104629] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:35.799 [2024-07-11 02:27:26.104660] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15425a0 name Existed_Raid, state configuring 00:21:35.799 02:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:36.058 [2024-07-11 02:27:26.357330] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:36.058 [2024-07-11 02:27:26.357359] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:36.058 [2024-07-11 02:27:26.357369] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:36.058 [2024-07-11 02:27:26.357381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:36.058 [2024-07-11 02:27:26.357390] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:36.058 [2024-07-11 02:27:26.357401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:36.058 [2024-07-11 02:27:26.357410] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:21:36.058 [2024-07-11 02:27:26.357421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:36.058 02:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:36.318 [2024-07-11 02:27:26.615653] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:36.318 BaseBdev1 00:21:36.318 02:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:36.318 02:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:36.318 02:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:36.318 02:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:36.318 02:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:36.318 02:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:36.318 02:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:36.609 02:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:36.907 [ 00:21:36.907 { 00:21:36.907 "name": "BaseBdev1", 00:21:36.907 "aliases": [ 00:21:36.907 "3e053c46-5fca-4d00-a78d-742795dc9d6c" 00:21:36.907 ], 00:21:36.907 "product_name": "Malloc disk", 00:21:36.907 "block_size": 512, 00:21:36.907 "num_blocks": 65536, 00:21:36.907 "uuid": "3e053c46-5fca-4d00-a78d-742795dc9d6c", 00:21:36.907 "assigned_rate_limits": { 00:21:36.907 "rw_ios_per_sec": 0, 00:21:36.907 "rw_mbytes_per_sec": 0, 00:21:36.907 "r_mbytes_per_sec": 0, 00:21:36.907 "w_mbytes_per_sec": 0 00:21:36.907 }, 00:21:36.907 "claimed": true, 00:21:36.907 "claim_type": "exclusive_write", 00:21:36.907 "zoned": false, 00:21:36.907 "supported_io_types": { 00:21:36.907 "read": true, 00:21:36.907 "write": true, 00:21:36.907 "unmap": true, 00:21:36.907 "flush": true, 00:21:36.907 "reset": true, 00:21:36.907 "nvme_admin": false, 00:21:36.907 "nvme_io": false, 00:21:36.907 "nvme_io_md": false, 00:21:36.907 "write_zeroes": true, 00:21:36.907 "zcopy": true, 00:21:36.907 "get_zone_info": false, 00:21:36.907 "zone_management": false, 00:21:36.907 "zone_append": false, 00:21:36.907 "compare": false, 00:21:36.907 "compare_and_write": false, 00:21:36.907 "abort": true, 00:21:36.907 "seek_hole": false, 00:21:36.907 "seek_data": false, 00:21:36.907 "copy": true, 00:21:36.907 "nvme_iov_md": false 00:21:36.907 }, 00:21:36.907 "memory_domains": [ 00:21:36.907 { 00:21:36.907 "dma_device_id": "system", 00:21:36.907 "dma_device_type": 1 00:21:36.907 }, 00:21:36.907 { 00:21:36.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:36.907 "dma_device_type": 2 00:21:36.907 } 00:21:36.907 ], 00:21:36.907 "driver_specific": {} 00:21:36.907 } 00:21:36.907 ] 00:21:36.907 02:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:36.907 02:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:36.907 
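The trace above is the first pass of the base-bdev provisioning loop: bdev_malloc_create 32 512 -b BaseBdev1 creates a 32 MiB malloc disk with 512-byte blocks (hence "num_blocks": 65536 in the dump), and waitforbdev then blocks until the new bdev is visible. Condensed to the three RPCs actually traced, with $RPC as defined in the previous note:

    $RPC bdev_malloc_create 32 512 -b BaseBdev1   # 32 MiB, 512 B blocks -> 65536 blocks
    $RPC bdev_wait_for_examine                    # let registered bdev modules examine/claim it
    $RPC bdev_get_bdevs -b BaseBdev1 -t 2000      # fail if the bdev is not visible within 2000 ms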
02:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:36.907 02:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:36.907 02:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:36.907 02:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:36.907 02:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:36.907 02:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:36.907 02:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:36.907 02:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:36.907 02:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:36.907 02:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.907 02:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.181 02:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:37.181 "name": "Existed_Raid", 00:21:37.181 "uuid": "a8ab887a-1bca-4082-a8ca-e5ab0f64abee", 00:21:37.181 "strip_size_kb": 64, 00:21:37.181 "state": "configuring", 00:21:37.181 "raid_level": "raid0", 00:21:37.181 "superblock": true, 00:21:37.181 "num_base_bdevs": 4, 00:21:37.181 "num_base_bdevs_discovered": 1, 00:21:37.181 "num_base_bdevs_operational": 4, 00:21:37.181 "base_bdevs_list": [ 00:21:37.181 { 00:21:37.181 "name": "BaseBdev1", 00:21:37.181 "uuid": "3e053c46-5fca-4d00-a78d-742795dc9d6c", 00:21:37.181 "is_configured": true, 00:21:37.181 "data_offset": 2048, 00:21:37.181 "data_size": 63488 00:21:37.181 }, 00:21:37.181 { 00:21:37.181 "name": "BaseBdev2", 00:21:37.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.181 "is_configured": false, 00:21:37.181 "data_offset": 0, 00:21:37.181 "data_size": 0 00:21:37.181 }, 00:21:37.181 { 00:21:37.181 "name": "BaseBdev3", 00:21:37.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.181 "is_configured": false, 00:21:37.181 "data_offset": 0, 00:21:37.181 "data_size": 0 00:21:37.181 }, 00:21:37.181 { 00:21:37.181 "name": "BaseBdev4", 00:21:37.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.181 "is_configured": false, 00:21:37.181 "data_offset": 0, 00:21:37.181 "data_size": 0 00:21:37.181 } 00:21:37.181 ] 00:21:37.181 }' 00:21:37.181 02:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:37.181 02:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:37.750 02:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:37.750 [2024-07-11 02:27:28.151706] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:37.750 [2024-07-11 02:27:28.151746] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1541ed0 name Existed_Raid, state configuring 00:21:38.010 02:27:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:38.010 [2024-07-11 02:27:28.332234] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:38.010 [2024-07-11 02:27:28.333632] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:38.010 [2024-07-11 02:27:28.333661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:38.010 [2024-07-11 02:27:28.333672] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:38.010 [2024-07-11 02:27:28.333684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:38.010 [2024-07-11 02:27:28.333693] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:38.010 [2024-07-11 02:27:28.333704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:38.010 02:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:38.010 02:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:38.010 02:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:38.010 02:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:38.010 02:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:38.010 02:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:38.010 02:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:38.010 02:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:38.011 02:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:38.011 02:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:38.011 02:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:38.011 02:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:38.011 02:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.011 02:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.269 02:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:38.269 "name": "Existed_Raid", 00:21:38.269 "uuid": "b3eaf612-57d5-4697-9980-4b09ba2fd7c2", 00:21:38.269 "strip_size_kb": 64, 00:21:38.269 "state": "configuring", 00:21:38.269 "raid_level": "raid0", 00:21:38.269 "superblock": true, 00:21:38.269 "num_base_bdevs": 4, 00:21:38.269 "num_base_bdevs_discovered": 1, 00:21:38.269 "num_base_bdevs_operational": 4, 00:21:38.269 "base_bdevs_list": [ 00:21:38.269 { 00:21:38.269 "name": "BaseBdev1", 00:21:38.269 "uuid": "3e053c46-5fca-4d00-a78d-742795dc9d6c", 00:21:38.269 "is_configured": true, 00:21:38.269 "data_offset": 2048, 
00:21:38.269 "data_size": 63488 00:21:38.269 }, 00:21:38.269 { 00:21:38.269 "name": "BaseBdev2", 00:21:38.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.269 "is_configured": false, 00:21:38.269 "data_offset": 0, 00:21:38.269 "data_size": 0 00:21:38.269 }, 00:21:38.269 { 00:21:38.269 "name": "BaseBdev3", 00:21:38.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.269 "is_configured": false, 00:21:38.269 "data_offset": 0, 00:21:38.269 "data_size": 0 00:21:38.269 }, 00:21:38.269 { 00:21:38.269 "name": "BaseBdev4", 00:21:38.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.269 "is_configured": false, 00:21:38.269 "data_offset": 0, 00:21:38.269 "data_size": 0 00:21:38.269 } 00:21:38.269 ] 00:21:38.269 }' 00:21:38.269 02:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:38.269 02:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.837 02:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:39.094 [2024-07-11 02:27:29.399237] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:39.094 BaseBdev2 00:21:39.094 02:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:39.094 02:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:39.094 02:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:39.094 02:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:39.094 02:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:39.094 02:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:39.094 02:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:39.352 02:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:39.611 [ 00:21:39.611 { 00:21:39.611 "name": "BaseBdev2", 00:21:39.611 "aliases": [ 00:21:39.611 "35ca430c-98eb-4f80-88be-a6d079aa6e06" 00:21:39.611 ], 00:21:39.611 "product_name": "Malloc disk", 00:21:39.611 "block_size": 512, 00:21:39.611 "num_blocks": 65536, 00:21:39.611 "uuid": "35ca430c-98eb-4f80-88be-a6d079aa6e06", 00:21:39.611 "assigned_rate_limits": { 00:21:39.611 "rw_ios_per_sec": 0, 00:21:39.611 "rw_mbytes_per_sec": 0, 00:21:39.611 "r_mbytes_per_sec": 0, 00:21:39.611 "w_mbytes_per_sec": 0 00:21:39.611 }, 00:21:39.611 "claimed": true, 00:21:39.611 "claim_type": "exclusive_write", 00:21:39.611 "zoned": false, 00:21:39.611 "supported_io_types": { 00:21:39.611 "read": true, 00:21:39.611 "write": true, 00:21:39.611 "unmap": true, 00:21:39.611 "flush": true, 00:21:39.611 "reset": true, 00:21:39.611 "nvme_admin": false, 00:21:39.611 "nvme_io": false, 00:21:39.611 "nvme_io_md": false, 00:21:39.611 "write_zeroes": true, 00:21:39.611 "zcopy": true, 00:21:39.611 "get_zone_info": false, 00:21:39.611 "zone_management": false, 00:21:39.611 "zone_append": false, 00:21:39.612 "compare": false, 
00:21:39.612 "compare_and_write": false, 00:21:39.612 "abort": true, 00:21:39.612 "seek_hole": false, 00:21:39.612 "seek_data": false, 00:21:39.612 "copy": true, 00:21:39.612 "nvme_iov_md": false 00:21:39.612 }, 00:21:39.612 "memory_domains": [ 00:21:39.612 { 00:21:39.612 "dma_device_id": "system", 00:21:39.612 "dma_device_type": 1 00:21:39.612 }, 00:21:39.612 { 00:21:39.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.612 "dma_device_type": 2 00:21:39.612 } 00:21:39.612 ], 00:21:39.612 "driver_specific": {} 00:21:39.612 } 00:21:39.612 ] 00:21:39.612 02:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:39.612 02:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:39.612 02:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:39.612 02:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:39.612 02:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:39.612 02:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:39.612 02:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:39.612 02:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:39.612 02:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:39.612 02:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:39.612 02:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:39.612 02:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:39.612 02:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:39.612 02:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.612 02:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:39.870 02:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:39.870 "name": "Existed_Raid", 00:21:39.871 "uuid": "b3eaf612-57d5-4697-9980-4b09ba2fd7c2", 00:21:39.871 "strip_size_kb": 64, 00:21:39.871 "state": "configuring", 00:21:39.871 "raid_level": "raid0", 00:21:39.871 "superblock": true, 00:21:39.871 "num_base_bdevs": 4, 00:21:39.871 "num_base_bdevs_discovered": 2, 00:21:39.871 "num_base_bdevs_operational": 4, 00:21:39.871 "base_bdevs_list": [ 00:21:39.871 { 00:21:39.871 "name": "BaseBdev1", 00:21:39.871 "uuid": "3e053c46-5fca-4d00-a78d-742795dc9d6c", 00:21:39.871 "is_configured": true, 00:21:39.871 "data_offset": 2048, 00:21:39.871 "data_size": 63488 00:21:39.871 }, 00:21:39.871 { 00:21:39.871 "name": "BaseBdev2", 00:21:39.871 "uuid": "35ca430c-98eb-4f80-88be-a6d079aa6e06", 00:21:39.871 "is_configured": true, 00:21:39.871 "data_offset": 2048, 00:21:39.871 "data_size": 63488 00:21:39.871 }, 00:21:39.871 { 00:21:39.871 "name": "BaseBdev3", 00:21:39.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.871 "is_configured": false, 00:21:39.871 "data_offset": 0, 00:21:39.871 
"data_size": 0 00:21:39.871 }, 00:21:39.871 { 00:21:39.871 "name": "BaseBdev4", 00:21:39.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.871 "is_configured": false, 00:21:39.871 "data_offset": 0, 00:21:39.871 "data_size": 0 00:21:39.871 } 00:21:39.871 ] 00:21:39.871 }' 00:21:39.871 02:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:39.871 02:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.436 02:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:40.694 [2024-07-11 02:27:30.982945] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:40.694 BaseBdev3 00:21:40.694 02:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:40.694 02:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:40.694 02:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:40.694 02:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:40.694 02:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:40.694 02:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:40.694 02:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:40.952 02:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:41.209 [ 00:21:41.209 { 00:21:41.209 "name": "BaseBdev3", 00:21:41.209 "aliases": [ 00:21:41.209 "aaddd156-1440-4b77-8cc8-516bdae0f9b3" 00:21:41.209 ], 00:21:41.209 "product_name": "Malloc disk", 00:21:41.209 "block_size": 512, 00:21:41.209 "num_blocks": 65536, 00:21:41.209 "uuid": "aaddd156-1440-4b77-8cc8-516bdae0f9b3", 00:21:41.209 "assigned_rate_limits": { 00:21:41.209 "rw_ios_per_sec": 0, 00:21:41.209 "rw_mbytes_per_sec": 0, 00:21:41.209 "r_mbytes_per_sec": 0, 00:21:41.209 "w_mbytes_per_sec": 0 00:21:41.209 }, 00:21:41.209 "claimed": true, 00:21:41.209 "claim_type": "exclusive_write", 00:21:41.209 "zoned": false, 00:21:41.209 "supported_io_types": { 00:21:41.209 "read": true, 00:21:41.209 "write": true, 00:21:41.209 "unmap": true, 00:21:41.209 "flush": true, 00:21:41.209 "reset": true, 00:21:41.209 "nvme_admin": false, 00:21:41.209 "nvme_io": false, 00:21:41.209 "nvme_io_md": false, 00:21:41.209 "write_zeroes": true, 00:21:41.209 "zcopy": true, 00:21:41.209 "get_zone_info": false, 00:21:41.209 "zone_management": false, 00:21:41.209 "zone_append": false, 00:21:41.209 "compare": false, 00:21:41.209 "compare_and_write": false, 00:21:41.209 "abort": true, 00:21:41.209 "seek_hole": false, 00:21:41.210 "seek_data": false, 00:21:41.210 "copy": true, 00:21:41.210 "nvme_iov_md": false 00:21:41.210 }, 00:21:41.210 "memory_domains": [ 00:21:41.210 { 00:21:41.210 "dma_device_id": "system", 00:21:41.210 "dma_device_type": 1 00:21:41.210 }, 00:21:41.210 { 00:21:41.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.210 "dma_device_type": 2 
00:21:41.210 } 00:21:41.210 ], 00:21:41.210 "driver_specific": {} 00:21:41.210 } 00:21:41.210 ] 00:21:41.210 02:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:41.210 02:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:41.210 02:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:41.210 02:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:41.210 02:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:41.210 02:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:41.210 02:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:41.210 02:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:41.210 02:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:41.210 02:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:41.210 02:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:41.210 02:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:41.210 02:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:41.210 02:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.210 02:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:41.468 02:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:41.468 "name": "Existed_Raid", 00:21:41.468 "uuid": "b3eaf612-57d5-4697-9980-4b09ba2fd7c2", 00:21:41.468 "strip_size_kb": 64, 00:21:41.468 "state": "configuring", 00:21:41.468 "raid_level": "raid0", 00:21:41.468 "superblock": true, 00:21:41.468 "num_base_bdevs": 4, 00:21:41.468 "num_base_bdevs_discovered": 3, 00:21:41.468 "num_base_bdevs_operational": 4, 00:21:41.468 "base_bdevs_list": [ 00:21:41.468 { 00:21:41.468 "name": "BaseBdev1", 00:21:41.468 "uuid": "3e053c46-5fca-4d00-a78d-742795dc9d6c", 00:21:41.468 "is_configured": true, 00:21:41.468 "data_offset": 2048, 00:21:41.468 "data_size": 63488 00:21:41.468 }, 00:21:41.468 { 00:21:41.468 "name": "BaseBdev2", 00:21:41.468 "uuid": "35ca430c-98eb-4f80-88be-a6d079aa6e06", 00:21:41.468 "is_configured": true, 00:21:41.468 "data_offset": 2048, 00:21:41.468 "data_size": 63488 00:21:41.468 }, 00:21:41.468 { 00:21:41.468 "name": "BaseBdev3", 00:21:41.468 "uuid": "aaddd156-1440-4b77-8cc8-516bdae0f9b3", 00:21:41.468 "is_configured": true, 00:21:41.468 "data_offset": 2048, 00:21:41.468 "data_size": 63488 00:21:41.468 }, 00:21:41.468 { 00:21:41.468 "name": "BaseBdev4", 00:21:41.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.468 "is_configured": false, 00:21:41.468 "data_offset": 0, 00:21:41.468 "data_size": 0 00:21:41.468 } 00:21:41.468 ] 00:21:41.468 }' 00:21:41.468 02:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:41.468 02:27:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:42.033 02:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:42.292 [2024-07-11 02:27:32.574520] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:42.292 [2024-07-11 02:27:32.574684] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x16f4d70 00:21:42.292 [2024-07-11 02:27:32.574698] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:42.292 [2024-07-11 02:27:32.574879] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15471f0 00:21:42.292 [2024-07-11 02:27:32.575004] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x16f4d70 00:21:42.292 [2024-07-11 02:27:32.575015] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x16f4d70 00:21:42.292 [2024-07-11 02:27:32.575107] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:42.292 BaseBdev4 00:21:42.292 02:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:21:42.292 02:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:21:42.292 02:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:42.292 02:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:42.292 02:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:42.292 02:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:42.292 02:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:42.551 02:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:42.551 [ 00:21:42.551 { 00:21:42.551 "name": "BaseBdev4", 00:21:42.551 "aliases": [ 00:21:42.551 "a04d5c29-488f-4d46-9c4d-0bfe562ddbdc" 00:21:42.551 ], 00:21:42.551 "product_name": "Malloc disk", 00:21:42.551 "block_size": 512, 00:21:42.551 "num_blocks": 65536, 00:21:42.551 "uuid": "a04d5c29-488f-4d46-9c4d-0bfe562ddbdc", 00:21:42.551 "assigned_rate_limits": { 00:21:42.551 "rw_ios_per_sec": 0, 00:21:42.551 "rw_mbytes_per_sec": 0, 00:21:42.551 "r_mbytes_per_sec": 0, 00:21:42.551 "w_mbytes_per_sec": 0 00:21:42.551 }, 00:21:42.551 "claimed": true, 00:21:42.551 "claim_type": "exclusive_write", 00:21:42.551 "zoned": false, 00:21:42.551 "supported_io_types": { 00:21:42.551 "read": true, 00:21:42.551 "write": true, 00:21:42.551 "unmap": true, 00:21:42.551 "flush": true, 00:21:42.551 "reset": true, 00:21:42.551 "nvme_admin": false, 00:21:42.551 "nvme_io": false, 00:21:42.551 "nvme_io_md": false, 00:21:42.551 "write_zeroes": true, 00:21:42.551 "zcopy": true, 00:21:42.551 "get_zone_info": false, 00:21:42.551 "zone_management": false, 00:21:42.551 "zone_append": false, 00:21:42.551 "compare": false, 00:21:42.551 "compare_and_write": false, 00:21:42.551 "abort": true, 00:21:42.551 "seek_hole": false, 00:21:42.551 "seek_data": false, 00:21:42.551 "copy": 
true, 00:21:42.551 "nvme_iov_md": false 00:21:42.551 }, 00:21:42.551 "memory_domains": [ 00:21:42.551 { 00:21:42.551 "dma_device_id": "system", 00:21:42.551 "dma_device_type": 1 00:21:42.551 }, 00:21:42.551 { 00:21:42.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:42.551 "dma_device_type": 2 00:21:42.551 } 00:21:42.551 ], 00:21:42.551 "driver_specific": {} 00:21:42.551 } 00:21:42.551 ] 00:21:42.551 02:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:42.551 02:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:42.551 02:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:42.551 02:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:21:42.551 02:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:42.551 02:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:42.551 02:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:42.551 02:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:42.551 02:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:42.551 02:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:42.551 02:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:42.551 02:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:42.551 02:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:42.551 02:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.810 02:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:42.810 02:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:42.810 "name": "Existed_Raid", 00:21:42.810 "uuid": "b3eaf612-57d5-4697-9980-4b09ba2fd7c2", 00:21:42.810 "strip_size_kb": 64, 00:21:42.810 "state": "online", 00:21:42.810 "raid_level": "raid0", 00:21:42.810 "superblock": true, 00:21:42.810 "num_base_bdevs": 4, 00:21:42.810 "num_base_bdevs_discovered": 4, 00:21:42.810 "num_base_bdevs_operational": 4, 00:21:42.810 "base_bdevs_list": [ 00:21:42.810 { 00:21:42.810 "name": "BaseBdev1", 00:21:42.810 "uuid": "3e053c46-5fca-4d00-a78d-742795dc9d6c", 00:21:42.810 "is_configured": true, 00:21:42.810 "data_offset": 2048, 00:21:42.810 "data_size": 63488 00:21:42.810 }, 00:21:42.810 { 00:21:42.810 "name": "BaseBdev2", 00:21:42.810 "uuid": "35ca430c-98eb-4f80-88be-a6d079aa6e06", 00:21:42.810 "is_configured": true, 00:21:42.810 "data_offset": 2048, 00:21:42.810 "data_size": 63488 00:21:42.810 }, 00:21:42.810 { 00:21:42.810 "name": "BaseBdev3", 00:21:42.810 "uuid": "aaddd156-1440-4b77-8cc8-516bdae0f9b3", 00:21:42.810 "is_configured": true, 00:21:42.810 "data_offset": 2048, 00:21:42.810 "data_size": 63488 00:21:42.810 }, 00:21:42.810 { 00:21:42.810 "name": "BaseBdev4", 00:21:42.810 "uuid": "a04d5c29-488f-4d46-9c4d-0bfe562ddbdc", 00:21:42.810 
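With all four base bdevs claimed, the array has just transitioned to online (note the io-device registration and "blockcnt 253952" in the trace above, i.e. 4 x 63488 data blocks), and the log that follows verifies the online state and then re-runs verify_raid_bdev_properties: extract the configured base bdev names from the raid dump, then assert per-bdev properties. A condensed sketch of that loop, mirroring bdev_raid.sh@200-208 as traced and reusing the $RPC shorthand:

    # Per-base-bdev property checks, condensed from the traced loop
    names=$($RPC bdev_get_bdevs -b Existed_Raid | \
            jq -r '.[].driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
    for name in $names; do
        info=$($RPC bdev_get_bdevs -b "$name" | jq '.[]')
        [[ $(jq .block_size    <<< "$info") == 512  ]]   # @205
        [[ $(jq .md_size       <<< "$info") == null ]]   # @206
        [[ $(jq .md_interleave <<< "$info") == null ]]   # @207
        [[ $(jq .dif_type      <<< "$info") == null ]]   # @208
    done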
"is_configured": true, 00:21:42.810 "data_offset": 2048, 00:21:42.810 "data_size": 63488 00:21:42.810 } 00:21:42.810 ] 00:21:42.810 }' 00:21:42.810 02:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:42.810 02:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.746 02:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:43.746 02:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:43.746 02:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:43.746 02:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:43.746 02:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:43.746 02:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:21:43.746 02:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:43.746 02:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:43.746 [2024-07-11 02:27:33.966566] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:43.746 02:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:43.746 "name": "Existed_Raid", 00:21:43.746 "aliases": [ 00:21:43.746 "b3eaf612-57d5-4697-9980-4b09ba2fd7c2" 00:21:43.746 ], 00:21:43.746 "product_name": "Raid Volume", 00:21:43.746 "block_size": 512, 00:21:43.746 "num_blocks": 253952, 00:21:43.746 "uuid": "b3eaf612-57d5-4697-9980-4b09ba2fd7c2", 00:21:43.746 "assigned_rate_limits": { 00:21:43.746 "rw_ios_per_sec": 0, 00:21:43.746 "rw_mbytes_per_sec": 0, 00:21:43.746 "r_mbytes_per_sec": 0, 00:21:43.746 "w_mbytes_per_sec": 0 00:21:43.746 }, 00:21:43.746 "claimed": false, 00:21:43.746 "zoned": false, 00:21:43.746 "supported_io_types": { 00:21:43.746 "read": true, 00:21:43.746 "write": true, 00:21:43.746 "unmap": true, 00:21:43.746 "flush": true, 00:21:43.746 "reset": true, 00:21:43.746 "nvme_admin": false, 00:21:43.746 "nvme_io": false, 00:21:43.746 "nvme_io_md": false, 00:21:43.746 "write_zeroes": true, 00:21:43.746 "zcopy": false, 00:21:43.746 "get_zone_info": false, 00:21:43.746 "zone_management": false, 00:21:43.746 "zone_append": false, 00:21:43.746 "compare": false, 00:21:43.746 "compare_and_write": false, 00:21:43.746 "abort": false, 00:21:43.746 "seek_hole": false, 00:21:43.746 "seek_data": false, 00:21:43.746 "copy": false, 00:21:43.746 "nvme_iov_md": false 00:21:43.746 }, 00:21:43.746 "memory_domains": [ 00:21:43.746 { 00:21:43.746 "dma_device_id": "system", 00:21:43.746 "dma_device_type": 1 00:21:43.746 }, 00:21:43.746 { 00:21:43.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.746 "dma_device_type": 2 00:21:43.746 }, 00:21:43.746 { 00:21:43.746 "dma_device_id": "system", 00:21:43.746 "dma_device_type": 1 00:21:43.746 }, 00:21:43.746 { 00:21:43.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.746 "dma_device_type": 2 00:21:43.746 }, 00:21:43.746 { 00:21:43.746 "dma_device_id": "system", 00:21:43.746 "dma_device_type": 1 00:21:43.746 }, 00:21:43.746 { 00:21:43.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.746 "dma_device_type": 2 00:21:43.746 }, 00:21:43.746 { 
00:21:43.746 "dma_device_id": "system", 00:21:43.746 "dma_device_type": 1 00:21:43.746 }, 00:21:43.746 { 00:21:43.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.746 "dma_device_type": 2 00:21:43.746 } 00:21:43.746 ], 00:21:43.746 "driver_specific": { 00:21:43.746 "raid": { 00:21:43.746 "uuid": "b3eaf612-57d5-4697-9980-4b09ba2fd7c2", 00:21:43.746 "strip_size_kb": 64, 00:21:43.746 "state": "online", 00:21:43.746 "raid_level": "raid0", 00:21:43.746 "superblock": true, 00:21:43.746 "num_base_bdevs": 4, 00:21:43.746 "num_base_bdevs_discovered": 4, 00:21:43.746 "num_base_bdevs_operational": 4, 00:21:43.746 "base_bdevs_list": [ 00:21:43.746 { 00:21:43.746 "name": "BaseBdev1", 00:21:43.746 "uuid": "3e053c46-5fca-4d00-a78d-742795dc9d6c", 00:21:43.746 "is_configured": true, 00:21:43.746 "data_offset": 2048, 00:21:43.746 "data_size": 63488 00:21:43.746 }, 00:21:43.746 { 00:21:43.746 "name": "BaseBdev2", 00:21:43.746 "uuid": "35ca430c-98eb-4f80-88be-a6d079aa6e06", 00:21:43.746 "is_configured": true, 00:21:43.746 "data_offset": 2048, 00:21:43.746 "data_size": 63488 00:21:43.746 }, 00:21:43.746 { 00:21:43.746 "name": "BaseBdev3", 00:21:43.746 "uuid": "aaddd156-1440-4b77-8cc8-516bdae0f9b3", 00:21:43.746 "is_configured": true, 00:21:43.746 "data_offset": 2048, 00:21:43.746 "data_size": 63488 00:21:43.746 }, 00:21:43.746 { 00:21:43.746 "name": "BaseBdev4", 00:21:43.746 "uuid": "a04d5c29-488f-4d46-9c4d-0bfe562ddbdc", 00:21:43.746 "is_configured": true, 00:21:43.746 "data_offset": 2048, 00:21:43.746 "data_size": 63488 00:21:43.746 } 00:21:43.746 ] 00:21:43.746 } 00:21:43.746 } 00:21:43.746 }' 00:21:43.746 02:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:43.746 02:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:43.746 BaseBdev2 00:21:43.746 BaseBdev3 00:21:43.746 BaseBdev4' 00:21:43.746 02:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:43.746 02:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:43.746 02:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:44.006 02:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:44.006 "name": "BaseBdev1", 00:21:44.006 "aliases": [ 00:21:44.006 "3e053c46-5fca-4d00-a78d-742795dc9d6c" 00:21:44.006 ], 00:21:44.006 "product_name": "Malloc disk", 00:21:44.006 "block_size": 512, 00:21:44.006 "num_blocks": 65536, 00:21:44.006 "uuid": "3e053c46-5fca-4d00-a78d-742795dc9d6c", 00:21:44.006 "assigned_rate_limits": { 00:21:44.006 "rw_ios_per_sec": 0, 00:21:44.006 "rw_mbytes_per_sec": 0, 00:21:44.006 "r_mbytes_per_sec": 0, 00:21:44.006 "w_mbytes_per_sec": 0 00:21:44.006 }, 00:21:44.006 "claimed": true, 00:21:44.006 "claim_type": "exclusive_write", 00:21:44.006 "zoned": false, 00:21:44.006 "supported_io_types": { 00:21:44.006 "read": true, 00:21:44.006 "write": true, 00:21:44.006 "unmap": true, 00:21:44.006 "flush": true, 00:21:44.006 "reset": true, 00:21:44.006 "nvme_admin": false, 00:21:44.006 "nvme_io": false, 00:21:44.006 "nvme_io_md": false, 00:21:44.006 "write_zeroes": true, 00:21:44.006 "zcopy": true, 00:21:44.006 "get_zone_info": false, 00:21:44.006 "zone_management": false, 00:21:44.006 "zone_append": 
false, 00:21:44.006 "compare": false, 00:21:44.006 "compare_and_write": false, 00:21:44.006 "abort": true, 00:21:44.006 "seek_hole": false, 00:21:44.006 "seek_data": false, 00:21:44.006 "copy": true, 00:21:44.006 "nvme_iov_md": false 00:21:44.006 }, 00:21:44.006 "memory_domains": [ 00:21:44.006 { 00:21:44.006 "dma_device_id": "system", 00:21:44.006 "dma_device_type": 1 00:21:44.006 }, 00:21:44.006 { 00:21:44.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:44.006 "dma_device_type": 2 00:21:44.006 } 00:21:44.006 ], 00:21:44.006 "driver_specific": {} 00:21:44.006 }' 00:21:44.006 02:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:44.006 02:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:44.006 02:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:44.006 02:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:44.006 02:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:44.265 02:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:44.265 02:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:44.265 02:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:44.265 02:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:44.265 02:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:44.265 02:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:44.265 02:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:44.265 02:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:44.265 02:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:44.265 02:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:44.524 02:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:44.524 "name": "BaseBdev2", 00:21:44.524 "aliases": [ 00:21:44.524 "35ca430c-98eb-4f80-88be-a6d079aa6e06" 00:21:44.524 ], 00:21:44.524 "product_name": "Malloc disk", 00:21:44.524 "block_size": 512, 00:21:44.524 "num_blocks": 65536, 00:21:44.524 "uuid": "35ca430c-98eb-4f80-88be-a6d079aa6e06", 00:21:44.524 "assigned_rate_limits": { 00:21:44.524 "rw_ios_per_sec": 0, 00:21:44.524 "rw_mbytes_per_sec": 0, 00:21:44.524 "r_mbytes_per_sec": 0, 00:21:44.524 "w_mbytes_per_sec": 0 00:21:44.524 }, 00:21:44.524 "claimed": true, 00:21:44.524 "claim_type": "exclusive_write", 00:21:44.524 "zoned": false, 00:21:44.524 "supported_io_types": { 00:21:44.524 "read": true, 00:21:44.524 "write": true, 00:21:44.524 "unmap": true, 00:21:44.524 "flush": true, 00:21:44.524 "reset": true, 00:21:44.524 "nvme_admin": false, 00:21:44.524 "nvme_io": false, 00:21:44.524 "nvme_io_md": false, 00:21:44.524 "write_zeroes": true, 00:21:44.524 "zcopy": true, 00:21:44.524 "get_zone_info": false, 00:21:44.524 "zone_management": false, 00:21:44.524 "zone_append": false, 00:21:44.524 "compare": false, 00:21:44.524 "compare_and_write": false, 00:21:44.524 "abort": true, 00:21:44.524 "seek_hole": 
false, 00:21:44.524 "seek_data": false, 00:21:44.524 "copy": true, 00:21:44.524 "nvme_iov_md": false 00:21:44.524 }, 00:21:44.524 "memory_domains": [ 00:21:44.524 { 00:21:44.524 "dma_device_id": "system", 00:21:44.524 "dma_device_type": 1 00:21:44.524 }, 00:21:44.524 { 00:21:44.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:44.524 "dma_device_type": 2 00:21:44.524 } 00:21:44.524 ], 00:21:44.524 "driver_specific": {} 00:21:44.524 }' 00:21:44.524 02:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:44.524 02:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:44.783 02:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:44.783 02:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:44.783 02:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:44.783 02:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:44.783 02:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:44.783 02:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:44.783 02:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:44.783 02:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:45.041 02:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:45.041 02:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:45.041 02:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:45.041 02:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:45.041 02:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:45.301 02:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:45.301 "name": "BaseBdev3", 00:21:45.301 "aliases": [ 00:21:45.301 "aaddd156-1440-4b77-8cc8-516bdae0f9b3" 00:21:45.301 ], 00:21:45.301 "product_name": "Malloc disk", 00:21:45.301 "block_size": 512, 00:21:45.301 "num_blocks": 65536, 00:21:45.301 "uuid": "aaddd156-1440-4b77-8cc8-516bdae0f9b3", 00:21:45.301 "assigned_rate_limits": { 00:21:45.301 "rw_ios_per_sec": 0, 00:21:45.301 "rw_mbytes_per_sec": 0, 00:21:45.301 "r_mbytes_per_sec": 0, 00:21:45.301 "w_mbytes_per_sec": 0 00:21:45.301 }, 00:21:45.301 "claimed": true, 00:21:45.301 "claim_type": "exclusive_write", 00:21:45.301 "zoned": false, 00:21:45.301 "supported_io_types": { 00:21:45.301 "read": true, 00:21:45.301 "write": true, 00:21:45.301 "unmap": true, 00:21:45.301 "flush": true, 00:21:45.301 "reset": true, 00:21:45.301 "nvme_admin": false, 00:21:45.301 "nvme_io": false, 00:21:45.301 "nvme_io_md": false, 00:21:45.301 "write_zeroes": true, 00:21:45.301 "zcopy": true, 00:21:45.301 "get_zone_info": false, 00:21:45.301 "zone_management": false, 00:21:45.301 "zone_append": false, 00:21:45.301 "compare": false, 00:21:45.301 "compare_and_write": false, 00:21:45.301 "abort": true, 00:21:45.301 "seek_hole": false, 00:21:45.301 "seek_data": false, 00:21:45.301 "copy": true, 00:21:45.301 "nvme_iov_md": false 00:21:45.301 }, 00:21:45.301 
"memory_domains": [ 00:21:45.301 { 00:21:45.301 "dma_device_id": "system", 00:21:45.301 "dma_device_type": 1 00:21:45.301 }, 00:21:45.301 { 00:21:45.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:45.301 "dma_device_type": 2 00:21:45.301 } 00:21:45.301 ], 00:21:45.301 "driver_specific": {} 00:21:45.301 }' 00:21:45.301 02:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:45.301 02:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:45.301 02:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:45.301 02:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:45.560 02:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:45.560 02:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:45.560 02:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:45.560 02:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:45.560 02:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:45.561 02:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:45.561 02:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:45.561 02:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:45.561 02:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:45.561 02:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:45.561 02:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:45.819 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:45.819 "name": "BaseBdev4", 00:21:45.819 "aliases": [ 00:21:45.819 "a04d5c29-488f-4d46-9c4d-0bfe562ddbdc" 00:21:45.819 ], 00:21:45.819 "product_name": "Malloc disk", 00:21:45.819 "block_size": 512, 00:21:45.819 "num_blocks": 65536, 00:21:45.819 "uuid": "a04d5c29-488f-4d46-9c4d-0bfe562ddbdc", 00:21:45.819 "assigned_rate_limits": { 00:21:45.819 "rw_ios_per_sec": 0, 00:21:45.820 "rw_mbytes_per_sec": 0, 00:21:45.820 "r_mbytes_per_sec": 0, 00:21:45.820 "w_mbytes_per_sec": 0 00:21:45.820 }, 00:21:45.820 "claimed": true, 00:21:45.820 "claim_type": "exclusive_write", 00:21:45.820 "zoned": false, 00:21:45.820 "supported_io_types": { 00:21:45.820 "read": true, 00:21:45.820 "write": true, 00:21:45.820 "unmap": true, 00:21:45.820 "flush": true, 00:21:45.820 "reset": true, 00:21:45.820 "nvme_admin": false, 00:21:45.820 "nvme_io": false, 00:21:45.820 "nvme_io_md": false, 00:21:45.820 "write_zeroes": true, 00:21:45.820 "zcopy": true, 00:21:45.820 "get_zone_info": false, 00:21:45.820 "zone_management": false, 00:21:45.820 "zone_append": false, 00:21:45.820 "compare": false, 00:21:45.820 "compare_and_write": false, 00:21:45.820 "abort": true, 00:21:45.820 "seek_hole": false, 00:21:45.820 "seek_data": false, 00:21:45.820 "copy": true, 00:21:45.820 "nvme_iov_md": false 00:21:45.820 }, 00:21:45.820 "memory_domains": [ 00:21:45.820 { 00:21:45.820 "dma_device_id": "system", 00:21:45.820 "dma_device_type": 1 00:21:45.820 }, 
00:21:45.820 { 00:21:45.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:45.820 "dma_device_type": 2 00:21:45.820 } 00:21:45.820 ], 00:21:45.820 "driver_specific": {} 00:21:45.820 }' 00:21:45.820 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:46.079 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:46.079 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:46.079 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:46.079 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:46.079 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:46.079 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:46.079 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:46.079 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:46.079 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:46.337 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:46.337 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:46.337 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:46.596 [2024-07-11 02:27:36.801871] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:46.596 [2024-07-11 02:27:36.801899] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:46.596 [2024-07-11 02:27:36.801946] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:46.596 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:46.596 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:21:46.596 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:46.596 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:21:46.596 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:21:46.596 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:21:46.596 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:46.596 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:21:46.596 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:46.596 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:46.596 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:46.596 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:46.596 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:46.596 02:27:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:46.596 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:46.596 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:46.596 02:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.855 02:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:46.855 "name": "Existed_Raid", 00:21:46.855 "uuid": "b3eaf612-57d5-4697-9980-4b09ba2fd7c2", 00:21:46.855 "strip_size_kb": 64, 00:21:46.855 "state": "offline", 00:21:46.855 "raid_level": "raid0", 00:21:46.855 "superblock": true, 00:21:46.855 "num_base_bdevs": 4, 00:21:46.855 "num_base_bdevs_discovered": 3, 00:21:46.855 "num_base_bdevs_operational": 3, 00:21:46.855 "base_bdevs_list": [ 00:21:46.855 { 00:21:46.855 "name": null, 00:21:46.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.855 "is_configured": false, 00:21:46.855 "data_offset": 2048, 00:21:46.855 "data_size": 63488 00:21:46.855 }, 00:21:46.855 { 00:21:46.855 "name": "BaseBdev2", 00:21:46.855 "uuid": "35ca430c-98eb-4f80-88be-a6d079aa6e06", 00:21:46.855 "is_configured": true, 00:21:46.855 "data_offset": 2048, 00:21:46.855 "data_size": 63488 00:21:46.855 }, 00:21:46.855 { 00:21:46.855 "name": "BaseBdev3", 00:21:46.855 "uuid": "aaddd156-1440-4b77-8cc8-516bdae0f9b3", 00:21:46.855 "is_configured": true, 00:21:46.855 "data_offset": 2048, 00:21:46.855 "data_size": 63488 00:21:46.855 }, 00:21:46.855 { 00:21:46.855 "name": "BaseBdev4", 00:21:46.855 "uuid": "a04d5c29-488f-4d46-9c4d-0bfe562ddbdc", 00:21:46.855 "is_configured": true, 00:21:46.855 "data_offset": 2048, 00:21:46.855 "data_size": 63488 00:21:46.855 } 00:21:46.855 ] 00:21:46.855 }' 00:21:46.855 02:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:46.855 02:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.424 02:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:47.424 02:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:47.424 02:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.424 02:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:47.683 02:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:47.683 02:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:47.683 02:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:47.683 [2024-07-11 02:27:38.095154] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:47.941 02:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:47.941 02:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:47.941 02:27:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:47.941 02:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.941 02:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:47.941 02:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:47.941 02:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:48.200 [2024-07-11 02:27:38.546898] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:48.200 02:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:48.200 02:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:48.200 02:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.200 02:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:48.460 02:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:48.460 02:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:48.460 02:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:48.718 [2024-07-11 02:27:39.060500] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:48.718 [2024-07-11 02:27:39.060542] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16f4d70 name Existed_Raid, state offline 00:21:48.718 02:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:48.718 02:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:48.718 02:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.718 02:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:48.993 02:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:48.993 02:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:48.993 02:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:21:48.993 02:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:48.993 02:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:48.993 02:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:49.257 BaseBdev2 00:21:49.257 02:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:49.257 02:27:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:49.257 02:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:49.257 02:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:49.257 02:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:49.257 02:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:49.257 02:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:49.514 02:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:49.772 [ 00:21:49.772 { 00:21:49.772 "name": "BaseBdev2", 00:21:49.772 "aliases": [ 00:21:49.772 "978040f4-a680-4aa6-907f-b385a948850a" 00:21:49.772 ], 00:21:49.772 "product_name": "Malloc disk", 00:21:49.772 "block_size": 512, 00:21:49.772 "num_blocks": 65536, 00:21:49.772 "uuid": "978040f4-a680-4aa6-907f-b385a948850a", 00:21:49.772 "assigned_rate_limits": { 00:21:49.772 "rw_ios_per_sec": 0, 00:21:49.772 "rw_mbytes_per_sec": 0, 00:21:49.772 "r_mbytes_per_sec": 0, 00:21:49.772 "w_mbytes_per_sec": 0 00:21:49.772 }, 00:21:49.772 "claimed": false, 00:21:49.772 "zoned": false, 00:21:49.772 "supported_io_types": { 00:21:49.772 "read": true, 00:21:49.772 "write": true, 00:21:49.772 "unmap": true, 00:21:49.772 "flush": true, 00:21:49.772 "reset": true, 00:21:49.772 "nvme_admin": false, 00:21:49.772 "nvme_io": false, 00:21:49.772 "nvme_io_md": false, 00:21:49.772 "write_zeroes": true, 00:21:49.772 "zcopy": true, 00:21:49.772 "get_zone_info": false, 00:21:49.772 "zone_management": false, 00:21:49.772 "zone_append": false, 00:21:49.772 "compare": false, 00:21:49.772 "compare_and_write": false, 00:21:49.772 "abort": true, 00:21:49.772 "seek_hole": false, 00:21:49.772 "seek_data": false, 00:21:49.772 "copy": true, 00:21:49.772 "nvme_iov_md": false 00:21:49.772 }, 00:21:49.772 "memory_domains": [ 00:21:49.772 { 00:21:49.772 "dma_device_id": "system", 00:21:49.772 "dma_device_type": 1 00:21:49.772 }, 00:21:49.772 { 00:21:49.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.772 "dma_device_type": 2 00:21:49.772 } 00:21:49.772 ], 00:21:49.772 "driver_specific": {} 00:21:49.772 } 00:21:49.772 ] 00:21:49.772 02:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:49.772 02:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:49.772 02:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:49.772 02:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:50.030 BaseBdev3 00:21:50.030 02:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:50.030 02:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:50.030 02:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:50.030 02:27:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local i 00:21:50.030 02:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:50.030 02:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:50.030 02:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:50.288 02:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:50.546 [ 00:21:50.546 { 00:21:50.546 "name": "BaseBdev3", 00:21:50.546 "aliases": [ 00:21:50.546 "ee5d2164-1282-4e03-87cb-281151160e0d" 00:21:50.546 ], 00:21:50.546 "product_name": "Malloc disk", 00:21:50.546 "block_size": 512, 00:21:50.546 "num_blocks": 65536, 00:21:50.546 "uuid": "ee5d2164-1282-4e03-87cb-281151160e0d", 00:21:50.546 "assigned_rate_limits": { 00:21:50.546 "rw_ios_per_sec": 0, 00:21:50.546 "rw_mbytes_per_sec": 0, 00:21:50.546 "r_mbytes_per_sec": 0, 00:21:50.546 "w_mbytes_per_sec": 0 00:21:50.546 }, 00:21:50.546 "claimed": false, 00:21:50.546 "zoned": false, 00:21:50.546 "supported_io_types": { 00:21:50.546 "read": true, 00:21:50.546 "write": true, 00:21:50.546 "unmap": true, 00:21:50.546 "flush": true, 00:21:50.546 "reset": true, 00:21:50.546 "nvme_admin": false, 00:21:50.546 "nvme_io": false, 00:21:50.546 "nvme_io_md": false, 00:21:50.546 "write_zeroes": true, 00:21:50.546 "zcopy": true, 00:21:50.546 "get_zone_info": false, 00:21:50.546 "zone_management": false, 00:21:50.546 "zone_append": false, 00:21:50.546 "compare": false, 00:21:50.546 "compare_and_write": false, 00:21:50.546 "abort": true, 00:21:50.546 "seek_hole": false, 00:21:50.546 "seek_data": false, 00:21:50.546 "copy": true, 00:21:50.546 "nvme_iov_md": false 00:21:50.546 }, 00:21:50.546 "memory_domains": [ 00:21:50.546 { 00:21:50.546 "dma_device_id": "system", 00:21:50.546 "dma_device_type": 1 00:21:50.546 }, 00:21:50.546 { 00:21:50.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.546 "dma_device_type": 2 00:21:50.546 } 00:21:50.546 ], 00:21:50.546 "driver_specific": {} 00:21:50.546 } 00:21:50.546 ] 00:21:50.546 02:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:50.546 02:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:50.546 02:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:50.546 02:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:50.803 BaseBdev4 00:21:50.803 02:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:21:50.803 02:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:21:50.803 02:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:50.803 02:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:50.803 02:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:50.803 02:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:21:50.803 02:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:51.061 02:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:51.320 [ 00:21:51.320 { 00:21:51.320 "name": "BaseBdev4", 00:21:51.320 "aliases": [ 00:21:51.320 "f920209a-1bcf-4c07-bb3b-34cfabf76579" 00:21:51.320 ], 00:21:51.320 "product_name": "Malloc disk", 00:21:51.320 "block_size": 512, 00:21:51.320 "num_blocks": 65536, 00:21:51.320 "uuid": "f920209a-1bcf-4c07-bb3b-34cfabf76579", 00:21:51.320 "assigned_rate_limits": { 00:21:51.320 "rw_ios_per_sec": 0, 00:21:51.320 "rw_mbytes_per_sec": 0, 00:21:51.320 "r_mbytes_per_sec": 0, 00:21:51.320 "w_mbytes_per_sec": 0 00:21:51.320 }, 00:21:51.320 "claimed": false, 00:21:51.320 "zoned": false, 00:21:51.320 "supported_io_types": { 00:21:51.320 "read": true, 00:21:51.320 "write": true, 00:21:51.320 "unmap": true, 00:21:51.320 "flush": true, 00:21:51.320 "reset": true, 00:21:51.320 "nvme_admin": false, 00:21:51.320 "nvme_io": false, 00:21:51.320 "nvme_io_md": false, 00:21:51.320 "write_zeroes": true, 00:21:51.320 "zcopy": true, 00:21:51.320 "get_zone_info": false, 00:21:51.320 "zone_management": false, 00:21:51.320 "zone_append": false, 00:21:51.320 "compare": false, 00:21:51.320 "compare_and_write": false, 00:21:51.320 "abort": true, 00:21:51.320 "seek_hole": false, 00:21:51.320 "seek_data": false, 00:21:51.320 "copy": true, 00:21:51.320 "nvme_iov_md": false 00:21:51.320 }, 00:21:51.320 "memory_domains": [ 00:21:51.320 { 00:21:51.320 "dma_device_id": "system", 00:21:51.320 "dma_device_type": 1 00:21:51.320 }, 00:21:51.320 { 00:21:51.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.320 "dma_device_type": 2 00:21:51.320 } 00:21:51.320 ], 00:21:51.320 "driver_specific": {} 00:21:51.320 } 00:21:51.320 ] 00:21:51.320 02:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:51.320 02:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:51.320 02:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:51.320 02:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:51.579 [2024-07-11 02:27:41.755508] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:51.579 [2024-07-11 02:27:41.755546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:51.579 [2024-07-11 02:27:41.755565] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:51.579 [2024-07-11 02:27:41.756884] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:51.579 [2024-07-11 02:27:41.756925] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:51.579 02:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:51.579 02:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:21:51.579 02:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:51.579 02:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:51.579 02:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:51.579 02:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:51.579 02:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:51.579 02:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:51.579 02:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:51.579 02:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:51.579 02:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.579 02:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:51.838 02:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:51.838 "name": "Existed_Raid", 00:21:51.838 "uuid": "b510704c-77b8-4771-a842-c093829512f0", 00:21:51.838 "strip_size_kb": 64, 00:21:51.838 "state": "configuring", 00:21:51.838 "raid_level": "raid0", 00:21:51.838 "superblock": true, 00:21:51.838 "num_base_bdevs": 4, 00:21:51.838 "num_base_bdevs_discovered": 3, 00:21:51.838 "num_base_bdevs_operational": 4, 00:21:51.838 "base_bdevs_list": [ 00:21:51.838 { 00:21:51.838 "name": "BaseBdev1", 00:21:51.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.838 "is_configured": false, 00:21:51.838 "data_offset": 0, 00:21:51.838 "data_size": 0 00:21:51.838 }, 00:21:51.838 { 00:21:51.838 "name": "BaseBdev2", 00:21:51.838 "uuid": "978040f4-a680-4aa6-907f-b385a948850a", 00:21:51.838 "is_configured": true, 00:21:51.838 "data_offset": 2048, 00:21:51.838 "data_size": 63488 00:21:51.838 }, 00:21:51.838 { 00:21:51.838 "name": "BaseBdev3", 00:21:51.838 "uuid": "ee5d2164-1282-4e03-87cb-281151160e0d", 00:21:51.838 "is_configured": true, 00:21:51.838 "data_offset": 2048, 00:21:51.838 "data_size": 63488 00:21:51.839 }, 00:21:51.839 { 00:21:51.839 "name": "BaseBdev4", 00:21:51.839 "uuid": "f920209a-1bcf-4c07-bb3b-34cfabf76579", 00:21:51.839 "is_configured": true, 00:21:51.839 "data_offset": 2048, 00:21:51.839 "data_size": 63488 00:21:51.839 } 00:21:51.839 ] 00:21:51.839 }' 00:21:51.839 02:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:51.839 02:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.773 02:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:52.773 [2024-07-11 02:27:43.111074] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:52.773 02:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:52.773 02:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:52.773 02:27:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:52.773 02:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:52.773 02:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:52.773 02:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:52.773 02:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:52.773 02:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:52.773 02:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:52.773 02:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:52.773 02:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.773 02:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:53.340 02:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:53.340 "name": "Existed_Raid", 00:21:53.340 "uuid": "b510704c-77b8-4771-a842-c093829512f0", 00:21:53.340 "strip_size_kb": 64, 00:21:53.340 "state": "configuring", 00:21:53.340 "raid_level": "raid0", 00:21:53.340 "superblock": true, 00:21:53.340 "num_base_bdevs": 4, 00:21:53.340 "num_base_bdevs_discovered": 2, 00:21:53.340 "num_base_bdevs_operational": 4, 00:21:53.340 "base_bdevs_list": [ 00:21:53.340 { 00:21:53.340 "name": "BaseBdev1", 00:21:53.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.340 "is_configured": false, 00:21:53.340 "data_offset": 0, 00:21:53.340 "data_size": 0 00:21:53.340 }, 00:21:53.340 { 00:21:53.340 "name": null, 00:21:53.340 "uuid": "978040f4-a680-4aa6-907f-b385a948850a", 00:21:53.340 "is_configured": false, 00:21:53.340 "data_offset": 2048, 00:21:53.340 "data_size": 63488 00:21:53.340 }, 00:21:53.340 { 00:21:53.340 "name": "BaseBdev3", 00:21:53.340 "uuid": "ee5d2164-1282-4e03-87cb-281151160e0d", 00:21:53.340 "is_configured": true, 00:21:53.340 "data_offset": 2048, 00:21:53.340 "data_size": 63488 00:21:53.340 }, 00:21:53.340 { 00:21:53.340 "name": "BaseBdev4", 00:21:53.340 "uuid": "f920209a-1bcf-4c07-bb3b-34cfabf76579", 00:21:53.340 "is_configured": true, 00:21:53.340 "data_offset": 2048, 00:21:53.340 "data_size": 63488 00:21:53.340 } 00:21:53.340 ] 00:21:53.340 }' 00:21:53.340 02:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:53.340 02:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:53.905 02:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.905 02:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:54.163 02:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:54.163 02:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 
00:21:54.421 [2024-07-11 02:27:44.722735] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:54.421 BaseBdev1 00:21:54.421 02:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:21:54.421 02:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:54.421 02:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:54.421 02:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:54.422 02:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:54.422 02:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:54.422 02:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:54.680 02:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:54.680 [ 00:21:54.680 { 00:21:54.680 "name": "BaseBdev1", 00:21:54.680 "aliases": [ 00:21:54.680 "ebc95089-f141-412d-97ae-a6d6b636f3fb" 00:21:54.680 ], 00:21:54.680 "product_name": "Malloc disk", 00:21:54.680 "block_size": 512, 00:21:54.680 "num_blocks": 65536, 00:21:54.680 "uuid": "ebc95089-f141-412d-97ae-a6d6b636f3fb", 00:21:54.680 "assigned_rate_limits": { 00:21:54.680 "rw_ios_per_sec": 0, 00:21:54.680 "rw_mbytes_per_sec": 0, 00:21:54.680 "r_mbytes_per_sec": 0, 00:21:54.680 "w_mbytes_per_sec": 0 00:21:54.680 }, 00:21:54.680 "claimed": true, 00:21:54.680 "claim_type": "exclusive_write", 00:21:54.680 "zoned": false, 00:21:54.680 "supported_io_types": { 00:21:54.680 "read": true, 00:21:54.680 "write": true, 00:21:54.680 "unmap": true, 00:21:54.680 "flush": true, 00:21:54.680 "reset": true, 00:21:54.680 "nvme_admin": false, 00:21:54.680 "nvme_io": false, 00:21:54.680 "nvme_io_md": false, 00:21:54.680 "write_zeroes": true, 00:21:54.680 "zcopy": true, 00:21:54.680 "get_zone_info": false, 00:21:54.680 "zone_management": false, 00:21:54.680 "zone_append": false, 00:21:54.680 "compare": false, 00:21:54.680 "compare_and_write": false, 00:21:54.680 "abort": true, 00:21:54.680 "seek_hole": false, 00:21:54.680 "seek_data": false, 00:21:54.680 "copy": true, 00:21:54.680 "nvme_iov_md": false 00:21:54.680 }, 00:21:54.680 "memory_domains": [ 00:21:54.680 { 00:21:54.680 "dma_device_id": "system", 00:21:54.680 "dma_device_type": 1 00:21:54.680 }, 00:21:54.680 { 00:21:54.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:54.680 "dma_device_type": 2 00:21:54.680 } 00:21:54.680 ], 00:21:54.680 "driver_specific": {} 00:21:54.680 } 00:21:54.680 ] 00:21:54.939 02:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:54.939 02:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:54.939 02:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:54.939 02:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:54.939 02:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 
00:21:54.939 02:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:54.939 02:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:54.939 02:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:54.939 02:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:54.939 02:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:54.939 02:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:54.939 02:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.939 02:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:54.939 02:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:54.939 "name": "Existed_Raid", 00:21:54.939 "uuid": "b510704c-77b8-4771-a842-c093829512f0", 00:21:54.939 "strip_size_kb": 64, 00:21:54.939 "state": "configuring", 00:21:54.939 "raid_level": "raid0", 00:21:54.939 "superblock": true, 00:21:54.939 "num_base_bdevs": 4, 00:21:54.939 "num_base_bdevs_discovered": 3, 00:21:54.939 "num_base_bdevs_operational": 4, 00:21:54.939 "base_bdevs_list": [ 00:21:54.939 { 00:21:54.939 "name": "BaseBdev1", 00:21:54.939 "uuid": "ebc95089-f141-412d-97ae-a6d6b636f3fb", 00:21:54.939 "is_configured": true, 00:21:54.939 "data_offset": 2048, 00:21:54.939 "data_size": 63488 00:21:54.939 }, 00:21:54.939 { 00:21:54.939 "name": null, 00:21:54.939 "uuid": "978040f4-a680-4aa6-907f-b385a948850a", 00:21:54.939 "is_configured": false, 00:21:54.939 "data_offset": 2048, 00:21:54.939 "data_size": 63488 00:21:54.939 }, 00:21:54.939 { 00:21:54.939 "name": "BaseBdev3", 00:21:54.939 "uuid": "ee5d2164-1282-4e03-87cb-281151160e0d", 00:21:54.939 "is_configured": true, 00:21:54.939 "data_offset": 2048, 00:21:54.939 "data_size": 63488 00:21:54.939 }, 00:21:54.939 { 00:21:54.939 "name": "BaseBdev4", 00:21:54.939 "uuid": "f920209a-1bcf-4c07-bb3b-34cfabf76579", 00:21:54.939 "is_configured": true, 00:21:54.939 "data_offset": 2048, 00:21:54.939 "data_size": 63488 00:21:54.939 } 00:21:54.939 ] 00:21:54.939 }' 00:21:54.939 02:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:54.939 02:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:55.506 02:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.506 02:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:55.764 02:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:55.764 02:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:56.023 [2024-07-11 02:27:46.294934] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:56.023 02:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:56.023 02:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:56.023 02:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:56.023 02:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:56.023 02:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:56.023 02:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:56.023 02:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:56.023 02:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:56.023 02:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:56.023 02:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:56.023 02:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.023 02:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:56.282 02:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:56.282 "name": "Existed_Raid", 00:21:56.282 "uuid": "b510704c-77b8-4771-a842-c093829512f0", 00:21:56.282 "strip_size_kb": 64, 00:21:56.282 "state": "configuring", 00:21:56.282 "raid_level": "raid0", 00:21:56.282 "superblock": true, 00:21:56.282 "num_base_bdevs": 4, 00:21:56.282 "num_base_bdevs_discovered": 2, 00:21:56.282 "num_base_bdevs_operational": 4, 00:21:56.282 "base_bdevs_list": [ 00:21:56.282 { 00:21:56.282 "name": "BaseBdev1", 00:21:56.282 "uuid": "ebc95089-f141-412d-97ae-a6d6b636f3fb", 00:21:56.282 "is_configured": true, 00:21:56.282 "data_offset": 2048, 00:21:56.282 "data_size": 63488 00:21:56.282 }, 00:21:56.282 { 00:21:56.282 "name": null, 00:21:56.283 "uuid": "978040f4-a680-4aa6-907f-b385a948850a", 00:21:56.283 "is_configured": false, 00:21:56.283 "data_offset": 2048, 00:21:56.283 "data_size": 63488 00:21:56.283 }, 00:21:56.283 { 00:21:56.283 "name": null, 00:21:56.283 "uuid": "ee5d2164-1282-4e03-87cb-281151160e0d", 00:21:56.283 "is_configured": false, 00:21:56.283 "data_offset": 2048, 00:21:56.283 "data_size": 63488 00:21:56.283 }, 00:21:56.283 { 00:21:56.283 "name": "BaseBdev4", 00:21:56.283 "uuid": "f920209a-1bcf-4c07-bb3b-34cfabf76579", 00:21:56.283 "is_configured": true, 00:21:56.283 "data_offset": 2048, 00:21:56.283 "data_size": 63488 00:21:56.283 } 00:21:56.283 ] 00:21:56.283 }' 00:21:56.283 02:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:56.283 02:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:56.849 02:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.850 02:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:57.108 02:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:57.108 
02:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:57.367 [2024-07-11 02:27:47.726746] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:57.367 02:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:57.367 02:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:57.367 02:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:57.367 02:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:57.367 02:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:57.367 02:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:57.367 02:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:57.367 02:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:57.367 02:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:57.367 02:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:57.367 02:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:57.367 02:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.626 02:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:57.626 "name": "Existed_Raid", 00:21:57.626 "uuid": "b510704c-77b8-4771-a842-c093829512f0", 00:21:57.626 "strip_size_kb": 64, 00:21:57.626 "state": "configuring", 00:21:57.626 "raid_level": "raid0", 00:21:57.626 "superblock": true, 00:21:57.626 "num_base_bdevs": 4, 00:21:57.626 "num_base_bdevs_discovered": 3, 00:21:57.626 "num_base_bdevs_operational": 4, 00:21:57.626 "base_bdevs_list": [ 00:21:57.626 { 00:21:57.626 "name": "BaseBdev1", 00:21:57.626 "uuid": "ebc95089-f141-412d-97ae-a6d6b636f3fb", 00:21:57.626 "is_configured": true, 00:21:57.626 "data_offset": 2048, 00:21:57.626 "data_size": 63488 00:21:57.626 }, 00:21:57.626 { 00:21:57.626 "name": null, 00:21:57.626 "uuid": "978040f4-a680-4aa6-907f-b385a948850a", 00:21:57.626 "is_configured": false, 00:21:57.626 "data_offset": 2048, 00:21:57.626 "data_size": 63488 00:21:57.626 }, 00:21:57.626 { 00:21:57.626 "name": "BaseBdev3", 00:21:57.626 "uuid": "ee5d2164-1282-4e03-87cb-281151160e0d", 00:21:57.626 "is_configured": true, 00:21:57.626 "data_offset": 2048, 00:21:57.626 "data_size": 63488 00:21:57.626 }, 00:21:57.626 { 00:21:57.626 "name": "BaseBdev4", 00:21:57.626 "uuid": "f920209a-1bcf-4c07-bb3b-34cfabf76579", 00:21:57.626 "is_configured": true, 00:21:57.626 "data_offset": 2048, 00:21:57.626 "data_size": 63488 00:21:57.626 } 00:21:57.626 ] 00:21:57.626 }' 00:21:57.626 02:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:57.626 02:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:58.193 02:27:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.193 02:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:58.451 02:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:58.451 02:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:58.710 [2024-07-11 02:27:48.998147] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:58.710 02:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:58.710 02:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:58.710 02:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:58.710 02:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:58.710 02:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:58.710 02:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:58.710 02:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:58.710 02:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:58.710 02:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:58.710 02:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:58.710 02:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.710 02:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:58.969 02:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:58.969 "name": "Existed_Raid", 00:21:58.969 "uuid": "b510704c-77b8-4771-a842-c093829512f0", 00:21:58.969 "strip_size_kb": 64, 00:21:58.969 "state": "configuring", 00:21:58.969 "raid_level": "raid0", 00:21:58.969 "superblock": true, 00:21:58.969 "num_base_bdevs": 4, 00:21:58.969 "num_base_bdevs_discovered": 2, 00:21:58.969 "num_base_bdevs_operational": 4, 00:21:58.969 "base_bdevs_list": [ 00:21:58.969 { 00:21:58.969 "name": null, 00:21:58.969 "uuid": "ebc95089-f141-412d-97ae-a6d6b636f3fb", 00:21:58.969 "is_configured": false, 00:21:58.969 "data_offset": 2048, 00:21:58.969 "data_size": 63488 00:21:58.969 }, 00:21:58.969 { 00:21:58.969 "name": null, 00:21:58.969 "uuid": "978040f4-a680-4aa6-907f-b385a948850a", 00:21:58.969 "is_configured": false, 00:21:58.969 "data_offset": 2048, 00:21:58.969 "data_size": 63488 00:21:58.969 }, 00:21:58.969 { 00:21:58.969 "name": "BaseBdev3", 00:21:58.969 "uuid": "ee5d2164-1282-4e03-87cb-281151160e0d", 00:21:58.969 "is_configured": true, 00:21:58.969 "data_offset": 2048, 00:21:58.969 "data_size": 63488 00:21:58.969 }, 00:21:58.970 { 00:21:58.970 "name": "BaseBdev4", 00:21:58.970 "uuid": 
"f920209a-1bcf-4c07-bb3b-34cfabf76579", 00:21:58.970 "is_configured": true, 00:21:58.970 "data_offset": 2048, 00:21:58.970 "data_size": 63488 00:21:58.970 } 00:21:58.970 ] 00:21:58.970 }' 00:21:58.970 02:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:58.970 02:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:59.990 02:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.990 02:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:59.990 02:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:59.990 02:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:00.248 [2024-07-11 02:27:50.556744] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:00.248 02:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:00.248 02:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:00.248 02:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:00.248 02:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:00.248 02:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:00.248 02:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:00.248 02:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:00.248 02:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:00.248 02:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:00.248 02:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:00.248 02:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.248 02:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:00.507 02:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:00.507 "name": "Existed_Raid", 00:22:00.507 "uuid": "b510704c-77b8-4771-a842-c093829512f0", 00:22:00.507 "strip_size_kb": 64, 00:22:00.507 "state": "configuring", 00:22:00.507 "raid_level": "raid0", 00:22:00.507 "superblock": true, 00:22:00.507 "num_base_bdevs": 4, 00:22:00.507 "num_base_bdevs_discovered": 3, 00:22:00.507 "num_base_bdevs_operational": 4, 00:22:00.507 "base_bdevs_list": [ 00:22:00.507 { 00:22:00.507 "name": null, 00:22:00.507 "uuid": "ebc95089-f141-412d-97ae-a6d6b636f3fb", 00:22:00.507 "is_configured": false, 00:22:00.507 "data_offset": 2048, 00:22:00.507 "data_size": 63488 00:22:00.507 }, 00:22:00.507 { 00:22:00.507 "name": "BaseBdev2", 00:22:00.507 "uuid": 
"978040f4-a680-4aa6-907f-b385a948850a", 00:22:00.507 "is_configured": true, 00:22:00.507 "data_offset": 2048, 00:22:00.507 "data_size": 63488 00:22:00.507 }, 00:22:00.507 { 00:22:00.507 "name": "BaseBdev3", 00:22:00.507 "uuid": "ee5d2164-1282-4e03-87cb-281151160e0d", 00:22:00.507 "is_configured": true, 00:22:00.507 "data_offset": 2048, 00:22:00.507 "data_size": 63488 00:22:00.507 }, 00:22:00.507 { 00:22:00.507 "name": "BaseBdev4", 00:22:00.507 "uuid": "f920209a-1bcf-4c07-bb3b-34cfabf76579", 00:22:00.507 "is_configured": true, 00:22:00.507 "data_offset": 2048, 00:22:00.507 "data_size": 63488 00:22:00.507 } 00:22:00.507 ] 00:22:00.507 }' 00:22:00.507 02:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:00.507 02:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:01.444 02:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.444 02:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:01.703 02:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:01.703 02:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.703 02:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:01.963 02:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u ebc95089-f141-412d-97ae-a6d6b636f3fb 00:22:02.223 [2024-07-11 02:27:52.461105] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:02.223 [2024-07-11 02:27:52.461256] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1546ba0 00:22:02.223 [2024-07-11 02:27:52.461269] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:02.223 [2024-07-11 02:27:52.461446] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x153ae80 00:22:02.223 [2024-07-11 02:27:52.461563] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1546ba0 00:22:02.223 [2024-07-11 02:27:52.461573] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1546ba0 00:22:02.223 [2024-07-11 02:27:52.461660] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:02.223 NewBaseBdev 00:22:02.223 02:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:22:02.223 02:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:22:02.223 02:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:02.223 02:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:02.223 02:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:02.223 02:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:02.223 02:27:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:02.793 02:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:03.053 [ 00:22:03.053 { 00:22:03.053 "name": "NewBaseBdev", 00:22:03.053 "aliases": [ 00:22:03.053 "ebc95089-f141-412d-97ae-a6d6b636f3fb" 00:22:03.053 ], 00:22:03.053 "product_name": "Malloc disk", 00:22:03.053 "block_size": 512, 00:22:03.053 "num_blocks": 65536, 00:22:03.053 "uuid": "ebc95089-f141-412d-97ae-a6d6b636f3fb", 00:22:03.053 "assigned_rate_limits": { 00:22:03.053 "rw_ios_per_sec": 0, 00:22:03.053 "rw_mbytes_per_sec": 0, 00:22:03.053 "r_mbytes_per_sec": 0, 00:22:03.053 "w_mbytes_per_sec": 0 00:22:03.053 }, 00:22:03.053 "claimed": true, 00:22:03.053 "claim_type": "exclusive_write", 00:22:03.053 "zoned": false, 00:22:03.053 "supported_io_types": { 00:22:03.053 "read": true, 00:22:03.053 "write": true, 00:22:03.053 "unmap": true, 00:22:03.053 "flush": true, 00:22:03.053 "reset": true, 00:22:03.053 "nvme_admin": false, 00:22:03.053 "nvme_io": false, 00:22:03.053 "nvme_io_md": false, 00:22:03.053 "write_zeroes": true, 00:22:03.053 "zcopy": true, 00:22:03.053 "get_zone_info": false, 00:22:03.053 "zone_management": false, 00:22:03.053 "zone_append": false, 00:22:03.053 "compare": false, 00:22:03.053 "compare_and_write": false, 00:22:03.053 "abort": true, 00:22:03.053 "seek_hole": false, 00:22:03.053 "seek_data": false, 00:22:03.053 "copy": true, 00:22:03.053 "nvme_iov_md": false 00:22:03.053 }, 00:22:03.053 "memory_domains": [ 00:22:03.053 { 00:22:03.053 "dma_device_id": "system", 00:22:03.053 "dma_device_type": 1 00:22:03.053 }, 00:22:03.053 { 00:22:03.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:03.053 "dma_device_type": 2 00:22:03.053 } 00:22:03.053 ], 00:22:03.053 "driver_specific": {} 00:22:03.053 } 00:22:03.053 ] 00:22:03.053 02:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:03.053 02:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:22:03.053 02:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:03.053 02:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:03.053 02:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:03.053 02:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:03.053 02:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:03.053 02:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:03.053 02:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:03.053 02:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:03.053 02:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:03.053 02:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:22:03.053 02:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:03.620 02:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:03.620 "name": "Existed_Raid", 00:22:03.620 "uuid": "b510704c-77b8-4771-a842-c093829512f0", 00:22:03.620 "strip_size_kb": 64, 00:22:03.620 "state": "online", 00:22:03.620 "raid_level": "raid0", 00:22:03.620 "superblock": true, 00:22:03.620 "num_base_bdevs": 4, 00:22:03.620 "num_base_bdevs_discovered": 4, 00:22:03.620 "num_base_bdevs_operational": 4, 00:22:03.620 "base_bdevs_list": [ 00:22:03.620 { 00:22:03.620 "name": "NewBaseBdev", 00:22:03.620 "uuid": "ebc95089-f141-412d-97ae-a6d6b636f3fb", 00:22:03.620 "is_configured": true, 00:22:03.620 "data_offset": 2048, 00:22:03.620 "data_size": 63488 00:22:03.620 }, 00:22:03.620 { 00:22:03.620 "name": "BaseBdev2", 00:22:03.620 "uuid": "978040f4-a680-4aa6-907f-b385a948850a", 00:22:03.620 "is_configured": true, 00:22:03.620 "data_offset": 2048, 00:22:03.620 "data_size": 63488 00:22:03.620 }, 00:22:03.620 { 00:22:03.620 "name": "BaseBdev3", 00:22:03.620 "uuid": "ee5d2164-1282-4e03-87cb-281151160e0d", 00:22:03.620 "is_configured": true, 00:22:03.620 "data_offset": 2048, 00:22:03.620 "data_size": 63488 00:22:03.620 }, 00:22:03.620 { 00:22:03.620 "name": "BaseBdev4", 00:22:03.620 "uuid": "f920209a-1bcf-4c07-bb3b-34cfabf76579", 00:22:03.620 "is_configured": true, 00:22:03.620 "data_offset": 2048, 00:22:03.620 "data_size": 63488 00:22:03.620 } 00:22:03.620 ] 00:22:03.620 }' 00:22:03.620 02:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:03.620 02:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.558 02:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:04.558 02:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:04.558 02:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:04.558 02:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:04.558 02:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:04.558 02:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:04.558 02:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:04.558 02:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:04.558 [2024-07-11 02:27:54.843899] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:04.558 02:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:04.558 "name": "Existed_Raid", 00:22:04.558 "aliases": [ 00:22:04.558 "b510704c-77b8-4771-a842-c093829512f0" 00:22:04.558 ], 00:22:04.558 "product_name": "Raid Volume", 00:22:04.558 "block_size": 512, 00:22:04.558 "num_blocks": 253952, 00:22:04.558 "uuid": "b510704c-77b8-4771-a842-c093829512f0", 00:22:04.558 "assigned_rate_limits": { 00:22:04.558 "rw_ios_per_sec": 0, 00:22:04.558 "rw_mbytes_per_sec": 0, 00:22:04.558 "r_mbytes_per_sec": 0, 00:22:04.558 "w_mbytes_per_sec": 0 00:22:04.558 }, 00:22:04.558 
"claimed": false, 00:22:04.558 "zoned": false, 00:22:04.558 "supported_io_types": { 00:22:04.558 "read": true, 00:22:04.558 "write": true, 00:22:04.558 "unmap": true, 00:22:04.558 "flush": true, 00:22:04.558 "reset": true, 00:22:04.558 "nvme_admin": false, 00:22:04.558 "nvme_io": false, 00:22:04.558 "nvme_io_md": false, 00:22:04.558 "write_zeroes": true, 00:22:04.558 "zcopy": false, 00:22:04.558 "get_zone_info": false, 00:22:04.558 "zone_management": false, 00:22:04.558 "zone_append": false, 00:22:04.558 "compare": false, 00:22:04.558 "compare_and_write": false, 00:22:04.558 "abort": false, 00:22:04.558 "seek_hole": false, 00:22:04.558 "seek_data": false, 00:22:04.558 "copy": false, 00:22:04.558 "nvme_iov_md": false 00:22:04.558 }, 00:22:04.558 "memory_domains": [ 00:22:04.558 { 00:22:04.558 "dma_device_id": "system", 00:22:04.558 "dma_device_type": 1 00:22:04.558 }, 00:22:04.558 { 00:22:04.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:04.558 "dma_device_type": 2 00:22:04.558 }, 00:22:04.558 { 00:22:04.558 "dma_device_id": "system", 00:22:04.558 "dma_device_type": 1 00:22:04.558 }, 00:22:04.558 { 00:22:04.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:04.558 "dma_device_type": 2 00:22:04.558 }, 00:22:04.558 { 00:22:04.558 "dma_device_id": "system", 00:22:04.558 "dma_device_type": 1 00:22:04.558 }, 00:22:04.558 { 00:22:04.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:04.558 "dma_device_type": 2 00:22:04.558 }, 00:22:04.558 { 00:22:04.558 "dma_device_id": "system", 00:22:04.558 "dma_device_type": 1 00:22:04.558 }, 00:22:04.558 { 00:22:04.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:04.558 "dma_device_type": 2 00:22:04.558 } 00:22:04.558 ], 00:22:04.558 "driver_specific": { 00:22:04.558 "raid": { 00:22:04.558 "uuid": "b510704c-77b8-4771-a842-c093829512f0", 00:22:04.558 "strip_size_kb": 64, 00:22:04.558 "state": "online", 00:22:04.558 "raid_level": "raid0", 00:22:04.558 "superblock": true, 00:22:04.558 "num_base_bdevs": 4, 00:22:04.558 "num_base_bdevs_discovered": 4, 00:22:04.558 "num_base_bdevs_operational": 4, 00:22:04.558 "base_bdevs_list": [ 00:22:04.558 { 00:22:04.558 "name": "NewBaseBdev", 00:22:04.558 "uuid": "ebc95089-f141-412d-97ae-a6d6b636f3fb", 00:22:04.558 "is_configured": true, 00:22:04.558 "data_offset": 2048, 00:22:04.558 "data_size": 63488 00:22:04.558 }, 00:22:04.558 { 00:22:04.558 "name": "BaseBdev2", 00:22:04.558 "uuid": "978040f4-a680-4aa6-907f-b385a948850a", 00:22:04.558 "is_configured": true, 00:22:04.558 "data_offset": 2048, 00:22:04.558 "data_size": 63488 00:22:04.558 }, 00:22:04.558 { 00:22:04.558 "name": "BaseBdev3", 00:22:04.558 "uuid": "ee5d2164-1282-4e03-87cb-281151160e0d", 00:22:04.558 "is_configured": true, 00:22:04.558 "data_offset": 2048, 00:22:04.558 "data_size": 63488 00:22:04.558 }, 00:22:04.558 { 00:22:04.558 "name": "BaseBdev4", 00:22:04.558 "uuid": "f920209a-1bcf-4c07-bb3b-34cfabf76579", 00:22:04.558 "is_configured": true, 00:22:04.558 "data_offset": 2048, 00:22:04.558 "data_size": 63488 00:22:04.558 } 00:22:04.558 ] 00:22:04.558 } 00:22:04.558 } 00:22:04.558 }' 00:22:04.558 02:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:04.558 02:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:04.558 BaseBdev2 00:22:04.558 BaseBdev3 00:22:04.558 BaseBdev4' 00:22:04.558 02:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:22:04.558 02:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:04.558 02:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:05.127 02:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:05.127 "name": "NewBaseBdev", 00:22:05.127 "aliases": [ 00:22:05.127 "ebc95089-f141-412d-97ae-a6d6b636f3fb" 00:22:05.127 ], 00:22:05.127 "product_name": "Malloc disk", 00:22:05.127 "block_size": 512, 00:22:05.127 "num_blocks": 65536, 00:22:05.127 "uuid": "ebc95089-f141-412d-97ae-a6d6b636f3fb", 00:22:05.127 "assigned_rate_limits": { 00:22:05.127 "rw_ios_per_sec": 0, 00:22:05.127 "rw_mbytes_per_sec": 0, 00:22:05.127 "r_mbytes_per_sec": 0, 00:22:05.127 "w_mbytes_per_sec": 0 00:22:05.127 }, 00:22:05.127 "claimed": true, 00:22:05.127 "claim_type": "exclusive_write", 00:22:05.127 "zoned": false, 00:22:05.127 "supported_io_types": { 00:22:05.127 "read": true, 00:22:05.127 "write": true, 00:22:05.127 "unmap": true, 00:22:05.127 "flush": true, 00:22:05.127 "reset": true, 00:22:05.127 "nvme_admin": false, 00:22:05.127 "nvme_io": false, 00:22:05.127 "nvme_io_md": false, 00:22:05.127 "write_zeroes": true, 00:22:05.127 "zcopy": true, 00:22:05.127 "get_zone_info": false, 00:22:05.127 "zone_management": false, 00:22:05.127 "zone_append": false, 00:22:05.127 "compare": false, 00:22:05.127 "compare_and_write": false, 00:22:05.127 "abort": true, 00:22:05.127 "seek_hole": false, 00:22:05.127 "seek_data": false, 00:22:05.127 "copy": true, 00:22:05.127 "nvme_iov_md": false 00:22:05.127 }, 00:22:05.127 "memory_domains": [ 00:22:05.127 { 00:22:05.127 "dma_device_id": "system", 00:22:05.127 "dma_device_type": 1 00:22:05.127 }, 00:22:05.127 { 00:22:05.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.127 "dma_device_type": 2 00:22:05.127 } 00:22:05.127 ], 00:22:05.127 "driver_specific": {} 00:22:05.127 }' 00:22:05.127 02:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:05.127 02:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:05.127 02:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:05.127 02:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:05.386 02:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:05.386 02:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:05.386 02:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:05.386 02:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:05.386 02:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:05.386 02:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:05.386 02:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:05.386 02:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:05.386 02:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:05.386 02:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:05.386 02:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:05.646 02:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:05.646 "name": "BaseBdev2", 00:22:05.646 "aliases": [ 00:22:05.646 "978040f4-a680-4aa6-907f-b385a948850a" 00:22:05.646 ], 00:22:05.646 "product_name": "Malloc disk", 00:22:05.646 "block_size": 512, 00:22:05.646 "num_blocks": 65536, 00:22:05.646 "uuid": "978040f4-a680-4aa6-907f-b385a948850a", 00:22:05.646 "assigned_rate_limits": { 00:22:05.646 "rw_ios_per_sec": 0, 00:22:05.646 "rw_mbytes_per_sec": 0, 00:22:05.646 "r_mbytes_per_sec": 0, 00:22:05.646 "w_mbytes_per_sec": 0 00:22:05.646 }, 00:22:05.646 "claimed": true, 00:22:05.646 "claim_type": "exclusive_write", 00:22:05.646 "zoned": false, 00:22:05.646 "supported_io_types": { 00:22:05.646 "read": true, 00:22:05.646 "write": true, 00:22:05.646 "unmap": true, 00:22:05.646 "flush": true, 00:22:05.646 "reset": true, 00:22:05.646 "nvme_admin": false, 00:22:05.646 "nvme_io": false, 00:22:05.646 "nvme_io_md": false, 00:22:05.646 "write_zeroes": true, 00:22:05.646 "zcopy": true, 00:22:05.646 "get_zone_info": false, 00:22:05.646 "zone_management": false, 00:22:05.646 "zone_append": false, 00:22:05.646 "compare": false, 00:22:05.646 "compare_and_write": false, 00:22:05.646 "abort": true, 00:22:05.646 "seek_hole": false, 00:22:05.646 "seek_data": false, 00:22:05.646 "copy": true, 00:22:05.646 "nvme_iov_md": false 00:22:05.646 }, 00:22:05.646 "memory_domains": [ 00:22:05.646 { 00:22:05.646 "dma_device_id": "system", 00:22:05.646 "dma_device_type": 1 00:22:05.646 }, 00:22:05.646 { 00:22:05.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.646 "dma_device_type": 2 00:22:05.646 } 00:22:05.646 ], 00:22:05.646 "driver_specific": {} 00:22:05.646 }' 00:22:05.646 02:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:05.906 02:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:05.906 02:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:05.906 02:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:05.906 02:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:05.906 02:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:05.906 02:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:05.906 02:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:05.906 02:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:05.906 02:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:06.166 02:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:06.166 02:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:06.166 02:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:06.166 02:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:06.166 
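
The block above now repeats for each configured base bdev name collected from the raid dump (NewBaseBdev, BaseBdev2, BaseBdev3, BaseBdev4). Each iteration boils down to the assertions below; this is a condensed sketch of the traced loop, not the harness source:

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for name in NewBaseBdev BaseBdev2 BaseBdev3 BaseBdev4; do
        info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        [[ $(jq .block_size <<< "$info") == 512 ]]     # every base bdev uses 512-byte blocks
        [[ $(jq .md_size <<< "$info") == null ]]       # no metadata region
        [[ $(jq .md_interleave <<< "$info") == null ]]
        [[ $(jq .dif_type <<< "$info") == null ]]      # no DIF protection
    done
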
02:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:06.426 02:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:06.426 "name": "BaseBdev3", 00:22:06.426 "aliases": [ 00:22:06.426 "ee5d2164-1282-4e03-87cb-281151160e0d" 00:22:06.426 ], 00:22:06.426 "product_name": "Malloc disk", 00:22:06.426 "block_size": 512, 00:22:06.426 "num_blocks": 65536, 00:22:06.426 "uuid": "ee5d2164-1282-4e03-87cb-281151160e0d", 00:22:06.426 "assigned_rate_limits": { 00:22:06.426 "rw_ios_per_sec": 0, 00:22:06.426 "rw_mbytes_per_sec": 0, 00:22:06.426 "r_mbytes_per_sec": 0, 00:22:06.426 "w_mbytes_per_sec": 0 00:22:06.426 }, 00:22:06.426 "claimed": true, 00:22:06.426 "claim_type": "exclusive_write", 00:22:06.426 "zoned": false, 00:22:06.426 "supported_io_types": { 00:22:06.426 "read": true, 00:22:06.426 "write": true, 00:22:06.426 "unmap": true, 00:22:06.426 "flush": true, 00:22:06.426 "reset": true, 00:22:06.426 "nvme_admin": false, 00:22:06.426 "nvme_io": false, 00:22:06.426 "nvme_io_md": false, 00:22:06.426 "write_zeroes": true, 00:22:06.426 "zcopy": true, 00:22:06.426 "get_zone_info": false, 00:22:06.426 "zone_management": false, 00:22:06.426 "zone_append": false, 00:22:06.426 "compare": false, 00:22:06.426 "compare_and_write": false, 00:22:06.426 "abort": true, 00:22:06.426 "seek_hole": false, 00:22:06.426 "seek_data": false, 00:22:06.426 "copy": true, 00:22:06.426 "nvme_iov_md": false 00:22:06.426 }, 00:22:06.426 "memory_domains": [ 00:22:06.426 { 00:22:06.426 "dma_device_id": "system", 00:22:06.426 "dma_device_type": 1 00:22:06.426 }, 00:22:06.426 { 00:22:06.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:06.426 "dma_device_type": 2 00:22:06.426 } 00:22:06.426 ], 00:22:06.426 "driver_specific": {} 00:22:06.426 }' 00:22:06.426 02:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:06.426 02:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:06.426 02:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:06.426 02:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:06.685 02:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:06.685 02:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:06.685 02:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:06.685 02:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:06.685 02:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:06.685 02:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:06.945 02:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:06.945 02:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:06.945 02:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:06.945 02:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:06.945 02:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:07.513 02:27:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:07.513 "name": "BaseBdev4", 00:22:07.513 "aliases": [ 00:22:07.513 "f920209a-1bcf-4c07-bb3b-34cfabf76579" 00:22:07.513 ], 00:22:07.513 "product_name": "Malloc disk", 00:22:07.513 "block_size": 512, 00:22:07.513 "num_blocks": 65536, 00:22:07.513 "uuid": "f920209a-1bcf-4c07-bb3b-34cfabf76579", 00:22:07.513 "assigned_rate_limits": { 00:22:07.513 "rw_ios_per_sec": 0, 00:22:07.513 "rw_mbytes_per_sec": 0, 00:22:07.513 "r_mbytes_per_sec": 0, 00:22:07.513 "w_mbytes_per_sec": 0 00:22:07.513 }, 00:22:07.513 "claimed": true, 00:22:07.513 "claim_type": "exclusive_write", 00:22:07.513 "zoned": false, 00:22:07.513 "supported_io_types": { 00:22:07.513 "read": true, 00:22:07.513 "write": true, 00:22:07.513 "unmap": true, 00:22:07.513 "flush": true, 00:22:07.513 "reset": true, 00:22:07.513 "nvme_admin": false, 00:22:07.513 "nvme_io": false, 00:22:07.513 "nvme_io_md": false, 00:22:07.513 "write_zeroes": true, 00:22:07.513 "zcopy": true, 00:22:07.513 "get_zone_info": false, 00:22:07.513 "zone_management": false, 00:22:07.513 "zone_append": false, 00:22:07.513 "compare": false, 00:22:07.513 "compare_and_write": false, 00:22:07.513 "abort": true, 00:22:07.513 "seek_hole": false, 00:22:07.513 "seek_data": false, 00:22:07.513 "copy": true, 00:22:07.513 "nvme_iov_md": false 00:22:07.513 }, 00:22:07.513 "memory_domains": [ 00:22:07.513 { 00:22:07.513 "dma_device_id": "system", 00:22:07.513 "dma_device_type": 1 00:22:07.513 }, 00:22:07.513 { 00:22:07.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:07.513 "dma_device_type": 2 00:22:07.513 } 00:22:07.513 ], 00:22:07.513 "driver_specific": {} 00:22:07.513 }' 00:22:07.513 02:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:07.513 02:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:07.513 02:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:07.513 02:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:07.513 02:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:07.513 02:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:07.513 02:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:07.772 02:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:07.772 02:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:07.772 02:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:07.772 02:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:07.772 02:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:07.772 02:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:08.031 [2024-07-11 02:27:58.320873] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:08.031 [2024-07-11 02:27:58.320899] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:08.031 [2024-07-11 02:27:58.320954] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
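
Teardown follows: deleting the raid bdev flips its state from online to offline and releases the remaining base bdevs, after which the test kills the bdev_svc daemon it started (pid 1960919 in this run). As a sketch, with $svc_pid standing in for that pid:

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_delete Existed_Raid   # state: online -> offline, base bdevs freed
    kill $svc_pid && wait $svc_pid       # killprocess; works because bdev_svc is a child of the test shell
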
00:22:08.031 [2024-07-11 02:27:58.321014] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:08.031 [2024-07-11 02:27:58.321026] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1546ba0 name Existed_Raid, state offline 00:22:08.031 02:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1960919 00:22:08.031 02:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1960919 ']' 00:22:08.031 02:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 1960919 00:22:08.031 02:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:22:08.031 02:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:08.031 02:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1960919 00:22:08.031 02:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:08.031 02:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:08.031 02:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1960919' 00:22:08.031 killing process with pid 1960919 00:22:08.031 02:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 1960919 00:22:08.031 [2024-07-11 02:27:58.382509] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:08.031 02:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 1960919 00:22:08.031 [2024-07-11 02:27:58.424182] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:08.291 02:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:22:08.291 00:22:08.291 real 0m35.099s 00:22:08.291 user 1m4.680s 00:22:08.291 sys 0m6.207s 00:22:08.291 02:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:08.291 02:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.291 ************************************ 00:22:08.291 END TEST raid_state_function_test_sb 00:22:08.291 ************************************ 00:22:08.291 02:27:58 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:08.291 02:27:58 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:22:08.291 02:27:58 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:08.291 02:27:58 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:08.291 02:27:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:08.291 ************************************ 00:22:08.291 START TEST raid_superblock_test 00:22:08.291 ************************************ 00:22:08.291 02:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 4 00:22:08.291 02:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:22:08.291 02:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:22:08.291 02:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:22:08.291 02:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local 
base_bdevs_malloc 00:22:08.291 02:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:22:08.291 02:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:22:08.291 02:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:22:08.291 02:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:22:08.291 02:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:22:08.291 02:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:22:08.291 02:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:22:08.291 02:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:22:08.550 02:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:22:08.551 02:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:22:08.551 02:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:22:08.551 02:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:22:08.551 02:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1966040 00:22:08.551 02:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1966040 /var/tmp/spdk-raid.sock 00:22:08.551 02:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:08.551 02:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 1966040 ']' 00:22:08.551 02:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:08.551 02:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:08.551 02:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:08.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:08.551 02:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:08.551 02:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.551 [2024-07-11 02:27:58.776402] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
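
Each raid test is served by its own bdev_svc app listening on a dedicated RPC socket, which is why every rpc.py call in these traces carries -s /var/tmp/spdk-raid.sock. The launch visible above, sketched; waitforlisten is the polling helper the harness sources from autotest_common.sh:

    /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!                                        # 1966040 in this run
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock  # block until the socket accepts RPCs
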
00:22:08.551 [2024-07-11 02:27:58.776463] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1966040 ] 00:22:08.551 [2024-07-11 02:27:58.914057] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.551 [2024-07-11 02:27:58.966931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.811 [2024-07-11 02:27:59.039204] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:08.811 [2024-07-11 02:27:59.039233] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:09.070 02:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:09.071 02:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:22:09.071 02:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:22:09.071 02:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:09.071 02:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:22:09.071 02:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:22:09.071 02:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:09.071 02:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:09.071 02:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:09.071 02:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:09.071 02:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:22:09.071 malloc1 00:22:09.331 02:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:09.331 [2024-07-11 02:27:59.720899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:09.331 [2024-07-11 02:27:59.720945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:09.331 [2024-07-11 02:27:59.720965] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25f0de0 00:22:09.331 [2024-07-11 02:27:59.720978] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:09.331 [2024-07-11 02:27:59.722642] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:09.331 [2024-07-11 02:27:59.722671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:09.331 pt1 00:22:09.331 02:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:09.331 02:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:09.331 02:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:22:09.331 02:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:22:09.331 02:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # 
local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:09.331 02:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:09.331 02:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:09.331 02:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:09.331 02:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:22:09.590 malloc2 00:22:09.590 02:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:09.849 [2024-07-11 02:28:00.214922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:09.849 [2024-07-11 02:28:00.214970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:09.849 [2024-07-11 02:28:00.214988] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25e8380 00:22:09.849 [2024-07-11 02:28:00.215000] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:09.849 [2024-07-11 02:28:00.216552] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:09.849 [2024-07-11 02:28:00.216580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:09.849 pt2 00:22:09.849 02:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:09.849 02:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:09.849 02:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:22:09.849 02:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:22:09.849 02:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:09.849 02:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:09.849 02:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:09.849 02:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:09.849 02:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:22:10.108 malloc3 00:22:10.108 02:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:10.367 [2024-07-11 02:28:00.701877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:10.367 [2024-07-11 02:28:00.701921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:10.367 [2024-07-11 02:28:00.701940] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25eafb0 00:22:10.367 [2024-07-11 02:28:00.701957] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:10.367 [2024-07-11 02:28:00.703466] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:10.367 [2024-07-11 02:28:00.703495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:10.367 pt3 00:22:10.367 02:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:10.367 02:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:10.367 02:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:22:10.367 02:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:22:10.367 02:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:22:10.367 02:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:10.367 02:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:10.367 02:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:10.367 02:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:22:10.934 malloc4 00:22:10.934 02:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:11.192 [2024-07-11 02:28:01.501664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:11.192 [2024-07-11 02:28:01.501708] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:11.192 [2024-07-11 02:28:01.501729] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25ec760 00:22:11.192 [2024-07-11 02:28:01.501741] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:11.192 [2024-07-11 02:28:01.503250] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:11.192 [2024-07-11 02:28:01.503278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:11.192 pt4 00:22:11.192 02:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:11.192 02:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:11.192 02:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:22:11.452 [2024-07-11 02:28:01.746328] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:11.452 [2024-07-11 02:28:01.747605] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:11.452 [2024-07-11 02:28:01.747657] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:11.452 [2024-07-11 02:28:01.747699] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:11.452 [2024-07-11 02:28:01.747872] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x25ebcc0 00:22:11.452 [2024-07-11 02:28:01.747883] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:11.452 [2024-07-11 02:28:01.748073] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25edd00 00:22:11.452 [2024-07-11 02:28:01.748214] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25ebcc0 00:22:11.452 [2024-07-11 02:28:01.748224] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x25ebcc0 00:22:11.452 [2024-07-11 02:28:01.748318] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:11.452 02:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:11.452 02:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:11.452 02:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:11.452 02:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:11.452 02:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:11.452 02:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:11.452 02:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:11.452 02:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:11.452 02:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:11.452 02:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:11.452 02:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.452 02:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.020 02:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:12.020 "name": "raid_bdev1", 00:22:12.020 "uuid": "0cf6e75f-8902-40dc-b976-ed195789b433", 00:22:12.020 "strip_size_kb": 64, 00:22:12.020 "state": "online", 00:22:12.020 "raid_level": "raid0", 00:22:12.020 "superblock": true, 00:22:12.020 "num_base_bdevs": 4, 00:22:12.020 "num_base_bdevs_discovered": 4, 00:22:12.020 "num_base_bdevs_operational": 4, 00:22:12.020 "base_bdevs_list": [ 00:22:12.020 { 00:22:12.020 "name": "pt1", 00:22:12.020 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:12.020 "is_configured": true, 00:22:12.020 "data_offset": 2048, 00:22:12.020 "data_size": 63488 00:22:12.020 }, 00:22:12.020 { 00:22:12.020 "name": "pt2", 00:22:12.020 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:12.020 "is_configured": true, 00:22:12.020 "data_offset": 2048, 00:22:12.020 "data_size": 63488 00:22:12.020 }, 00:22:12.020 { 00:22:12.020 "name": "pt3", 00:22:12.020 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:12.020 "is_configured": true, 00:22:12.020 "data_offset": 2048, 00:22:12.020 "data_size": 63488 00:22:12.020 }, 00:22:12.020 { 00:22:12.020 "name": "pt4", 00:22:12.020 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:12.020 "is_configured": true, 00:22:12.020 "data_offset": 2048, 00:22:12.020 "data_size": 63488 00:22:12.020 } 00:22:12.020 ] 00:22:12.020 }' 00:22:12.020 02:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:12.020 02:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.957 02:28:03 bdev_raid.raid_superblock_test -- 
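
With the four malloc/passthru pairs in place, the array itself is assembled. The whole setup condenses to the loop below; the trace runs the four create pairs unrolled, and -s on bdev_raid_create requests the on-disk superblock this test exercises:

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        $rpc bdev_malloc_create 32 512 -b malloc$i
        $rpc bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
    done
    $rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
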
bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:22:12.957 02:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:12.957 02:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:12.957 02:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:12.957 02:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:12.957 02:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:12.957 02:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:12.957 02:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:13.216 [2024-07-11 02:28:03.382997] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:13.216 02:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:13.216 "name": "raid_bdev1", 00:22:13.216 "aliases": [ 00:22:13.216 "0cf6e75f-8902-40dc-b976-ed195789b433" 00:22:13.216 ], 00:22:13.216 "product_name": "Raid Volume", 00:22:13.216 "block_size": 512, 00:22:13.216 "num_blocks": 253952, 00:22:13.216 "uuid": "0cf6e75f-8902-40dc-b976-ed195789b433", 00:22:13.216 "assigned_rate_limits": { 00:22:13.216 "rw_ios_per_sec": 0, 00:22:13.216 "rw_mbytes_per_sec": 0, 00:22:13.216 "r_mbytes_per_sec": 0, 00:22:13.216 "w_mbytes_per_sec": 0 00:22:13.216 }, 00:22:13.216 "claimed": false, 00:22:13.216 "zoned": false, 00:22:13.216 "supported_io_types": { 00:22:13.216 "read": true, 00:22:13.216 "write": true, 00:22:13.216 "unmap": true, 00:22:13.216 "flush": true, 00:22:13.216 "reset": true, 00:22:13.216 "nvme_admin": false, 00:22:13.216 "nvme_io": false, 00:22:13.216 "nvme_io_md": false, 00:22:13.216 "write_zeroes": true, 00:22:13.216 "zcopy": false, 00:22:13.216 "get_zone_info": false, 00:22:13.216 "zone_management": false, 00:22:13.216 "zone_append": false, 00:22:13.216 "compare": false, 00:22:13.216 "compare_and_write": false, 00:22:13.216 "abort": false, 00:22:13.216 "seek_hole": false, 00:22:13.216 "seek_data": false, 00:22:13.216 "copy": false, 00:22:13.216 "nvme_iov_md": false 00:22:13.216 }, 00:22:13.216 "memory_domains": [ 00:22:13.216 { 00:22:13.216 "dma_device_id": "system", 00:22:13.216 "dma_device_type": 1 00:22:13.216 }, 00:22:13.216 { 00:22:13.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.216 "dma_device_type": 2 00:22:13.216 }, 00:22:13.216 { 00:22:13.216 "dma_device_id": "system", 00:22:13.216 "dma_device_type": 1 00:22:13.216 }, 00:22:13.216 { 00:22:13.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.216 "dma_device_type": 2 00:22:13.216 }, 00:22:13.216 { 00:22:13.216 "dma_device_id": "system", 00:22:13.216 "dma_device_type": 1 00:22:13.216 }, 00:22:13.216 { 00:22:13.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.216 "dma_device_type": 2 00:22:13.216 }, 00:22:13.216 { 00:22:13.216 "dma_device_id": "system", 00:22:13.216 "dma_device_type": 1 00:22:13.216 }, 00:22:13.216 { 00:22:13.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.216 "dma_device_type": 2 00:22:13.216 } 00:22:13.216 ], 00:22:13.216 "driver_specific": { 00:22:13.216 "raid": { 00:22:13.216 "uuid": "0cf6e75f-8902-40dc-b976-ed195789b433", 00:22:13.216 "strip_size_kb": 64, 00:22:13.216 "state": "online", 00:22:13.216 "raid_level": "raid0", 00:22:13.216 "superblock": 
true, 00:22:13.216 "num_base_bdevs": 4, 00:22:13.216 "num_base_bdevs_discovered": 4, 00:22:13.216 "num_base_bdevs_operational": 4, 00:22:13.216 "base_bdevs_list": [ 00:22:13.216 { 00:22:13.216 "name": "pt1", 00:22:13.216 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:13.216 "is_configured": true, 00:22:13.216 "data_offset": 2048, 00:22:13.216 "data_size": 63488 00:22:13.216 }, 00:22:13.216 { 00:22:13.216 "name": "pt2", 00:22:13.216 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:13.216 "is_configured": true, 00:22:13.216 "data_offset": 2048, 00:22:13.216 "data_size": 63488 00:22:13.216 }, 00:22:13.216 { 00:22:13.216 "name": "pt3", 00:22:13.216 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:13.216 "is_configured": true, 00:22:13.216 "data_offset": 2048, 00:22:13.216 "data_size": 63488 00:22:13.216 }, 00:22:13.216 { 00:22:13.216 "name": "pt4", 00:22:13.216 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:13.216 "is_configured": true, 00:22:13.216 "data_offset": 2048, 00:22:13.216 "data_size": 63488 00:22:13.216 } 00:22:13.216 ] 00:22:13.216 } 00:22:13.216 } 00:22:13.216 }' 00:22:13.216 02:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:13.216 02:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:13.216 pt2 00:22:13.216 pt3 00:22:13.216 pt4' 00:22:13.216 02:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:13.216 02:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:13.216 02:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:13.475 02:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:13.475 "name": "pt1", 00:22:13.475 "aliases": [ 00:22:13.475 "00000000-0000-0000-0000-000000000001" 00:22:13.475 ], 00:22:13.475 "product_name": "passthru", 00:22:13.475 "block_size": 512, 00:22:13.475 "num_blocks": 65536, 00:22:13.475 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:13.475 "assigned_rate_limits": { 00:22:13.475 "rw_ios_per_sec": 0, 00:22:13.475 "rw_mbytes_per_sec": 0, 00:22:13.475 "r_mbytes_per_sec": 0, 00:22:13.475 "w_mbytes_per_sec": 0 00:22:13.475 }, 00:22:13.475 "claimed": true, 00:22:13.475 "claim_type": "exclusive_write", 00:22:13.475 "zoned": false, 00:22:13.475 "supported_io_types": { 00:22:13.475 "read": true, 00:22:13.475 "write": true, 00:22:13.475 "unmap": true, 00:22:13.475 "flush": true, 00:22:13.475 "reset": true, 00:22:13.475 "nvme_admin": false, 00:22:13.475 "nvme_io": false, 00:22:13.475 "nvme_io_md": false, 00:22:13.475 "write_zeroes": true, 00:22:13.475 "zcopy": true, 00:22:13.475 "get_zone_info": false, 00:22:13.475 "zone_management": false, 00:22:13.475 "zone_append": false, 00:22:13.475 "compare": false, 00:22:13.475 "compare_and_write": false, 00:22:13.475 "abort": true, 00:22:13.475 "seek_hole": false, 00:22:13.475 "seek_data": false, 00:22:13.475 "copy": true, 00:22:13.475 "nvme_iov_md": false 00:22:13.475 }, 00:22:13.476 "memory_domains": [ 00:22:13.476 { 00:22:13.476 "dma_device_id": "system", 00:22:13.476 "dma_device_type": 1 00:22:13.476 }, 00:22:13.476 { 00:22:13.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.476 "dma_device_type": 2 00:22:13.476 } 00:22:13.476 ], 00:22:13.476 "driver_specific": { 00:22:13.476 "passthru": 
{ 00:22:13.476 "name": "pt1", 00:22:13.476 "base_bdev_name": "malloc1" 00:22:13.476 } 00:22:13.476 } 00:22:13.476 }' 00:22:13.476 02:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:13.476 02:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:13.476 02:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:13.476 02:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:13.476 02:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:13.476 02:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:13.476 02:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:13.734 02:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:13.734 02:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:13.734 02:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:13.734 02:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:13.734 02:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:13.734 02:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:13.734 02:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:13.734 02:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:13.992 02:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:13.992 "name": "pt2", 00:22:13.992 "aliases": [ 00:22:13.992 "00000000-0000-0000-0000-000000000002" 00:22:13.992 ], 00:22:13.992 "product_name": "passthru", 00:22:13.992 "block_size": 512, 00:22:13.992 "num_blocks": 65536, 00:22:13.992 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:13.992 "assigned_rate_limits": { 00:22:13.992 "rw_ios_per_sec": 0, 00:22:13.992 "rw_mbytes_per_sec": 0, 00:22:13.992 "r_mbytes_per_sec": 0, 00:22:13.992 "w_mbytes_per_sec": 0 00:22:13.992 }, 00:22:13.992 "claimed": true, 00:22:13.993 "claim_type": "exclusive_write", 00:22:13.993 "zoned": false, 00:22:13.993 "supported_io_types": { 00:22:13.993 "read": true, 00:22:13.993 "write": true, 00:22:13.993 "unmap": true, 00:22:13.993 "flush": true, 00:22:13.993 "reset": true, 00:22:13.993 "nvme_admin": false, 00:22:13.993 "nvme_io": false, 00:22:13.993 "nvme_io_md": false, 00:22:13.993 "write_zeroes": true, 00:22:13.993 "zcopy": true, 00:22:13.993 "get_zone_info": false, 00:22:13.993 "zone_management": false, 00:22:13.993 "zone_append": false, 00:22:13.993 "compare": false, 00:22:13.993 "compare_and_write": false, 00:22:13.993 "abort": true, 00:22:13.993 "seek_hole": false, 00:22:13.993 "seek_data": false, 00:22:13.993 "copy": true, 00:22:13.993 "nvme_iov_md": false 00:22:13.993 }, 00:22:13.993 "memory_domains": [ 00:22:13.993 { 00:22:13.993 "dma_device_id": "system", 00:22:13.993 "dma_device_type": 1 00:22:13.993 }, 00:22:13.993 { 00:22:13.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.993 "dma_device_type": 2 00:22:13.993 } 00:22:13.993 ], 00:22:13.993 "driver_specific": { 00:22:13.993 "passthru": { 00:22:13.993 "name": "pt2", 00:22:13.993 "base_bdev_name": "malloc2" 00:22:13.993 } 00:22:13.993 } 00:22:13.993 }' 00:22:13.993 
02:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:13.993 02:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:13.993 02:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:13.993 02:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:13.993 02:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:14.251 02:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:14.251 02:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:14.251 02:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:14.251 02:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:14.251 02:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:14.251 02:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:14.251 02:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:14.251 02:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:14.251 02:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:14.251 02:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:14.510 02:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:14.510 "name": "pt3", 00:22:14.510 "aliases": [ 00:22:14.510 "00000000-0000-0000-0000-000000000003" 00:22:14.510 ], 00:22:14.510 "product_name": "passthru", 00:22:14.510 "block_size": 512, 00:22:14.510 "num_blocks": 65536, 00:22:14.510 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:14.510 "assigned_rate_limits": { 00:22:14.510 "rw_ios_per_sec": 0, 00:22:14.510 "rw_mbytes_per_sec": 0, 00:22:14.510 "r_mbytes_per_sec": 0, 00:22:14.510 "w_mbytes_per_sec": 0 00:22:14.510 }, 00:22:14.510 "claimed": true, 00:22:14.510 "claim_type": "exclusive_write", 00:22:14.510 "zoned": false, 00:22:14.510 "supported_io_types": { 00:22:14.510 "read": true, 00:22:14.510 "write": true, 00:22:14.510 "unmap": true, 00:22:14.510 "flush": true, 00:22:14.510 "reset": true, 00:22:14.510 "nvme_admin": false, 00:22:14.510 "nvme_io": false, 00:22:14.510 "nvme_io_md": false, 00:22:14.510 "write_zeroes": true, 00:22:14.510 "zcopy": true, 00:22:14.510 "get_zone_info": false, 00:22:14.510 "zone_management": false, 00:22:14.510 "zone_append": false, 00:22:14.510 "compare": false, 00:22:14.510 "compare_and_write": false, 00:22:14.510 "abort": true, 00:22:14.510 "seek_hole": false, 00:22:14.510 "seek_data": false, 00:22:14.510 "copy": true, 00:22:14.510 "nvme_iov_md": false 00:22:14.510 }, 00:22:14.510 "memory_domains": [ 00:22:14.510 { 00:22:14.510 "dma_device_id": "system", 00:22:14.510 "dma_device_type": 1 00:22:14.510 }, 00:22:14.510 { 00:22:14.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.510 "dma_device_type": 2 00:22:14.510 } 00:22:14.510 ], 00:22:14.510 "driver_specific": { 00:22:14.510 "passthru": { 00:22:14.510 "name": "pt3", 00:22:14.510 "base_bdev_name": "malloc3" 00:22:14.510 } 00:22:14.510 } 00:22:14.510 }' 00:22:14.510 02:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:14.510 02:28:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:14.768 02:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:14.768 02:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:14.768 02:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:14.768 02:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:14.768 02:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:14.768 02:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:14.768 02:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:14.768 02:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:14.768 02:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:15.027 02:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:15.027 02:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:15.027 02:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:22:15.027 02:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:15.286 02:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:15.286 "name": "pt4", 00:22:15.286 "aliases": [ 00:22:15.286 "00000000-0000-0000-0000-000000000004" 00:22:15.286 ], 00:22:15.286 "product_name": "passthru", 00:22:15.286 "block_size": 512, 00:22:15.286 "num_blocks": 65536, 00:22:15.286 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:15.286 "assigned_rate_limits": { 00:22:15.286 "rw_ios_per_sec": 0, 00:22:15.286 "rw_mbytes_per_sec": 0, 00:22:15.286 "r_mbytes_per_sec": 0, 00:22:15.286 "w_mbytes_per_sec": 0 00:22:15.286 }, 00:22:15.286 "claimed": true, 00:22:15.286 "claim_type": "exclusive_write", 00:22:15.286 "zoned": false, 00:22:15.286 "supported_io_types": { 00:22:15.286 "read": true, 00:22:15.286 "write": true, 00:22:15.286 "unmap": true, 00:22:15.286 "flush": true, 00:22:15.286 "reset": true, 00:22:15.286 "nvme_admin": false, 00:22:15.286 "nvme_io": false, 00:22:15.286 "nvme_io_md": false, 00:22:15.286 "write_zeroes": true, 00:22:15.286 "zcopy": true, 00:22:15.286 "get_zone_info": false, 00:22:15.286 "zone_management": false, 00:22:15.286 "zone_append": false, 00:22:15.286 "compare": false, 00:22:15.286 "compare_and_write": false, 00:22:15.286 "abort": true, 00:22:15.286 "seek_hole": false, 00:22:15.286 "seek_data": false, 00:22:15.286 "copy": true, 00:22:15.286 "nvme_iov_md": false 00:22:15.286 }, 00:22:15.286 "memory_domains": [ 00:22:15.286 { 00:22:15.286 "dma_device_id": "system", 00:22:15.286 "dma_device_type": 1 00:22:15.286 }, 00:22:15.286 { 00:22:15.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.286 "dma_device_type": 2 00:22:15.286 } 00:22:15.286 ], 00:22:15.286 "driver_specific": { 00:22:15.286 "passthru": { 00:22:15.286 "name": "pt4", 00:22:15.286 "base_bdev_name": "malloc4" 00:22:15.286 } 00:22:15.286 } 00:22:15.286 }' 00:22:15.286 02:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:15.286 02:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:15.286 02:28:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:15.286 02:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:15.286 02:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:15.286 02:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:15.286 02:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:15.286 02:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:15.546 02:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:15.546 02:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:15.546 02:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:15.546 02:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:15.546 02:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:15.546 02:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:22:15.805 [2024-07-11 02:28:06.062120] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:15.805 02:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=0cf6e75f-8902-40dc-b976-ed195789b433 00:22:15.805 02:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 0cf6e75f-8902-40dc-b976-ed195789b433 ']' 00:22:15.805 02:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:16.064 [2024-07-11 02:28:06.306442] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:16.064 [2024-07-11 02:28:06.306463] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:16.064 [2024-07-11 02:28:06.306512] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:16.064 [2024-07-11 02:28:06.306572] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:16.064 [2024-07-11 02:28:06.306584] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25ebcc0 name raid_bdev1, state offline 00:22:16.064 02:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.064 02:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:22:16.322 02:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:22:16.322 02:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:22:16.322 02:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:16.322 02:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:16.591 02:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:16.591 02:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:16.852 02:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:16.853 02:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:17.111 02:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:17.111 02:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:17.111 02:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:17.111 02:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:17.370 02:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:22:17.370 02:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:17.370 02:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:22:17.370 02:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:17.370 02:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:17.370 02:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:17.370 02:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:17.370 02:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:17.370 02:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:17.370 02:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:17.370 02:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:17.370 02:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:22:17.370 02:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:17.671 [2024-07-11 02:28:08.002968] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:17.671 [2024-07-11 02:28:08.004318] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:17.671 [2024-07-11 02:28:08.004360] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 
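At this point the harness has deleted raid_bdev1 and its passthru bdevs, but each malloc bdev still carries the superblock written for the deleted array. The NOT wrapper around bdev_raid_create therefore expects the RPC to be rejected: the "Superblock of a different raid bdev found" errors and the -17 (File exists) JSON-RPC response recorded below are the passing outcome. An illustrative standalone form of the rejected call (method, flags, and error code are verbatim from the log; the bare invocation itself is a sketch):

    # The harness runs this under NOT, so a non-zero exit is the expected
    # (passing) result; -17 maps to EEXIST in the JSON-RPC error below.
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
        -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 \
        && exit 1   # creation unexpectedly succeeded
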
00:22:17.671 [2024-07-11 02:28:08.004395] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:22:17.671 [2024-07-11 02:28:08.004438] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:17.671 [2024-07-11 02:28:08.004476] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:17.671 [2024-07-11 02:28:08.004498] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:22:17.671 [2024-07-11 02:28:08.004519] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:22:17.671 [2024-07-11 02:28:08.004537] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:17.671 [2024-07-11 02:28:08.004547] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2440570 name raid_bdev1, state configuring 00:22:17.671 request: 00:22:17.671 { 00:22:17.671 "name": "raid_bdev1", 00:22:17.671 "raid_level": "raid0", 00:22:17.671 "base_bdevs": [ 00:22:17.671 "malloc1", 00:22:17.671 "malloc2", 00:22:17.671 "malloc3", 00:22:17.671 "malloc4" 00:22:17.671 ], 00:22:17.671 "strip_size_kb": 64, 00:22:17.671 "superblock": false, 00:22:17.671 "method": "bdev_raid_create", 00:22:17.671 "req_id": 1 00:22:17.671 } 00:22:17.671 Got JSON-RPC error response 00:22:17.671 response: 00:22:17.671 { 00:22:17.671 "code": -17, 00:22:17.671 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:17.671 } 00:22:17.671 02:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:22:17.671 02:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:17.671 02:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:17.671 02:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:17.671 02:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.671 02:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:22:17.931 02:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:22:17.932 02:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:22:17.932 02:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:18.190 [2024-07-11 02:28:08.500215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:18.190 [2024-07-11 02:28:08.500262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:18.190 [2024-07-11 02:28:08.500280] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25ef6e0 00:22:18.190 [2024-07-11 02:28:08.500292] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:18.190 [2024-07-11 02:28:08.501892] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:18.190 [2024-07-11 02:28:08.501920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:18.190 [2024-07-11 
02:28:08.501987] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:18.190 [2024-07-11 02:28:08.502013] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:18.190 pt1 00:22:18.190 02:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:22:18.190 02:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:18.190 02:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:18.190 02:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:18.190 02:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:18.190 02:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:18.190 02:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:18.190 02:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:18.190 02:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:18.190 02:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:18.190 02:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.190 02:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.448 02:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:18.448 "name": "raid_bdev1", 00:22:18.448 "uuid": "0cf6e75f-8902-40dc-b976-ed195789b433", 00:22:18.448 "strip_size_kb": 64, 00:22:18.448 "state": "configuring", 00:22:18.448 "raid_level": "raid0", 00:22:18.448 "superblock": true, 00:22:18.448 "num_base_bdevs": 4, 00:22:18.448 "num_base_bdevs_discovered": 1, 00:22:18.448 "num_base_bdevs_operational": 4, 00:22:18.448 "base_bdevs_list": [ 00:22:18.448 { 00:22:18.448 "name": "pt1", 00:22:18.449 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:18.449 "is_configured": true, 00:22:18.449 "data_offset": 2048, 00:22:18.449 "data_size": 63488 00:22:18.449 }, 00:22:18.449 { 00:22:18.449 "name": null, 00:22:18.449 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:18.449 "is_configured": false, 00:22:18.449 "data_offset": 2048, 00:22:18.449 "data_size": 63488 00:22:18.449 }, 00:22:18.449 { 00:22:18.449 "name": null, 00:22:18.449 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:18.449 "is_configured": false, 00:22:18.449 "data_offset": 2048, 00:22:18.449 "data_size": 63488 00:22:18.449 }, 00:22:18.449 { 00:22:18.449 "name": null, 00:22:18.449 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:18.449 "is_configured": false, 00:22:18.449 "data_offset": 2048, 00:22:18.449 "data_size": 63488 00:22:18.449 } 00:22:18.449 ] 00:22:18.449 }' 00:22:18.449 02:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:18.449 02:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.015 02:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:22:19.015 02:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:19.273 [2024-07-11 02:28:09.623199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:19.273 [2024-07-11 02:28:09.623251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:19.273 [2024-07-11 02:28:09.623271] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25e9cb0 00:22:19.273 [2024-07-11 02:28:09.623283] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:19.273 [2024-07-11 02:28:09.623608] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:19.273 [2024-07-11 02:28:09.623625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:19.273 [2024-07-11 02:28:09.623687] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:19.273 [2024-07-11 02:28:09.623705] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:19.273 pt2 00:22:19.273 02:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:19.531 [2024-07-11 02:28:09.879902] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:19.531 02:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:22:19.531 02:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:19.531 02:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:19.531 02:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:19.531 02:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:19.531 02:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:19.531 02:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:19.531 02:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:19.531 02:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:19.531 02:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:19.531 02:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.531 02:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.790 02:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:19.790 "name": "raid_bdev1", 00:22:19.790 "uuid": "0cf6e75f-8902-40dc-b976-ed195789b433", 00:22:19.790 "strip_size_kb": 64, 00:22:19.790 "state": "configuring", 00:22:19.790 "raid_level": "raid0", 00:22:19.790 "superblock": true, 00:22:19.790 "num_base_bdevs": 4, 00:22:19.790 "num_base_bdevs_discovered": 1, 00:22:19.790 "num_base_bdevs_operational": 4, 00:22:19.790 "base_bdevs_list": [ 00:22:19.790 { 00:22:19.790 "name": "pt1", 00:22:19.790 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:19.790 "is_configured": true, 00:22:19.790 "data_offset": 2048, 00:22:19.790 "data_size": 63488 00:22:19.790 }, 00:22:19.790 { 
00:22:19.790 "name": null, 00:22:19.790 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:19.790 "is_configured": false, 00:22:19.790 "data_offset": 2048, 00:22:19.790 "data_size": 63488 00:22:19.790 }, 00:22:19.790 { 00:22:19.790 "name": null, 00:22:19.790 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:19.790 "is_configured": false, 00:22:19.790 "data_offset": 2048, 00:22:19.790 "data_size": 63488 00:22:19.790 }, 00:22:19.790 { 00:22:19.790 "name": null, 00:22:19.790 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:19.790 "is_configured": false, 00:22:19.790 "data_offset": 2048, 00:22:19.790 "data_size": 63488 00:22:19.790 } 00:22:19.790 ] 00:22:19.790 }' 00:22:19.790 02:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:19.790 02:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.358 02:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:22:20.358 02:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:20.358 02:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:20.617 [2024-07-11 02:28:10.994850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:20.617 [2024-07-11 02:28:10.994898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.617 [2024-07-11 02:28:10.994917] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25f1120 00:22:20.617 [2024-07-11 02:28:10.994930] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.617 [2024-07-11 02:28:10.995254] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.617 [2024-07-11 02:28:10.995272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:20.617 [2024-07-11 02:28:10.995331] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:20.617 [2024-07-11 02:28:10.995349] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:20.617 pt2 00:22:20.617 02:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:20.617 02:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:20.617 02:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:20.877 [2024-07-11 02:28:11.255526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:20.877 [2024-07-11 02:28:11.255568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.877 [2024-07-11 02:28:11.255585] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25ee760 00:22:20.877 [2024-07-11 02:28:11.255597] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.877 [2024-07-11 02:28:11.255933] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.877 [2024-07-11 02:28:11.255951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:20.877 [2024-07-11 02:28:11.256008] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:20.877 [2024-07-11 02:28:11.256026] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:20.877 pt3 00:22:20.877 02:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:20.877 02:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:20.877 02:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:21.137 [2024-07-11 02:28:11.516224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:21.137 [2024-07-11 02:28:11.516262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.137 [2024-07-11 02:28:11.516277] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25efce0 00:22:21.137 [2024-07-11 02:28:11.516289] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.137 [2024-07-11 02:28:11.516578] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.137 [2024-07-11 02:28:11.516595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:21.137 [2024-07-11 02:28:11.516648] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:21.137 [2024-07-11 02:28:11.516666] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:21.137 [2024-07-11 02:28:11.516791] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x25f0140 00:22:21.137 [2024-07-11 02:28:11.516802] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:21.137 [2024-07-11 02:28:11.516969] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2440460 00:22:21.137 [2024-07-11 02:28:11.517095] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25f0140 00:22:21.137 [2024-07-11 02:28:11.517105] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x25f0140 00:22:21.137 [2024-07-11 02:28:11.517204] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:21.137 pt4 00:22:21.137 02:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:21.137 02:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:21.137 02:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:21.137 02:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:21.137 02:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:21.137 02:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:21.137 02:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:21.137 02:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:21.137 02:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:21.137 02:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:21.137 02:28:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:21.137 02:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:21.137 02:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.137 02:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.396 02:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:21.396 "name": "raid_bdev1", 00:22:21.396 "uuid": "0cf6e75f-8902-40dc-b976-ed195789b433", 00:22:21.396 "strip_size_kb": 64, 00:22:21.396 "state": "online", 00:22:21.396 "raid_level": "raid0", 00:22:21.396 "superblock": true, 00:22:21.396 "num_base_bdevs": 4, 00:22:21.396 "num_base_bdevs_discovered": 4, 00:22:21.396 "num_base_bdevs_operational": 4, 00:22:21.396 "base_bdevs_list": [ 00:22:21.396 { 00:22:21.396 "name": "pt1", 00:22:21.396 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:21.396 "is_configured": true, 00:22:21.397 "data_offset": 2048, 00:22:21.397 "data_size": 63488 00:22:21.397 }, 00:22:21.397 { 00:22:21.397 "name": "pt2", 00:22:21.397 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:21.397 "is_configured": true, 00:22:21.397 "data_offset": 2048, 00:22:21.397 "data_size": 63488 00:22:21.397 }, 00:22:21.397 { 00:22:21.397 "name": "pt3", 00:22:21.397 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:21.397 "is_configured": true, 00:22:21.397 "data_offset": 2048, 00:22:21.397 "data_size": 63488 00:22:21.397 }, 00:22:21.397 { 00:22:21.397 "name": "pt4", 00:22:21.397 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:21.397 "is_configured": true, 00:22:21.397 "data_offset": 2048, 00:22:21.397 "data_size": 63488 00:22:21.397 } 00:22:21.397 ] 00:22:21.397 }' 00:22:21.397 02:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:21.397 02:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.329 02:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:22:22.329 02:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:22.329 02:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:22.329 02:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:22.329 02:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:22.329 02:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:22.329 02:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:22.329 02:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:22.329 [2024-07-11 02:28:12.679631] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:22.329 02:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:22.329 "name": "raid_bdev1", 00:22:22.329 "aliases": [ 00:22:22.329 "0cf6e75f-8902-40dc-b976-ed195789b433" 00:22:22.329 ], 00:22:22.329 "product_name": "Raid Volume", 00:22:22.329 "block_size": 512, 00:22:22.329 "num_blocks": 253952, 00:22:22.329 "uuid": 
"0cf6e75f-8902-40dc-b976-ed195789b433", 00:22:22.329 "assigned_rate_limits": { 00:22:22.329 "rw_ios_per_sec": 0, 00:22:22.329 "rw_mbytes_per_sec": 0, 00:22:22.329 "r_mbytes_per_sec": 0, 00:22:22.329 "w_mbytes_per_sec": 0 00:22:22.329 }, 00:22:22.329 "claimed": false, 00:22:22.329 "zoned": false, 00:22:22.329 "supported_io_types": { 00:22:22.329 "read": true, 00:22:22.329 "write": true, 00:22:22.329 "unmap": true, 00:22:22.329 "flush": true, 00:22:22.329 "reset": true, 00:22:22.329 "nvme_admin": false, 00:22:22.329 "nvme_io": false, 00:22:22.329 "nvme_io_md": false, 00:22:22.329 "write_zeroes": true, 00:22:22.329 "zcopy": false, 00:22:22.329 "get_zone_info": false, 00:22:22.329 "zone_management": false, 00:22:22.329 "zone_append": false, 00:22:22.329 "compare": false, 00:22:22.329 "compare_and_write": false, 00:22:22.329 "abort": false, 00:22:22.329 "seek_hole": false, 00:22:22.329 "seek_data": false, 00:22:22.329 "copy": false, 00:22:22.329 "nvme_iov_md": false 00:22:22.329 }, 00:22:22.329 "memory_domains": [ 00:22:22.329 { 00:22:22.329 "dma_device_id": "system", 00:22:22.329 "dma_device_type": 1 00:22:22.329 }, 00:22:22.329 { 00:22:22.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:22.329 "dma_device_type": 2 00:22:22.329 }, 00:22:22.329 { 00:22:22.329 "dma_device_id": "system", 00:22:22.329 "dma_device_type": 1 00:22:22.329 }, 00:22:22.329 { 00:22:22.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:22.329 "dma_device_type": 2 00:22:22.329 }, 00:22:22.329 { 00:22:22.329 "dma_device_id": "system", 00:22:22.329 "dma_device_type": 1 00:22:22.329 }, 00:22:22.329 { 00:22:22.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:22.329 "dma_device_type": 2 00:22:22.329 }, 00:22:22.329 { 00:22:22.329 "dma_device_id": "system", 00:22:22.329 "dma_device_type": 1 00:22:22.329 }, 00:22:22.329 { 00:22:22.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:22.329 "dma_device_type": 2 00:22:22.329 } 00:22:22.329 ], 00:22:22.329 "driver_specific": { 00:22:22.329 "raid": { 00:22:22.329 "uuid": "0cf6e75f-8902-40dc-b976-ed195789b433", 00:22:22.329 "strip_size_kb": 64, 00:22:22.329 "state": "online", 00:22:22.329 "raid_level": "raid0", 00:22:22.329 "superblock": true, 00:22:22.329 "num_base_bdevs": 4, 00:22:22.329 "num_base_bdevs_discovered": 4, 00:22:22.329 "num_base_bdevs_operational": 4, 00:22:22.329 "base_bdevs_list": [ 00:22:22.329 { 00:22:22.329 "name": "pt1", 00:22:22.329 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:22.329 "is_configured": true, 00:22:22.329 "data_offset": 2048, 00:22:22.329 "data_size": 63488 00:22:22.329 }, 00:22:22.329 { 00:22:22.329 "name": "pt2", 00:22:22.329 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:22.329 "is_configured": true, 00:22:22.329 "data_offset": 2048, 00:22:22.329 "data_size": 63488 00:22:22.329 }, 00:22:22.329 { 00:22:22.329 "name": "pt3", 00:22:22.329 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:22.329 "is_configured": true, 00:22:22.329 "data_offset": 2048, 00:22:22.329 "data_size": 63488 00:22:22.329 }, 00:22:22.329 { 00:22:22.329 "name": "pt4", 00:22:22.329 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:22.329 "is_configured": true, 00:22:22.329 "data_offset": 2048, 00:22:22.329 "data_size": 63488 00:22:22.329 } 00:22:22.329 ] 00:22:22.329 } 00:22:22.329 } 00:22:22.329 }' 00:22:22.329 02:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:22.329 02:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # 
base_bdev_names='pt1 00:22:22.329 pt2 00:22:22.329 pt3 00:22:22.329 pt4' 00:22:22.329 02:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:22.329 02:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:22.587 02:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:22.587 02:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:22.587 "name": "pt1", 00:22:22.587 "aliases": [ 00:22:22.587 "00000000-0000-0000-0000-000000000001" 00:22:22.587 ], 00:22:22.587 "product_name": "passthru", 00:22:22.587 "block_size": 512, 00:22:22.587 "num_blocks": 65536, 00:22:22.587 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:22.587 "assigned_rate_limits": { 00:22:22.587 "rw_ios_per_sec": 0, 00:22:22.587 "rw_mbytes_per_sec": 0, 00:22:22.587 "r_mbytes_per_sec": 0, 00:22:22.587 "w_mbytes_per_sec": 0 00:22:22.587 }, 00:22:22.587 "claimed": true, 00:22:22.587 "claim_type": "exclusive_write", 00:22:22.587 "zoned": false, 00:22:22.587 "supported_io_types": { 00:22:22.587 "read": true, 00:22:22.587 "write": true, 00:22:22.587 "unmap": true, 00:22:22.587 "flush": true, 00:22:22.587 "reset": true, 00:22:22.587 "nvme_admin": false, 00:22:22.587 "nvme_io": false, 00:22:22.587 "nvme_io_md": false, 00:22:22.587 "write_zeroes": true, 00:22:22.587 "zcopy": true, 00:22:22.587 "get_zone_info": false, 00:22:22.587 "zone_management": false, 00:22:22.587 "zone_append": false, 00:22:22.587 "compare": false, 00:22:22.587 "compare_and_write": false, 00:22:22.587 "abort": true, 00:22:22.587 "seek_hole": false, 00:22:22.587 "seek_data": false, 00:22:22.587 "copy": true, 00:22:22.587 "nvme_iov_md": false 00:22:22.587 }, 00:22:22.587 "memory_domains": [ 00:22:22.587 { 00:22:22.587 "dma_device_id": "system", 00:22:22.587 "dma_device_type": 1 00:22:22.587 }, 00:22:22.587 { 00:22:22.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:22.587 "dma_device_type": 2 00:22:22.587 } 00:22:22.587 ], 00:22:22.587 "driver_specific": { 00:22:22.587 "passthru": { 00:22:22.587 "name": "pt1", 00:22:22.587 "base_bdev_name": "malloc1" 00:22:22.587 } 00:22:22.587 } 00:22:22.587 }' 00:22:22.587 02:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:22.587 02:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:22.844 02:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:22.844 02:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:22.844 02:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:22.845 02:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:22.845 02:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:22.845 02:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:22.845 02:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:22.845 02:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:22.845 02:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:23.101 02:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:23.101 02:28:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:23.101 02:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:23.101 02:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:23.365 02:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:23.365 "name": "pt2", 00:22:23.365 "aliases": [ 00:22:23.365 "00000000-0000-0000-0000-000000000002" 00:22:23.365 ], 00:22:23.365 "product_name": "passthru", 00:22:23.365 "block_size": 512, 00:22:23.365 "num_blocks": 65536, 00:22:23.365 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:23.365 "assigned_rate_limits": { 00:22:23.365 "rw_ios_per_sec": 0, 00:22:23.365 "rw_mbytes_per_sec": 0, 00:22:23.365 "r_mbytes_per_sec": 0, 00:22:23.365 "w_mbytes_per_sec": 0 00:22:23.365 }, 00:22:23.365 "claimed": true, 00:22:23.365 "claim_type": "exclusive_write", 00:22:23.365 "zoned": false, 00:22:23.365 "supported_io_types": { 00:22:23.365 "read": true, 00:22:23.365 "write": true, 00:22:23.365 "unmap": true, 00:22:23.365 "flush": true, 00:22:23.365 "reset": true, 00:22:23.366 "nvme_admin": false, 00:22:23.366 "nvme_io": false, 00:22:23.366 "nvme_io_md": false, 00:22:23.366 "write_zeroes": true, 00:22:23.366 "zcopy": true, 00:22:23.366 "get_zone_info": false, 00:22:23.366 "zone_management": false, 00:22:23.366 "zone_append": false, 00:22:23.366 "compare": false, 00:22:23.366 "compare_and_write": false, 00:22:23.366 "abort": true, 00:22:23.366 "seek_hole": false, 00:22:23.366 "seek_data": false, 00:22:23.366 "copy": true, 00:22:23.366 "nvme_iov_md": false 00:22:23.366 }, 00:22:23.366 "memory_domains": [ 00:22:23.366 { 00:22:23.366 "dma_device_id": "system", 00:22:23.366 "dma_device_type": 1 00:22:23.366 }, 00:22:23.366 { 00:22:23.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:23.366 "dma_device_type": 2 00:22:23.366 } 00:22:23.366 ], 00:22:23.366 "driver_specific": { 00:22:23.366 "passthru": { 00:22:23.366 "name": "pt2", 00:22:23.366 "base_bdev_name": "malloc2" 00:22:23.366 } 00:22:23.366 } 00:22:23.366 }' 00:22:23.366 02:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:23.366 02:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:23.366 02:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:23.366 02:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:23.366 02:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:23.366 02:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:23.366 02:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:23.625 02:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:23.625 02:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:23.625 02:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:23.625 02:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:23.625 02:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:23.625 02:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:23.625 02:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:23.625 02:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:23.883 02:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:23.883 "name": "pt3", 00:22:23.883 "aliases": [ 00:22:23.883 "00000000-0000-0000-0000-000000000003" 00:22:23.883 ], 00:22:23.883 "product_name": "passthru", 00:22:23.883 "block_size": 512, 00:22:23.883 "num_blocks": 65536, 00:22:23.883 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:23.883 "assigned_rate_limits": { 00:22:23.883 "rw_ios_per_sec": 0, 00:22:23.883 "rw_mbytes_per_sec": 0, 00:22:23.883 "r_mbytes_per_sec": 0, 00:22:23.883 "w_mbytes_per_sec": 0 00:22:23.883 }, 00:22:23.883 "claimed": true, 00:22:23.883 "claim_type": "exclusive_write", 00:22:23.883 "zoned": false, 00:22:23.883 "supported_io_types": { 00:22:23.883 "read": true, 00:22:23.883 "write": true, 00:22:23.883 "unmap": true, 00:22:23.883 "flush": true, 00:22:23.883 "reset": true, 00:22:23.883 "nvme_admin": false, 00:22:23.883 "nvme_io": false, 00:22:23.883 "nvme_io_md": false, 00:22:23.883 "write_zeroes": true, 00:22:23.883 "zcopy": true, 00:22:23.883 "get_zone_info": false, 00:22:23.883 "zone_management": false, 00:22:23.883 "zone_append": false, 00:22:23.883 "compare": false, 00:22:23.883 "compare_and_write": false, 00:22:23.883 "abort": true, 00:22:23.883 "seek_hole": false, 00:22:23.883 "seek_data": false, 00:22:23.883 "copy": true, 00:22:23.883 "nvme_iov_md": false 00:22:23.883 }, 00:22:23.883 "memory_domains": [ 00:22:23.883 { 00:22:23.883 "dma_device_id": "system", 00:22:23.883 "dma_device_type": 1 00:22:23.883 }, 00:22:23.883 { 00:22:23.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:23.883 "dma_device_type": 2 00:22:23.883 } 00:22:23.883 ], 00:22:23.883 "driver_specific": { 00:22:23.883 "passthru": { 00:22:23.883 "name": "pt3", 00:22:23.883 "base_bdev_name": "malloc3" 00:22:23.883 } 00:22:23.883 } 00:22:23.883 }' 00:22:23.883 02:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:23.883 02:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:23.883 02:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:23.883 02:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:24.182 02:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:24.182 02:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:24.182 02:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:24.182 02:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:24.182 02:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:24.182 02:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:24.183 02:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:24.183 02:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:24.183 02:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:24.183 02:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:22:24.183 
02:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:24.465 02:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:24.465 "name": "pt4", 00:22:24.465 "aliases": [ 00:22:24.465 "00000000-0000-0000-0000-000000000004" 00:22:24.465 ], 00:22:24.465 "product_name": "passthru", 00:22:24.465 "block_size": 512, 00:22:24.465 "num_blocks": 65536, 00:22:24.465 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:24.465 "assigned_rate_limits": { 00:22:24.465 "rw_ios_per_sec": 0, 00:22:24.465 "rw_mbytes_per_sec": 0, 00:22:24.465 "r_mbytes_per_sec": 0, 00:22:24.465 "w_mbytes_per_sec": 0 00:22:24.465 }, 00:22:24.465 "claimed": true, 00:22:24.465 "claim_type": "exclusive_write", 00:22:24.465 "zoned": false, 00:22:24.465 "supported_io_types": { 00:22:24.465 "read": true, 00:22:24.465 "write": true, 00:22:24.465 "unmap": true, 00:22:24.465 "flush": true, 00:22:24.465 "reset": true, 00:22:24.465 "nvme_admin": false, 00:22:24.465 "nvme_io": false, 00:22:24.465 "nvme_io_md": false, 00:22:24.465 "write_zeroes": true, 00:22:24.465 "zcopy": true, 00:22:24.465 "get_zone_info": false, 00:22:24.465 "zone_management": false, 00:22:24.465 "zone_append": false, 00:22:24.465 "compare": false, 00:22:24.465 "compare_and_write": false, 00:22:24.465 "abort": true, 00:22:24.465 "seek_hole": false, 00:22:24.465 "seek_data": false, 00:22:24.465 "copy": true, 00:22:24.466 "nvme_iov_md": false 00:22:24.466 }, 00:22:24.466 "memory_domains": [ 00:22:24.466 { 00:22:24.466 "dma_device_id": "system", 00:22:24.466 "dma_device_type": 1 00:22:24.466 }, 00:22:24.466 { 00:22:24.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:24.466 "dma_device_type": 2 00:22:24.466 } 00:22:24.466 ], 00:22:24.466 "driver_specific": { 00:22:24.466 "passthru": { 00:22:24.466 "name": "pt4", 00:22:24.466 "base_bdev_name": "malloc4" 00:22:24.466 } 00:22:24.466 } 00:22:24.466 }' 00:22:24.466 02:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:24.466 02:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:24.466 02:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:24.466 02:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:24.726 02:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:24.726 02:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:24.726 02:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:24.726 02:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:24.726 02:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:24.726 02:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:24.726 02:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:24.726 02:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:24.726 02:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:24.726 02:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:22:24.985 [2024-07-11 02:28:15.370820] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:24.985 02:28:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 0cf6e75f-8902-40dc-b976-ed195789b433 '!=' 0cf6e75f-8902-40dc-b976-ed195789b433 ']' 00:22:24.985 02:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:22:25.243 02:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:25.243 02:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:25.243 02:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1966040 00:22:25.243 02:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 1966040 ']' 00:22:25.243 02:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 1966040 00:22:25.243 02:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:22:25.243 02:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:25.243 02:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1966040 00:22:25.243 02:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:25.243 02:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:25.243 02:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1966040' 00:22:25.243 killing process with pid 1966040 00:22:25.243 02:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 1966040 00:22:25.243 [2024-07-11 02:28:15.462365] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:25.243 [2024-07-11 02:28:15.462423] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:25.243 [2024-07-11 02:28:15.462488] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:25.243 [2024-07-11 02:28:15.462501] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25f0140 name raid_bdev1, state offline 00:22:25.243 02:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 1966040 00:22:25.243 [2024-07-11 02:28:15.500836] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:25.503 02:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:22:25.503 00:22:25.503 real 0m16.987s 00:22:25.503 user 0m31.163s 00:22:25.503 sys 0m3.101s 00:22:25.503 02:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:25.503 02:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.503 ************************************ 00:22:25.503 END TEST raid_superblock_test 00:22:25.503 ************************************ 00:22:25.503 02:28:15 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:25.503 02:28:15 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:22:25.503 02:28:15 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:25.503 02:28:15 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:25.503 02:28:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:25.503 ************************************ 00:22:25.503 START TEST raid_read_error_test 00:22:25.503 ************************************ 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- 
# raid_io_error_test raid0 4 read 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.vpU6GysjRa 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1968822 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1968822 /var/tmp/spdk-raid.sock 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:25.503 02:28:15 
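The mktemp/bdevperf pair above is the I/O harness for the whole error test: a log file is allocated under /raidtest and bdevperf is started in waiting mode (-z) against the raid RPC socket, to be kicked later via bdevperf.py perform_tests. A sketch of the launch, assuming bdevperf's output is redirected into the log file (the grep of $bdevperf_log at the end of the test relies on that) and that waitforlisten is the autotest_common.sh helper used above:

  bdevperf_log=$(mktemp -p /raidtest)
  ./build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
      -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid > "$bdevperf_log" 2>&1 &
  raid_pid=$!
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # block until the RPC socket is up
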
bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 1968822 ']' 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:25.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:25.503 02:28:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.503 [2024-07-11 02:28:15.874187] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:22:25.503 [2024-07-11 02:28:15.874256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1968822 ] 00:22:25.762 [2024-07-11 02:28:16.013538] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.762 [2024-07-11 02:28:16.066862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.762 [2024-07-11 02:28:16.129436] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:25.762 [2024-07-11 02:28:16.129475] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:26.697 02:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:26.697 02:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:22:26.697 02:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:26.697 02:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:26.697 BaseBdev1_malloc 00:22:26.697 02:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:26.956 true 00:22:26.956 02:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:27.214 [2024-07-11 02:28:17.543897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:27.214 [2024-07-11 02:28:17.543945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:27.214 [2024-07-11 02:28:17.543964] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26f8330 00:22:27.214 [2024-07-11 02:28:17.543977] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:27.214 [2024-07-11 02:28:17.545766] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:27.214 [2024-07-11 02:28:17.545796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:27.214 BaseBdev1 00:22:27.214 02:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # 
for bdev in "${base_bdevs[@]}" 00:22:27.214 02:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:27.473 BaseBdev2_malloc 00:22:27.473 02:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:22:27.732 true 00:22:27.732 02:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:27.991 [2024-07-11 02:28:18.330493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:27.991 [2024-07-11 02:28:18.330537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:27.991 [2024-07-11 02:28:18.330558] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26f1b40 00:22:27.991 [2024-07-11 02:28:18.330571] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:27.991 [2024-07-11 02:28:18.332030] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:27.991 [2024-07-11 02:28:18.332062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:27.991 BaseBdev2 00:22:27.991 02:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:27.991 02:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:28.249 BaseBdev3_malloc 00:22:28.250 02:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:22:28.509 true 00:22:28.509 02:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:28.768 [2024-07-11 02:28:19.016778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:28.768 [2024-07-11 02:28:19.016823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:28.768 [2024-07-11 02:28:19.016843] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26f50f0 00:22:28.768 [2024-07-11 02:28:19.016856] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:28.768 [2024-07-11 02:28:19.018281] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:28.768 [2024-07-11 02:28:19.018309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:28.768 BaseBdev3 00:22:28.768 02:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:28.768 02:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:29.027 BaseBdev4_malloc 00:22:29.027 02:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:22:29.593 true 00:22:29.593 02:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:29.852 [2024-07-11 02:28:20.052104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:29.852 [2024-07-11 02:28:20.052153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:29.852 [2024-07-11 02:28:20.052178] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25464c0 00:22:29.852 [2024-07-11 02:28:20.052191] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:29.852 [2024-07-11 02:28:20.053784] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:29.852 [2024-07-11 02:28:20.053811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:29.852 BaseBdev4 00:22:29.852 02:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:22:30.111 [2024-07-11 02:28:20.300797] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:30.111 [2024-07-11 02:28:20.302114] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:30.111 [2024-07-11 02:28:20.302182] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:30.111 [2024-07-11 02:28:20.302241] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:30.111 [2024-07-11 02:28:20.302466] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x26ed890 00:22:30.111 [2024-07-11 02:28:20.302477] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:30.111 [2024-07-11 02:28:20.302680] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26ed860 00:22:30.111 [2024-07-11 02:28:20.302849] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x26ed890 00:22:30.111 [2024-07-11 02:28:20.302864] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x26ed890 00:22:30.111 [2024-07-11 02:28:20.302970] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:30.111 02:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:30.111 02:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:30.111 02:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:30.111 02:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:30.111 02:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:30.111 02:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:30.111 02:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:30.111 02:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
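Every base device the raid consumes here is a three-layer stack, built so that faults can later be injected underneath the name the raid sees: a malloc bdev at the bottom, an error bdev (EE_ prefix) wrapping it, and a passthru bdev on top. A condensed sketch of what the @812-815 loop and the @819 create call perform, with rpc.py standing in for the full script path used above:

  for i in 1 2 3 4; do
      # 32 MiB malloc bdev with 512-byte blocks
      rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
      # error bdev exposes EE_BaseBdev<i>_malloc for later fault injection
      rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev${i}_malloc
      # passthru bdev gives the raid a stable name above the error layer
      rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
  done
  # raid0 across the four passthru bdevs, 64 KiB strips, with superblock (-s)
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
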
00:22:30.111 02:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:30.111 02:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:30.111 02:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.111 02:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.370 02:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:30.370 "name": "raid_bdev1", 00:22:30.370 "uuid": "516bfc0c-85e5-42ce-8ee6-455d330867d8", 00:22:30.370 "strip_size_kb": 64, 00:22:30.370 "state": "online", 00:22:30.370 "raid_level": "raid0", 00:22:30.370 "superblock": true, 00:22:30.370 "num_base_bdevs": 4, 00:22:30.370 "num_base_bdevs_discovered": 4, 00:22:30.370 "num_base_bdevs_operational": 4, 00:22:30.370 "base_bdevs_list": [ 00:22:30.370 { 00:22:30.370 "name": "BaseBdev1", 00:22:30.370 "uuid": "ecd5570b-114e-5127-9a35-e61e112db928", 00:22:30.370 "is_configured": true, 00:22:30.370 "data_offset": 2048, 00:22:30.370 "data_size": 63488 00:22:30.370 }, 00:22:30.370 { 00:22:30.370 "name": "BaseBdev2", 00:22:30.370 "uuid": "a4866f34-f90f-55dc-975f-b9e4acbcb33a", 00:22:30.370 "is_configured": true, 00:22:30.370 "data_offset": 2048, 00:22:30.370 "data_size": 63488 00:22:30.370 }, 00:22:30.370 { 00:22:30.370 "name": "BaseBdev3", 00:22:30.370 "uuid": "c06b5097-abea-589d-b729-a3a5fc02d853", 00:22:30.370 "is_configured": true, 00:22:30.370 "data_offset": 2048, 00:22:30.370 "data_size": 63488 00:22:30.370 }, 00:22:30.370 { 00:22:30.370 "name": "BaseBdev4", 00:22:30.370 "uuid": "e869e99d-8b9e-5156-94e7-ba617743f237", 00:22:30.370 "is_configured": true, 00:22:30.370 "data_offset": 2048, 00:22:30.370 "data_size": 63488 00:22:30.370 } 00:22:30.370 ] 00:22:30.370 }' 00:22:30.370 02:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:30.370 02:28:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.938 02:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:22:30.938 02:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:30.938 [2024-07-11 02:28:21.287662] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2540df0 00:22:31.875 02:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:22:32.134 02:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:22:32.134 02:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:22:32.134 02:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:22:32.134 02:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:32.134 02:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:32.134 02:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:32.135 02:28:22 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:32.135 02:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:32.135 02:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:32.135 02:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:32.135 02:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:32.135 02:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:32.135 02:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:32.135 02:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.135 02:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.394 02:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:32.394 "name": "raid_bdev1", 00:22:32.394 "uuid": "516bfc0c-85e5-42ce-8ee6-455d330867d8", 00:22:32.394 "strip_size_kb": 64, 00:22:32.394 "state": "online", 00:22:32.394 "raid_level": "raid0", 00:22:32.394 "superblock": true, 00:22:32.394 "num_base_bdevs": 4, 00:22:32.394 "num_base_bdevs_discovered": 4, 00:22:32.394 "num_base_bdevs_operational": 4, 00:22:32.394 "base_bdevs_list": [ 00:22:32.394 { 00:22:32.394 "name": "BaseBdev1", 00:22:32.394 "uuid": "ecd5570b-114e-5127-9a35-e61e112db928", 00:22:32.394 "is_configured": true, 00:22:32.394 "data_offset": 2048, 00:22:32.394 "data_size": 63488 00:22:32.394 }, 00:22:32.394 { 00:22:32.394 "name": "BaseBdev2", 00:22:32.394 "uuid": "a4866f34-f90f-55dc-975f-b9e4acbcb33a", 00:22:32.394 "is_configured": true, 00:22:32.394 "data_offset": 2048, 00:22:32.394 "data_size": 63488 00:22:32.394 }, 00:22:32.394 { 00:22:32.394 "name": "BaseBdev3", 00:22:32.394 "uuid": "c06b5097-abea-589d-b729-a3a5fc02d853", 00:22:32.394 "is_configured": true, 00:22:32.394 "data_offset": 2048, 00:22:32.394 "data_size": 63488 00:22:32.394 }, 00:22:32.394 { 00:22:32.394 "name": "BaseBdev4", 00:22:32.394 "uuid": "e869e99d-8b9e-5156-94e7-ba617743f237", 00:22:32.394 "is_configured": true, 00:22:32.394 "data_offset": 2048, 00:22:32.394 "data_size": 63488 00:22:32.394 } 00:22:32.394 ] 00:22:32.394 }' 00:22:32.394 02:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:32.394 02:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.962 02:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:33.222 [2024-07-11 02:28:23.537161] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:33.222 [2024-07-11 02:28:23.537192] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:33.222 [2024-07-11 02:28:23.540359] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:33.222 [2024-07-11 02:28:23.540407] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:33.222 [2024-07-11 02:28:23.540447] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:33.222 [2024-07-11 02:28:23.540458] bdev_raid.c: 366:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x26ed890 name raid_bdev1, state offline 00:22:33.222 0 00:22:33.222 02:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1968822 00:22:33.222 02:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 1968822 ']' 00:22:33.222 02:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 1968822 00:22:33.222 02:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:22:33.222 02:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:33.222 02:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1968822 00:22:33.222 02:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:33.222 02:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:33.222 02:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1968822' 00:22:33.222 killing process with pid 1968822 00:22:33.222 02:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 1968822 00:22:33.222 [2024-07-11 02:28:23.619686] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:33.222 02:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 1968822 00:22:33.481 [2024-07-11 02:28:23.650242] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:33.481 02:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.vpU6GysjRa 00:22:33.481 02:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:22:33.481 02:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:22:33.481 02:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:22:33.481 02:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:22:33.481 02:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:33.481 02:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:33.481 02:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:22:33.481 00:22:33.481 real 0m8.069s 00:22:33.481 user 0m13.031s 00:22:33.481 sys 0m1.417s 00:22:33.481 02:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:33.481 02:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.481 ************************************ 00:22:33.481 END TEST raid_read_error_test 00:22:33.481 ************************************ 00:22:33.740 02:28:23 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:33.740 02:28:23 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:22:33.740 02:28:23 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:33.740 02:28:23 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:33.740 02:28:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:33.740 ************************************ 00:22:33.740 START TEST raid_write_error_test 00:22:33.740 ************************************ 00:22:33.740 02:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 write 
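Both error tests close the same way: a single failure is armed on the error bdev beneath BaseBdev1, bdevperf drives I/O for the test window, and the per-bdev failure rate is then parsed out of the bdevperf log. raid0 carries no redundancy (has_redundancy returns 1 for it), so the injected fault must surface as a non-zero failure rate on raid_bdev1; that is the 0.45 asserted above. A sketch of the read-side check, assuming the fail count sits in column 6 of bdevperf's per-bdev stats line as the awk above implies:

  rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure
  # ... bdevperf.py perform_tests runs the 60 s randrw workload here ...
  fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
  [[ $fail_per_s != 0.00 ]]   # raid0 cannot mask the fault, so failures must appear
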
00:22:33.740 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:22:33.740 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:22:33.740 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:22:33.740 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:22:33.740 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:33.740 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:22:33.740 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:33.740 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:33.740 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:22:33.740 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:33.740 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:33.740 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:22:33.740 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:33.740 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:33.740 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:22:33.740 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:33.740 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:33.741 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:33.741 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:22:33.741 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:22:33.741 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:22:33.741 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:22:33.741 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:22:33.741 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:22:33.741 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:22:33.741 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:22:33.741 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:22:33.741 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:22:33.741 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.6riar2WPXk 00:22:33.741 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1971431 00:22:33.741 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1971431 /var/tmp/spdk-raid.sock 00:22:33.741 02:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:33.741 02:28:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 1971431 ']' 00:22:33.741 02:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:33.741 02:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:33.741 02:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:33.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:33.741 02:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:33.741 02:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.741 [2024-07-11 02:28:24.028001] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:22:33.741 [2024-07-11 02:28:24.028070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1971431 ] 00:22:34.000 [2024-07-11 02:28:24.167493] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.000 [2024-07-11 02:28:24.220805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.000 [2024-07-11 02:28:24.284732] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:34.000 [2024-07-11 02:28:24.284754] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:34.567 02:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:34.567 02:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:22:34.567 02:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:34.567 02:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:34.826 BaseBdev1_malloc 00:22:34.826 02:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:35.086 true 00:22:35.086 02:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:35.345 [2024-07-11 02:28:25.608060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:35.345 [2024-07-11 02:28:25.608101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:35.345 [2024-07-11 02:28:25.608121] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f48330 00:22:35.345 [2024-07-11 02:28:25.608133] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:35.345 [2024-07-11 02:28:25.609926] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:35.345 [2024-07-11 02:28:25.609953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:35.345 BaseBdev1 00:22:35.345 02:28:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:35.345 02:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:35.913 BaseBdev2_malloc 00:22:35.913 02:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:22:36.481 true 00:22:36.481 02:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:36.740 [2024-07-11 02:28:27.161859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:36.740 [2024-07-11 02:28:27.161904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:36.740 [2024-07-11 02:28:27.161924] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f41b40 00:22:36.740 [2024-07-11 02:28:27.161936] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:36.740 [2024-07-11 02:28:27.163503] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:36.740 [2024-07-11 02:28:27.163529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:37.000 BaseBdev2 00:22:37.000 02:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:37.000 02:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:37.568 BaseBdev3_malloc 00:22:37.568 02:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:22:37.828 true 00:22:37.828 02:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:38.086 [2024-07-11 02:28:28.390764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:38.087 [2024-07-11 02:28:28.390807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:38.087 [2024-07-11 02:28:28.390827] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f450f0 00:22:38.087 [2024-07-11 02:28:28.390840] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:38.087 [2024-07-11 02:28:28.392257] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:38.087 [2024-07-11 02:28:28.392284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:38.087 BaseBdev3 00:22:38.087 02:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:38.087 02:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:38.346 BaseBdev4_malloc 00:22:38.346 02:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:22:38.916 true 00:22:38.916 02:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:39.483 [2024-07-11 02:28:29.694782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:39.484 [2024-07-11 02:28:29.694826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:39.484 [2024-07-11 02:28:29.694850] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d964c0 00:22:39.484 [2024-07-11 02:28:29.694863] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:39.484 [2024-07-11 02:28:29.696415] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:39.484 [2024-07-11 02:28:29.696455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:39.484 BaseBdev4 00:22:39.484 02:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:22:40.052 [2024-07-11 02:28:30.212166] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:40.052 [2024-07-11 02:28:30.213537] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:40.052 [2024-07-11 02:28:30.213604] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:40.052 [2024-07-11 02:28:30.213663] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:40.052 [2024-07-11 02:28:30.213893] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f3d890 00:22:40.052 [2024-07-11 02:28:30.213904] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:40.052 [2024-07-11 02:28:30.214100] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f3d860 00:22:40.052 [2024-07-11 02:28:30.214257] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f3d890 00:22:40.052 [2024-07-11 02:28:30.214267] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f3d890 00:22:40.052 [2024-07-11 02:28:30.214371] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:40.052 02:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:40.052 02:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:40.052 02:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:40.052 02:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:40.052 02:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:40.052 02:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:40.052 02:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:40.052 02:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:22:40.052 02:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:40.052 02:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:40.052 02:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.052 02:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.312 02:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:40.312 "name": "raid_bdev1", 00:22:40.312 "uuid": "10645851-9004-4349-bcc0-790f4e6ae20a", 00:22:40.312 "strip_size_kb": 64, 00:22:40.312 "state": "online", 00:22:40.312 "raid_level": "raid0", 00:22:40.312 "superblock": true, 00:22:40.312 "num_base_bdevs": 4, 00:22:40.312 "num_base_bdevs_discovered": 4, 00:22:40.312 "num_base_bdevs_operational": 4, 00:22:40.312 "base_bdevs_list": [ 00:22:40.312 { 00:22:40.312 "name": "BaseBdev1", 00:22:40.312 "uuid": "a35ed8bf-a86f-5ca5-a133-392564ca0d34", 00:22:40.312 "is_configured": true, 00:22:40.312 "data_offset": 2048, 00:22:40.312 "data_size": 63488 00:22:40.312 }, 00:22:40.312 { 00:22:40.312 "name": "BaseBdev2", 00:22:40.312 "uuid": "8c56ec6e-bb71-5324-8676-af5ed35a17db", 00:22:40.312 "is_configured": true, 00:22:40.312 "data_offset": 2048, 00:22:40.312 "data_size": 63488 00:22:40.312 }, 00:22:40.312 { 00:22:40.312 "name": "BaseBdev3", 00:22:40.312 "uuid": "da0da666-16a2-5b12-a964-02054f3af6af", 00:22:40.312 "is_configured": true, 00:22:40.312 "data_offset": 2048, 00:22:40.312 "data_size": 63488 00:22:40.312 }, 00:22:40.312 { 00:22:40.312 "name": "BaseBdev4", 00:22:40.312 "uuid": "b840101a-28f4-5878-acab-cfa79f5291f8", 00:22:40.312 "is_configured": true, 00:22:40.312 "data_offset": 2048, 00:22:40.312 "data_size": 63488 00:22:40.312 } 00:22:40.312 ] 00:22:40.312 }' 00:22:40.312 02:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:40.312 02:28:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.880 02:28:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:22:40.880 02:28:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:40.880 [2024-07-11 02:28:31.219080] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d90df0 00:22:41.817 02:28:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:22:42.076 02:28:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:22:42.076 02:28:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:22:42.076 02:28:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:22:42.076 02:28:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:42.076 02:28:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:42.076 02:28:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:42.076 02:28:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:42.076 02:28:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:42.076 02:28:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:42.076 02:28:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:42.076 02:28:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:42.076 02:28:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:42.076 02:28:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:42.076 02:28:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.076 02:28:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.335 02:28:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:42.335 "name": "raid_bdev1", 00:22:42.335 "uuid": "10645851-9004-4349-bcc0-790f4e6ae20a", 00:22:42.335 "strip_size_kb": 64, 00:22:42.335 "state": "online", 00:22:42.335 "raid_level": "raid0", 00:22:42.335 "superblock": true, 00:22:42.335 "num_base_bdevs": 4, 00:22:42.335 "num_base_bdevs_discovered": 4, 00:22:42.335 "num_base_bdevs_operational": 4, 00:22:42.335 "base_bdevs_list": [ 00:22:42.335 { 00:22:42.335 "name": "BaseBdev1", 00:22:42.335 "uuid": "a35ed8bf-a86f-5ca5-a133-392564ca0d34", 00:22:42.335 "is_configured": true, 00:22:42.335 "data_offset": 2048, 00:22:42.335 "data_size": 63488 00:22:42.335 }, 00:22:42.335 { 00:22:42.335 "name": "BaseBdev2", 00:22:42.335 "uuid": "8c56ec6e-bb71-5324-8676-af5ed35a17db", 00:22:42.335 "is_configured": true, 00:22:42.335 "data_offset": 2048, 00:22:42.335 "data_size": 63488 00:22:42.335 }, 00:22:42.335 { 00:22:42.335 "name": "BaseBdev3", 00:22:42.335 "uuid": "da0da666-16a2-5b12-a964-02054f3af6af", 00:22:42.335 "is_configured": true, 00:22:42.335 "data_offset": 2048, 00:22:42.335 "data_size": 63488 00:22:42.335 }, 00:22:42.335 { 00:22:42.335 "name": "BaseBdev4", 00:22:42.335 "uuid": "b840101a-28f4-5878-acab-cfa79f5291f8", 00:22:42.335 "is_configured": true, 00:22:42.335 "data_offset": 2048, 00:22:42.335 "data_size": 63488 00:22:42.335 } 00:22:42.335 ] 00:22:42.335 }' 00:22:42.335 02:28:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:42.335 02:28:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.903 02:28:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:43.162 [2024-07-11 02:28:33.520708] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:43.162 [2024-07-11 02:28:33.520743] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:43.162 [2024-07-11 02:28:33.524103] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:43.162 [2024-07-11 02:28:33.524149] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:43.162 [2024-07-11 02:28:33.524188] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:43.162 [2024-07-11 
02:28:33.524204] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f3d890 name raid_bdev1, state offline 00:22:43.162 0 00:22:43.162 02:28:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1971431 00:22:43.162 02:28:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 1971431 ']' 00:22:43.162 02:28:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 1971431 00:22:43.162 02:28:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:22:43.162 02:28:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:43.162 02:28:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1971431 00:22:43.420 02:28:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:43.420 02:28:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:43.420 02:28:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1971431' 00:22:43.420 killing process with pid 1971431 00:22:43.420 02:28:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 1971431 00:22:43.420 [2024-07-11 02:28:33.602861] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:43.420 02:28:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 1971431 00:22:43.420 [2024-07-11 02:28:33.638149] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:43.679 02:28:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.6riar2WPXk 00:22:43.679 02:28:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:22:43.679 02:28:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:22:43.679 02:28:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.44 00:22:43.679 02:28:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:22:43.679 02:28:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:43.679 02:28:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:43.679 02:28:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.44 != \0\.\0\0 ]] 00:22:43.679 00:22:43.679 real 0m9.911s 00:22:43.679 user 0m16.372s 00:22:43.679 sys 0m1.698s 00:22:43.679 02:28:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:43.679 02:28:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.679 ************************************ 00:22:43.679 END TEST raid_write_error_test 00:22:43.679 ************************************ 00:22:43.679 02:28:33 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:43.679 02:28:33 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:22:43.679 02:28:33 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:22:43.679 02:28:33 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:43.679 02:28:33 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:43.679 02:28:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:43.679 ************************************ 00:22:43.679 START TEST raid_state_function_test 
00:22:43.679 ************************************ 00:22:43.679 02:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 false 00:22:43.679 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:22:43.679 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:22:43.679 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:22:43.679 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:43.679 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:43.679 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:43.679 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1973218 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 
1973218' 00:22:43.680 Process raid pid: 1973218 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1973218 /var/tmp/spdk-raid.sock 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 1973218 ']' 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:43.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:43.680 02:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.680 [2024-07-11 02:28:34.020225] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:22:43.680 [2024-07-11 02:28:34.020301] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.939 [2024-07-11 02:28:34.160229] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.939 [2024-07-11 02:28:34.215396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.939 [2024-07-11 02:28:34.274437] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:43.939 [2024-07-11 02:28:34.274463] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:44.507 02:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:44.507 02:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:22:44.507 02:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:44.766 [2024-07-11 02:28:35.109968] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:44.766 [2024-07-11 02:28:35.110010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:44.766 [2024-07-11 02:28:35.110021] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:44.766 [2024-07-11 02:28:35.110033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:44.766 [2024-07-11 02:28:35.110042] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:44.766 [2024-07-11 02:28:35.110053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:44.766 [2024-07-11 02:28:35.110061] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:44.766 [2024-07-11 02:28:35.110072] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:44.766 02:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:44.766 02:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:44.766 02:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:44.766 02:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:44.766 02:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:44.766 02:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:44.766 02:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:44.766 02:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:44.766 02:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:44.766 02:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:44.766 02:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:44.766 02:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.026 02:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:45.026 "name": "Existed_Raid", 00:22:45.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.026 "strip_size_kb": 64, 00:22:45.026 "state": "configuring", 00:22:45.026 "raid_level": "concat", 00:22:45.026 "superblock": false, 00:22:45.026 "num_base_bdevs": 4, 00:22:45.026 "num_base_bdevs_discovered": 0, 00:22:45.026 "num_base_bdevs_operational": 4, 00:22:45.026 "base_bdevs_list": [ 00:22:45.026 { 00:22:45.026 "name": "BaseBdev1", 00:22:45.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.026 "is_configured": false, 00:22:45.026 "data_offset": 0, 00:22:45.026 "data_size": 0 00:22:45.026 }, 00:22:45.026 { 00:22:45.026 "name": "BaseBdev2", 00:22:45.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.026 "is_configured": false, 00:22:45.026 "data_offset": 0, 00:22:45.026 "data_size": 0 00:22:45.026 }, 00:22:45.026 { 00:22:45.026 "name": "BaseBdev3", 00:22:45.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.026 "is_configured": false, 00:22:45.026 "data_offset": 0, 00:22:45.026 "data_size": 0 00:22:45.026 }, 00:22:45.026 { 00:22:45.026 "name": "BaseBdev4", 00:22:45.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.026 "is_configured": false, 00:22:45.026 "data_offset": 0, 00:22:45.026 "data_size": 0 00:22:45.026 } 00:22:45.026 ] 00:22:45.026 }' 00:22:45.026 02:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:45.026 02:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.963 02:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:45.963 [2024-07-11 02:28:36.280937] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:45.963 
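(The trace above exercises the first state transition this test cares about: Existed_Raid was created while none of its four base bdevs existed, the raid bdev therefore reported state "configuring" with num_base_bdevs_discovered 0, and it is now being deleted before the test recreates it and adds the base bdevs one at a time. A minimal sketch of that same RPC flow, using only command forms visible in this trace and assuming a bdev_svc app is already listening on /var/tmp/spdk-raid.sock; the trailing .state jq selector is an illustrative addition, not taken from the trace:

  # create a concat raid (64k strip) over base bdevs that do not exist yet
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # the raid bdev should stay in "configuring" until every base bdev appears
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid").state'
  # tear the half-configured raid down again
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid

Each subsequent bdev_malloc_create in the log registers one base bdev, and verify_raid_bdev_state re-reads this same JSON until num_base_bdevs_discovered reaches 4 and the state flips to "online".)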
[2024-07-11 02:28:36.280971] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e2e5a0 name Existed_Raid, state configuring 00:22:45.963 02:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:46.222 [2024-07-11 02:28:36.521585] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:46.222 [2024-07-11 02:28:36.521614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:46.222 [2024-07-11 02:28:36.521623] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:46.222 [2024-07-11 02:28:36.521635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:46.222 [2024-07-11 02:28:36.521643] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:46.222 [2024-07-11 02:28:36.521654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:46.222 [2024-07-11 02:28:36.521663] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:46.222 [2024-07-11 02:28:36.521674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:46.222 02:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:46.481 [2024-07-11 02:28:36.776587] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:46.481 BaseBdev1 00:22:46.481 02:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:46.481 02:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:46.481 02:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:46.481 02:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:46.481 02:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:46.481 02:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:46.481 02:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:46.740 02:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:46.999 [ 00:22:46.999 { 00:22:46.999 "name": "BaseBdev1", 00:22:46.999 "aliases": [ 00:22:46.999 "de133773-52a9-41af-b669-3d5c98c48304" 00:22:46.999 ], 00:22:46.999 "product_name": "Malloc disk", 00:22:46.999 "block_size": 512, 00:22:46.999 "num_blocks": 65536, 00:22:46.999 "uuid": "de133773-52a9-41af-b669-3d5c98c48304", 00:22:46.999 "assigned_rate_limits": { 00:22:46.999 "rw_ios_per_sec": 0, 00:22:46.999 "rw_mbytes_per_sec": 0, 00:22:46.999 "r_mbytes_per_sec": 0, 00:22:46.999 "w_mbytes_per_sec": 0 00:22:46.999 }, 00:22:46.999 "claimed": true, 00:22:46.999 "claim_type": "exclusive_write", 00:22:46.999 "zoned": false, 
00:22:46.999 "supported_io_types": { 00:22:46.999 "read": true, 00:22:46.999 "write": true, 00:22:46.999 "unmap": true, 00:22:46.999 "flush": true, 00:22:46.999 "reset": true, 00:22:46.999 "nvme_admin": false, 00:22:46.999 "nvme_io": false, 00:22:46.999 "nvme_io_md": false, 00:22:46.999 "write_zeroes": true, 00:22:46.999 "zcopy": true, 00:22:46.999 "get_zone_info": false, 00:22:46.999 "zone_management": false, 00:22:46.999 "zone_append": false, 00:22:46.999 "compare": false, 00:22:46.999 "compare_and_write": false, 00:22:46.999 "abort": true, 00:22:46.999 "seek_hole": false, 00:22:46.999 "seek_data": false, 00:22:46.999 "copy": true, 00:22:46.999 "nvme_iov_md": false 00:22:46.999 }, 00:22:46.999 "memory_domains": [ 00:22:46.999 { 00:22:46.999 "dma_device_id": "system", 00:22:46.999 "dma_device_type": 1 00:22:46.999 }, 00:22:46.999 { 00:22:46.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:46.999 "dma_device_type": 2 00:22:46.999 } 00:22:46.999 ], 00:22:46.999 "driver_specific": {} 00:22:46.999 } 00:22:46.999 ] 00:22:46.999 02:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:46.999 02:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:46.999 02:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:46.999 02:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:46.999 02:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:46.999 02:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:46.999 02:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:46.999 02:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:46.999 02:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:46.999 02:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:46.999 02:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:46.999 02:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.999 02:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:47.258 02:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:47.258 "name": "Existed_Raid", 00:22:47.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.258 "strip_size_kb": 64, 00:22:47.258 "state": "configuring", 00:22:47.258 "raid_level": "concat", 00:22:47.258 "superblock": false, 00:22:47.258 "num_base_bdevs": 4, 00:22:47.258 "num_base_bdevs_discovered": 1, 00:22:47.258 "num_base_bdevs_operational": 4, 00:22:47.258 "base_bdevs_list": [ 00:22:47.258 { 00:22:47.258 "name": "BaseBdev1", 00:22:47.258 "uuid": "de133773-52a9-41af-b669-3d5c98c48304", 00:22:47.258 "is_configured": true, 00:22:47.258 "data_offset": 0, 00:22:47.258 "data_size": 65536 00:22:47.258 }, 00:22:47.258 { 00:22:47.258 "name": "BaseBdev2", 00:22:47.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.258 "is_configured": false, 00:22:47.258 "data_offset": 0, 00:22:47.258 
"data_size": 0 00:22:47.258 }, 00:22:47.258 { 00:22:47.258 "name": "BaseBdev3", 00:22:47.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.258 "is_configured": false, 00:22:47.258 "data_offset": 0, 00:22:47.258 "data_size": 0 00:22:47.258 }, 00:22:47.258 { 00:22:47.258 "name": "BaseBdev4", 00:22:47.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.258 "is_configured": false, 00:22:47.258 "data_offset": 0, 00:22:47.258 "data_size": 0 00:22:47.258 } 00:22:47.258 ] 00:22:47.258 }' 00:22:47.258 02:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:47.258 02:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.825 02:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:48.083 [2024-07-11 02:28:38.256542] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:48.083 [2024-07-11 02:28:38.256582] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e2ded0 name Existed_Raid, state configuring 00:22:48.083 02:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:48.342 [2024-07-11 02:28:38.517263] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:48.342 [2024-07-11 02:28:38.518681] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:48.342 [2024-07-11 02:28:38.518714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:48.342 [2024-07-11 02:28:38.518725] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:48.342 [2024-07-11 02:28:38.518736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:48.342 [2024-07-11 02:28:38.518745] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:48.342 [2024-07-11 02:28:38.518769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:48.342 02:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:48.342 02:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:48.342 02:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:48.342 02:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:48.342 02:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:48.342 02:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:48.342 02:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:48.342 02:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:48.342 02:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:48.342 02:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:48.342 02:28:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:48.342 02:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:48.342 02:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.342 02:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:48.638 02:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:48.638 "name": "Existed_Raid", 00:22:48.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.638 "strip_size_kb": 64, 00:22:48.638 "state": "configuring", 00:22:48.638 "raid_level": "concat", 00:22:48.638 "superblock": false, 00:22:48.638 "num_base_bdevs": 4, 00:22:48.638 "num_base_bdevs_discovered": 1, 00:22:48.638 "num_base_bdevs_operational": 4, 00:22:48.638 "base_bdevs_list": [ 00:22:48.638 { 00:22:48.638 "name": "BaseBdev1", 00:22:48.638 "uuid": "de133773-52a9-41af-b669-3d5c98c48304", 00:22:48.638 "is_configured": true, 00:22:48.638 "data_offset": 0, 00:22:48.638 "data_size": 65536 00:22:48.638 }, 00:22:48.638 { 00:22:48.638 "name": "BaseBdev2", 00:22:48.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.638 "is_configured": false, 00:22:48.638 "data_offset": 0, 00:22:48.638 "data_size": 0 00:22:48.638 }, 00:22:48.638 { 00:22:48.638 "name": "BaseBdev3", 00:22:48.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.638 "is_configured": false, 00:22:48.638 "data_offset": 0, 00:22:48.638 "data_size": 0 00:22:48.638 }, 00:22:48.638 { 00:22:48.638 "name": "BaseBdev4", 00:22:48.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.638 "is_configured": false, 00:22:48.638 "data_offset": 0, 00:22:48.638 "data_size": 0 00:22:48.638 } 00:22:48.638 ] 00:22:48.638 }' 00:22:48.638 02:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:48.638 02:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:49.223 02:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:49.482 [2024-07-11 02:28:39.647547] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:49.482 BaseBdev2 00:22:49.482 02:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:49.482 02:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:49.482 02:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:49.482 02:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:49.482 02:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:49.482 02:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:49.482 02:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:49.741 02:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:49.999 [ 00:22:49.999 { 00:22:49.999 "name": "BaseBdev2", 00:22:49.999 "aliases": [ 00:22:49.999 "3f35df7b-fa4b-44e3-8e78-6920e8ec93c8" 00:22:49.999 ], 00:22:49.999 "product_name": "Malloc disk", 00:22:49.999 "block_size": 512, 00:22:49.999 "num_blocks": 65536, 00:22:49.999 "uuid": "3f35df7b-fa4b-44e3-8e78-6920e8ec93c8", 00:22:49.999 "assigned_rate_limits": { 00:22:49.999 "rw_ios_per_sec": 0, 00:22:49.999 "rw_mbytes_per_sec": 0, 00:22:49.999 "r_mbytes_per_sec": 0, 00:22:49.999 "w_mbytes_per_sec": 0 00:22:49.999 }, 00:22:49.999 "claimed": true, 00:22:49.999 "claim_type": "exclusive_write", 00:22:49.999 "zoned": false, 00:22:49.999 "supported_io_types": { 00:22:49.999 "read": true, 00:22:49.999 "write": true, 00:22:49.999 "unmap": true, 00:22:49.999 "flush": true, 00:22:49.999 "reset": true, 00:22:49.999 "nvme_admin": false, 00:22:49.999 "nvme_io": false, 00:22:49.999 "nvme_io_md": false, 00:22:49.999 "write_zeroes": true, 00:22:49.999 "zcopy": true, 00:22:49.999 "get_zone_info": false, 00:22:49.999 "zone_management": false, 00:22:49.999 "zone_append": false, 00:22:49.999 "compare": false, 00:22:49.999 "compare_and_write": false, 00:22:49.999 "abort": true, 00:22:49.999 "seek_hole": false, 00:22:49.999 "seek_data": false, 00:22:49.999 "copy": true, 00:22:49.999 "nvme_iov_md": false 00:22:49.999 }, 00:22:49.999 "memory_domains": [ 00:22:49.999 { 00:22:49.999 "dma_device_id": "system", 00:22:49.999 "dma_device_type": 1 00:22:49.999 }, 00:22:49.999 { 00:22:49.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:50.000 "dma_device_type": 2 00:22:50.000 } 00:22:50.000 ], 00:22:50.000 "driver_specific": {} 00:22:50.000 } 00:22:50.000 ] 00:22:50.000 02:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:50.000 02:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:50.000 02:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:50.000 02:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:50.000 02:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:50.000 02:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:50.000 02:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:50.000 02:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:50.000 02:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:50.000 02:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:50.000 02:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:50.000 02:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:50.000 02:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:50.000 02:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.000 02:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:50.258 02:28:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:50.258 "name": "Existed_Raid", 00:22:50.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.258 "strip_size_kb": 64, 00:22:50.258 "state": "configuring", 00:22:50.258 "raid_level": "concat", 00:22:50.258 "superblock": false, 00:22:50.258 "num_base_bdevs": 4, 00:22:50.258 "num_base_bdevs_discovered": 2, 00:22:50.258 "num_base_bdevs_operational": 4, 00:22:50.258 "base_bdevs_list": [ 00:22:50.258 { 00:22:50.258 "name": "BaseBdev1", 00:22:50.258 "uuid": "de133773-52a9-41af-b669-3d5c98c48304", 00:22:50.258 "is_configured": true, 00:22:50.258 "data_offset": 0, 00:22:50.258 "data_size": 65536 00:22:50.258 }, 00:22:50.258 { 00:22:50.258 "name": "BaseBdev2", 00:22:50.258 "uuid": "3f35df7b-fa4b-44e3-8e78-6920e8ec93c8", 00:22:50.258 "is_configured": true, 00:22:50.258 "data_offset": 0, 00:22:50.258 "data_size": 65536 00:22:50.258 }, 00:22:50.258 { 00:22:50.258 "name": "BaseBdev3", 00:22:50.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.258 "is_configured": false, 00:22:50.258 "data_offset": 0, 00:22:50.258 "data_size": 0 00:22:50.258 }, 00:22:50.258 { 00:22:50.258 "name": "BaseBdev4", 00:22:50.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.258 "is_configured": false, 00:22:50.258 "data_offset": 0, 00:22:50.258 "data_size": 0 00:22:50.258 } 00:22:50.258 ] 00:22:50.258 }' 00:22:50.258 02:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:50.258 02:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:50.826 02:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:51.085 [2024-07-11 02:28:41.295373] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:51.085 BaseBdev3 00:22:51.085 02:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:22:51.085 02:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:51.085 02:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:51.085 02:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:51.085 02:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:51.085 02:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:51.085 02:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:51.344 02:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:51.629 [ 00:22:51.629 { 00:22:51.629 "name": "BaseBdev3", 00:22:51.629 "aliases": [ 00:22:51.629 "357e19f5-25d6-425d-a434-66e259c14fcb" 00:22:51.629 ], 00:22:51.629 "product_name": "Malloc disk", 00:22:51.629 "block_size": 512, 00:22:51.629 "num_blocks": 65536, 00:22:51.630 "uuid": "357e19f5-25d6-425d-a434-66e259c14fcb", 00:22:51.630 "assigned_rate_limits": { 00:22:51.630 "rw_ios_per_sec": 0, 00:22:51.630 "rw_mbytes_per_sec": 0, 00:22:51.630 "r_mbytes_per_sec": 0, 
00:22:51.630 "w_mbytes_per_sec": 0 00:22:51.630 }, 00:22:51.630 "claimed": true, 00:22:51.630 "claim_type": "exclusive_write", 00:22:51.630 "zoned": false, 00:22:51.630 "supported_io_types": { 00:22:51.630 "read": true, 00:22:51.630 "write": true, 00:22:51.630 "unmap": true, 00:22:51.630 "flush": true, 00:22:51.630 "reset": true, 00:22:51.630 "nvme_admin": false, 00:22:51.630 "nvme_io": false, 00:22:51.630 "nvme_io_md": false, 00:22:51.630 "write_zeroes": true, 00:22:51.630 "zcopy": true, 00:22:51.630 "get_zone_info": false, 00:22:51.630 "zone_management": false, 00:22:51.630 "zone_append": false, 00:22:51.630 "compare": false, 00:22:51.630 "compare_and_write": false, 00:22:51.630 "abort": true, 00:22:51.630 "seek_hole": false, 00:22:51.630 "seek_data": false, 00:22:51.630 "copy": true, 00:22:51.630 "nvme_iov_md": false 00:22:51.630 }, 00:22:51.630 "memory_domains": [ 00:22:51.630 { 00:22:51.630 "dma_device_id": "system", 00:22:51.630 "dma_device_type": 1 00:22:51.630 }, 00:22:51.630 { 00:22:51.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:51.630 "dma_device_type": 2 00:22:51.630 } 00:22:51.630 ], 00:22:51.630 "driver_specific": {} 00:22:51.630 } 00:22:51.630 ] 00:22:51.630 02:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:51.630 02:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:51.630 02:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:51.630 02:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:51.630 02:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:51.630 02:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:51.630 02:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:51.630 02:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:51.630 02:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:51.630 02:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:51.630 02:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:51.630 02:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:51.630 02:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:51.630 02:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.630 02:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:51.888 02:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:51.888 "name": "Existed_Raid", 00:22:51.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.888 "strip_size_kb": 64, 00:22:51.888 "state": "configuring", 00:22:51.888 "raid_level": "concat", 00:22:51.888 "superblock": false, 00:22:51.888 "num_base_bdevs": 4, 00:22:51.888 "num_base_bdevs_discovered": 3, 00:22:51.888 "num_base_bdevs_operational": 4, 00:22:51.888 "base_bdevs_list": [ 00:22:51.888 { 00:22:51.888 "name": "BaseBdev1", 
00:22:51.888 "uuid": "de133773-52a9-41af-b669-3d5c98c48304", 00:22:51.888 "is_configured": true, 00:22:51.888 "data_offset": 0, 00:22:51.888 "data_size": 65536 00:22:51.888 }, 00:22:51.888 { 00:22:51.888 "name": "BaseBdev2", 00:22:51.888 "uuid": "3f35df7b-fa4b-44e3-8e78-6920e8ec93c8", 00:22:51.888 "is_configured": true, 00:22:51.888 "data_offset": 0, 00:22:51.889 "data_size": 65536 00:22:51.889 }, 00:22:51.889 { 00:22:51.889 "name": "BaseBdev3", 00:22:51.889 "uuid": "357e19f5-25d6-425d-a434-66e259c14fcb", 00:22:51.889 "is_configured": true, 00:22:51.889 "data_offset": 0, 00:22:51.889 "data_size": 65536 00:22:51.889 }, 00:22:51.889 { 00:22:51.889 "name": "BaseBdev4", 00:22:51.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.889 "is_configured": false, 00:22:51.889 "data_offset": 0, 00:22:51.889 "data_size": 0 00:22:51.889 } 00:22:51.889 ] 00:22:51.889 }' 00:22:51.889 02:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:51.889 02:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:52.457 02:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:52.457 [2024-07-11 02:28:42.738529] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:52.457 [2024-07-11 02:28:42.738564] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1fe0d70 00:22:52.457 [2024-07-11 02:28:42.738573] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:22:52.457 [2024-07-11 02:28:42.738846] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e362a0 00:22:52.457 [2024-07-11 02:28:42.738972] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1fe0d70 00:22:52.457 [2024-07-11 02:28:42.738982] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1fe0d70 00:22:52.457 [2024-07-11 02:28:42.739145] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:52.457 BaseBdev4 00:22:52.457 02:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:22:52.457 02:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:22:52.457 02:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:52.457 02:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:52.457 02:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:52.457 02:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:52.457 02:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:52.715 02:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:52.974 [ 00:22:52.974 { 00:22:52.974 "name": "BaseBdev4", 00:22:52.974 "aliases": [ 00:22:52.974 "2ec11f8a-e6cd-4315-883b-a760e7c1da78" 00:22:52.974 ], 00:22:52.974 "product_name": "Malloc disk", 00:22:52.974 "block_size": 512, 00:22:52.974 
"num_blocks": 65536, 00:22:52.974 "uuid": "2ec11f8a-e6cd-4315-883b-a760e7c1da78", 00:22:52.974 "assigned_rate_limits": { 00:22:52.974 "rw_ios_per_sec": 0, 00:22:52.974 "rw_mbytes_per_sec": 0, 00:22:52.974 "r_mbytes_per_sec": 0, 00:22:52.974 "w_mbytes_per_sec": 0 00:22:52.974 }, 00:22:52.974 "claimed": true, 00:22:52.974 "claim_type": "exclusive_write", 00:22:52.974 "zoned": false, 00:22:52.974 "supported_io_types": { 00:22:52.974 "read": true, 00:22:52.974 "write": true, 00:22:52.974 "unmap": true, 00:22:52.974 "flush": true, 00:22:52.974 "reset": true, 00:22:52.974 "nvme_admin": false, 00:22:52.974 "nvme_io": false, 00:22:52.974 "nvme_io_md": false, 00:22:52.974 "write_zeroes": true, 00:22:52.974 "zcopy": true, 00:22:52.974 "get_zone_info": false, 00:22:52.974 "zone_management": false, 00:22:52.974 "zone_append": false, 00:22:52.974 "compare": false, 00:22:52.974 "compare_and_write": false, 00:22:52.974 "abort": true, 00:22:52.974 "seek_hole": false, 00:22:52.974 "seek_data": false, 00:22:52.974 "copy": true, 00:22:52.974 "nvme_iov_md": false 00:22:52.974 }, 00:22:52.974 "memory_domains": [ 00:22:52.974 { 00:22:52.974 "dma_device_id": "system", 00:22:52.974 "dma_device_type": 1 00:22:52.974 }, 00:22:52.974 { 00:22:52.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:52.974 "dma_device_type": 2 00:22:52.974 } 00:22:52.974 ], 00:22:52.974 "driver_specific": {} 00:22:52.974 } 00:22:52.974 ] 00:22:52.974 02:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:52.974 02:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:52.974 02:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:52.974 02:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:22:52.974 02:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:52.974 02:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:52.974 02:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:52.974 02:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:52.974 02:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:52.974 02:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:52.974 02:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:52.974 02:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:52.974 02:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:52.974 02:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.974 02:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:53.244 02:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:53.244 "name": "Existed_Raid", 00:22:53.244 "uuid": "499371b2-a7e1-4dd3-9b4d-1992131130e5", 00:22:53.244 "strip_size_kb": 64, 00:22:53.244 "state": "online", 00:22:53.244 "raid_level": "concat", 00:22:53.244 "superblock": false, 
00:22:53.244 "num_base_bdevs": 4, 00:22:53.244 "num_base_bdevs_discovered": 4, 00:22:53.244 "num_base_bdevs_operational": 4, 00:22:53.244 "base_bdevs_list": [ 00:22:53.244 { 00:22:53.244 "name": "BaseBdev1", 00:22:53.244 "uuid": "de133773-52a9-41af-b669-3d5c98c48304", 00:22:53.244 "is_configured": true, 00:22:53.244 "data_offset": 0, 00:22:53.244 "data_size": 65536 00:22:53.244 }, 00:22:53.244 { 00:22:53.244 "name": "BaseBdev2", 00:22:53.244 "uuid": "3f35df7b-fa4b-44e3-8e78-6920e8ec93c8", 00:22:53.244 "is_configured": true, 00:22:53.244 "data_offset": 0, 00:22:53.244 "data_size": 65536 00:22:53.244 }, 00:22:53.244 { 00:22:53.244 "name": "BaseBdev3", 00:22:53.244 "uuid": "357e19f5-25d6-425d-a434-66e259c14fcb", 00:22:53.244 "is_configured": true, 00:22:53.244 "data_offset": 0, 00:22:53.244 "data_size": 65536 00:22:53.244 }, 00:22:53.244 { 00:22:53.244 "name": "BaseBdev4", 00:22:53.244 "uuid": "2ec11f8a-e6cd-4315-883b-a760e7c1da78", 00:22:53.244 "is_configured": true, 00:22:53.244 "data_offset": 0, 00:22:53.244 "data_size": 65536 00:22:53.244 } 00:22:53.244 ] 00:22:53.244 }' 00:22:53.244 02:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:53.244 02:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.812 02:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:53.812 02:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:53.812 02:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:53.812 02:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:53.812 02:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:53.812 02:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:53.812 02:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:53.812 02:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:54.070 [2024-07-11 02:28:44.339135] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:54.071 02:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:54.071 "name": "Existed_Raid", 00:22:54.071 "aliases": [ 00:22:54.071 "499371b2-a7e1-4dd3-9b4d-1992131130e5" 00:22:54.071 ], 00:22:54.071 "product_name": "Raid Volume", 00:22:54.071 "block_size": 512, 00:22:54.071 "num_blocks": 262144, 00:22:54.071 "uuid": "499371b2-a7e1-4dd3-9b4d-1992131130e5", 00:22:54.071 "assigned_rate_limits": { 00:22:54.071 "rw_ios_per_sec": 0, 00:22:54.071 "rw_mbytes_per_sec": 0, 00:22:54.071 "r_mbytes_per_sec": 0, 00:22:54.071 "w_mbytes_per_sec": 0 00:22:54.071 }, 00:22:54.071 "claimed": false, 00:22:54.071 "zoned": false, 00:22:54.071 "supported_io_types": { 00:22:54.071 "read": true, 00:22:54.071 "write": true, 00:22:54.071 "unmap": true, 00:22:54.071 "flush": true, 00:22:54.071 "reset": true, 00:22:54.071 "nvme_admin": false, 00:22:54.071 "nvme_io": false, 00:22:54.071 "nvme_io_md": false, 00:22:54.071 "write_zeroes": true, 00:22:54.071 "zcopy": false, 00:22:54.071 "get_zone_info": false, 00:22:54.071 "zone_management": false, 00:22:54.071 "zone_append": false, 00:22:54.071 "compare": false, 00:22:54.071 
"compare_and_write": false, 00:22:54.071 "abort": false, 00:22:54.071 "seek_hole": false, 00:22:54.071 "seek_data": false, 00:22:54.071 "copy": false, 00:22:54.071 "nvme_iov_md": false 00:22:54.071 }, 00:22:54.071 "memory_domains": [ 00:22:54.071 { 00:22:54.071 "dma_device_id": "system", 00:22:54.071 "dma_device_type": 1 00:22:54.071 }, 00:22:54.071 { 00:22:54.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.071 "dma_device_type": 2 00:22:54.071 }, 00:22:54.071 { 00:22:54.071 "dma_device_id": "system", 00:22:54.071 "dma_device_type": 1 00:22:54.071 }, 00:22:54.071 { 00:22:54.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.071 "dma_device_type": 2 00:22:54.071 }, 00:22:54.071 { 00:22:54.071 "dma_device_id": "system", 00:22:54.071 "dma_device_type": 1 00:22:54.071 }, 00:22:54.071 { 00:22:54.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.071 "dma_device_type": 2 00:22:54.071 }, 00:22:54.071 { 00:22:54.071 "dma_device_id": "system", 00:22:54.071 "dma_device_type": 1 00:22:54.071 }, 00:22:54.071 { 00:22:54.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.071 "dma_device_type": 2 00:22:54.071 } 00:22:54.071 ], 00:22:54.071 "driver_specific": { 00:22:54.071 "raid": { 00:22:54.071 "uuid": "499371b2-a7e1-4dd3-9b4d-1992131130e5", 00:22:54.071 "strip_size_kb": 64, 00:22:54.071 "state": "online", 00:22:54.071 "raid_level": "concat", 00:22:54.071 "superblock": false, 00:22:54.071 "num_base_bdevs": 4, 00:22:54.071 "num_base_bdevs_discovered": 4, 00:22:54.071 "num_base_bdevs_operational": 4, 00:22:54.071 "base_bdevs_list": [ 00:22:54.071 { 00:22:54.071 "name": "BaseBdev1", 00:22:54.071 "uuid": "de133773-52a9-41af-b669-3d5c98c48304", 00:22:54.071 "is_configured": true, 00:22:54.071 "data_offset": 0, 00:22:54.071 "data_size": 65536 00:22:54.071 }, 00:22:54.071 { 00:22:54.071 "name": "BaseBdev2", 00:22:54.071 "uuid": "3f35df7b-fa4b-44e3-8e78-6920e8ec93c8", 00:22:54.071 "is_configured": true, 00:22:54.071 "data_offset": 0, 00:22:54.071 "data_size": 65536 00:22:54.071 }, 00:22:54.071 { 00:22:54.071 "name": "BaseBdev3", 00:22:54.071 "uuid": "357e19f5-25d6-425d-a434-66e259c14fcb", 00:22:54.071 "is_configured": true, 00:22:54.071 "data_offset": 0, 00:22:54.071 "data_size": 65536 00:22:54.071 }, 00:22:54.071 { 00:22:54.071 "name": "BaseBdev4", 00:22:54.071 "uuid": "2ec11f8a-e6cd-4315-883b-a760e7c1da78", 00:22:54.071 "is_configured": true, 00:22:54.071 "data_offset": 0, 00:22:54.071 "data_size": 65536 00:22:54.071 } 00:22:54.071 ] 00:22:54.071 } 00:22:54.071 } 00:22:54.071 }' 00:22:54.071 02:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:54.071 02:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:54.071 BaseBdev2 00:22:54.071 BaseBdev3 00:22:54.071 BaseBdev4' 00:22:54.071 02:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:54.071 02:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:54.071 02:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:54.330 02:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:54.330 "name": "BaseBdev1", 00:22:54.330 "aliases": [ 00:22:54.330 "de133773-52a9-41af-b669-3d5c98c48304" 00:22:54.330 ], 00:22:54.330 
"product_name": "Malloc disk", 00:22:54.330 "block_size": 512, 00:22:54.330 "num_blocks": 65536, 00:22:54.330 "uuid": "de133773-52a9-41af-b669-3d5c98c48304", 00:22:54.330 "assigned_rate_limits": { 00:22:54.330 "rw_ios_per_sec": 0, 00:22:54.330 "rw_mbytes_per_sec": 0, 00:22:54.330 "r_mbytes_per_sec": 0, 00:22:54.330 "w_mbytes_per_sec": 0 00:22:54.330 }, 00:22:54.330 "claimed": true, 00:22:54.330 "claim_type": "exclusive_write", 00:22:54.330 "zoned": false, 00:22:54.330 "supported_io_types": { 00:22:54.330 "read": true, 00:22:54.330 "write": true, 00:22:54.330 "unmap": true, 00:22:54.330 "flush": true, 00:22:54.330 "reset": true, 00:22:54.330 "nvme_admin": false, 00:22:54.330 "nvme_io": false, 00:22:54.330 "nvme_io_md": false, 00:22:54.330 "write_zeroes": true, 00:22:54.330 "zcopy": true, 00:22:54.330 "get_zone_info": false, 00:22:54.330 "zone_management": false, 00:22:54.330 "zone_append": false, 00:22:54.330 "compare": false, 00:22:54.330 "compare_and_write": false, 00:22:54.330 "abort": true, 00:22:54.330 "seek_hole": false, 00:22:54.330 "seek_data": false, 00:22:54.330 "copy": true, 00:22:54.330 "nvme_iov_md": false 00:22:54.330 }, 00:22:54.330 "memory_domains": [ 00:22:54.330 { 00:22:54.330 "dma_device_id": "system", 00:22:54.330 "dma_device_type": 1 00:22:54.330 }, 00:22:54.330 { 00:22:54.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.330 "dma_device_type": 2 00:22:54.330 } 00:22:54.330 ], 00:22:54.330 "driver_specific": {} 00:22:54.330 }' 00:22:54.330 02:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.330 02:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.588 02:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:54.588 02:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:54.588 02:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:54.588 02:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:54.588 02:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:54.588 02:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:54.588 02:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:54.588 02:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:54.588 02:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:54.846 02:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:54.846 02:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:54.846 02:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:54.846 02:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:55.105 02:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:55.105 "name": "BaseBdev2", 00:22:55.105 "aliases": [ 00:22:55.105 "3f35df7b-fa4b-44e3-8e78-6920e8ec93c8" 00:22:55.105 ], 00:22:55.105 "product_name": "Malloc disk", 00:22:55.105 "block_size": 512, 00:22:55.105 "num_blocks": 65536, 00:22:55.105 "uuid": "3f35df7b-fa4b-44e3-8e78-6920e8ec93c8", 00:22:55.105 
"assigned_rate_limits": { 00:22:55.105 "rw_ios_per_sec": 0, 00:22:55.105 "rw_mbytes_per_sec": 0, 00:22:55.105 "r_mbytes_per_sec": 0, 00:22:55.105 "w_mbytes_per_sec": 0 00:22:55.105 }, 00:22:55.105 "claimed": true, 00:22:55.105 "claim_type": "exclusive_write", 00:22:55.105 "zoned": false, 00:22:55.105 "supported_io_types": { 00:22:55.105 "read": true, 00:22:55.105 "write": true, 00:22:55.105 "unmap": true, 00:22:55.105 "flush": true, 00:22:55.105 "reset": true, 00:22:55.105 "nvme_admin": false, 00:22:55.105 "nvme_io": false, 00:22:55.105 "nvme_io_md": false, 00:22:55.105 "write_zeroes": true, 00:22:55.105 "zcopy": true, 00:22:55.105 "get_zone_info": false, 00:22:55.105 "zone_management": false, 00:22:55.105 "zone_append": false, 00:22:55.105 "compare": false, 00:22:55.105 "compare_and_write": false, 00:22:55.105 "abort": true, 00:22:55.105 "seek_hole": false, 00:22:55.105 "seek_data": false, 00:22:55.105 "copy": true, 00:22:55.105 "nvme_iov_md": false 00:22:55.105 }, 00:22:55.105 "memory_domains": [ 00:22:55.105 { 00:22:55.105 "dma_device_id": "system", 00:22:55.105 "dma_device_type": 1 00:22:55.105 }, 00:22:55.105 { 00:22:55.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:55.105 "dma_device_type": 2 00:22:55.105 } 00:22:55.105 ], 00:22:55.105 "driver_specific": {} 00:22:55.105 }' 00:22:55.105 02:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:55.105 02:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:55.105 02:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:55.105 02:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:55.105 02:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:55.105 02:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:55.105 02:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:55.105 02:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:55.363 02:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:55.363 02:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:55.363 02:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:55.363 02:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:55.363 02:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:55.363 02:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:55.363 02:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:55.622 02:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:55.622 "name": "BaseBdev3", 00:22:55.622 "aliases": [ 00:22:55.622 "357e19f5-25d6-425d-a434-66e259c14fcb" 00:22:55.622 ], 00:22:55.622 "product_name": "Malloc disk", 00:22:55.622 "block_size": 512, 00:22:55.622 "num_blocks": 65536, 00:22:55.622 "uuid": "357e19f5-25d6-425d-a434-66e259c14fcb", 00:22:55.622 "assigned_rate_limits": { 00:22:55.622 "rw_ios_per_sec": 0, 00:22:55.622 "rw_mbytes_per_sec": 0, 00:22:55.622 "r_mbytes_per_sec": 0, 00:22:55.622 "w_mbytes_per_sec": 0 00:22:55.622 
}, 00:22:55.622 "claimed": true, 00:22:55.622 "claim_type": "exclusive_write", 00:22:55.622 "zoned": false, 00:22:55.622 "supported_io_types": { 00:22:55.622 "read": true, 00:22:55.622 "write": true, 00:22:55.622 "unmap": true, 00:22:55.622 "flush": true, 00:22:55.622 "reset": true, 00:22:55.622 "nvme_admin": false, 00:22:55.622 "nvme_io": false, 00:22:55.622 "nvme_io_md": false, 00:22:55.622 "write_zeroes": true, 00:22:55.622 "zcopy": true, 00:22:55.622 "get_zone_info": false, 00:22:55.622 "zone_management": false, 00:22:55.622 "zone_append": false, 00:22:55.622 "compare": false, 00:22:55.622 "compare_and_write": false, 00:22:55.622 "abort": true, 00:22:55.622 "seek_hole": false, 00:22:55.622 "seek_data": false, 00:22:55.622 "copy": true, 00:22:55.622 "nvme_iov_md": false 00:22:55.622 }, 00:22:55.622 "memory_domains": [ 00:22:55.622 { 00:22:55.622 "dma_device_id": "system", 00:22:55.622 "dma_device_type": 1 00:22:55.622 }, 00:22:55.622 { 00:22:55.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:55.622 "dma_device_type": 2 00:22:55.622 } 00:22:55.622 ], 00:22:55.622 "driver_specific": {} 00:22:55.622 }' 00:22:55.622 02:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:55.622 02:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:55.622 02:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:55.622 02:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:55.622 02:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:55.881 02:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:55.881 02:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:55.881 02:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:55.881 02:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:55.881 02:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:55.881 02:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:55.881 02:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:55.881 02:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:55.881 02:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:55.881 02:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:56.140 02:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:56.140 "name": "BaseBdev4", 00:22:56.140 "aliases": [ 00:22:56.140 "2ec11f8a-e6cd-4315-883b-a760e7c1da78" 00:22:56.140 ], 00:22:56.140 "product_name": "Malloc disk", 00:22:56.140 "block_size": 512, 00:22:56.140 "num_blocks": 65536, 00:22:56.140 "uuid": "2ec11f8a-e6cd-4315-883b-a760e7c1da78", 00:22:56.140 "assigned_rate_limits": { 00:22:56.140 "rw_ios_per_sec": 0, 00:22:56.140 "rw_mbytes_per_sec": 0, 00:22:56.140 "r_mbytes_per_sec": 0, 00:22:56.140 "w_mbytes_per_sec": 0 00:22:56.140 }, 00:22:56.140 "claimed": true, 00:22:56.140 "claim_type": "exclusive_write", 00:22:56.140 "zoned": false, 00:22:56.140 "supported_io_types": { 00:22:56.140 "read": true, 
00:22:56.140 "write": true, 00:22:56.140 "unmap": true, 00:22:56.140 "flush": true, 00:22:56.140 "reset": true, 00:22:56.140 "nvme_admin": false, 00:22:56.140 "nvme_io": false, 00:22:56.140 "nvme_io_md": false, 00:22:56.140 "write_zeroes": true, 00:22:56.140 "zcopy": true, 00:22:56.140 "get_zone_info": false, 00:22:56.140 "zone_management": false, 00:22:56.140 "zone_append": false, 00:22:56.141 "compare": false, 00:22:56.141 "compare_and_write": false, 00:22:56.141 "abort": true, 00:22:56.141 "seek_hole": false, 00:22:56.141 "seek_data": false, 00:22:56.141 "copy": true, 00:22:56.141 "nvme_iov_md": false 00:22:56.141 }, 00:22:56.141 "memory_domains": [ 00:22:56.141 { 00:22:56.141 "dma_device_id": "system", 00:22:56.141 "dma_device_type": 1 00:22:56.141 }, 00:22:56.141 { 00:22:56.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:56.141 "dma_device_type": 2 00:22:56.141 } 00:22:56.141 ], 00:22:56.141 "driver_specific": {} 00:22:56.141 }' 00:22:56.141 02:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:56.141 02:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:56.400 02:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:56.400 02:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:56.400 02:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:56.400 02:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:56.400 02:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:56.400 02:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:56.400 02:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:56.400 02:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:56.400 02:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:56.659 02:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:56.659 02:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:56.659 [2024-07-11 02:28:47.062092] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:56.659 [2024-07-11 02:28:47.062121] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:56.659 [2024-07-11 02:28:47.062170] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:56.659 02:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:56.659 02:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:22:56.918 02:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:56.918 02:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:56.918 02:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:22:56.918 02:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:22:56.918 02:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:56.918 02:28:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:22:56.918 02:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:56.918 02:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:56.918 02:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:56.918 02:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:56.918 02:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:56.918 02:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:56.918 02:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:56.918 02:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.918 02:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:56.918 02:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:56.918 "name": "Existed_Raid", 00:22:56.918 "uuid": "499371b2-a7e1-4dd3-9b4d-1992131130e5", 00:22:56.918 "strip_size_kb": 64, 00:22:56.918 "state": "offline", 00:22:56.918 "raid_level": "concat", 00:22:56.918 "superblock": false, 00:22:56.919 "num_base_bdevs": 4, 00:22:56.919 "num_base_bdevs_discovered": 3, 00:22:56.919 "num_base_bdevs_operational": 3, 00:22:56.919 "base_bdevs_list": [ 00:22:56.919 { 00:22:56.919 "name": null, 00:22:56.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:56.919 "is_configured": false, 00:22:56.919 "data_offset": 0, 00:22:56.919 "data_size": 65536 00:22:56.919 }, 00:22:56.919 { 00:22:56.919 "name": "BaseBdev2", 00:22:56.919 "uuid": "3f35df7b-fa4b-44e3-8e78-6920e8ec93c8", 00:22:56.919 "is_configured": true, 00:22:56.919 "data_offset": 0, 00:22:56.919 "data_size": 65536 00:22:56.919 }, 00:22:56.919 { 00:22:56.919 "name": "BaseBdev3", 00:22:56.919 "uuid": "357e19f5-25d6-425d-a434-66e259c14fcb", 00:22:56.919 "is_configured": true, 00:22:56.919 "data_offset": 0, 00:22:56.919 "data_size": 65536 00:22:56.919 }, 00:22:56.919 { 00:22:56.919 "name": "BaseBdev4", 00:22:56.919 "uuid": "2ec11f8a-e6cd-4315-883b-a760e7c1da78", 00:22:56.919 "is_configured": true, 00:22:56.919 "data_offset": 0, 00:22:56.919 "data_size": 65536 00:22:56.919 } 00:22:56.919 ] 00:22:56.919 }' 00:22:56.919 02:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:56.919 02:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.853 02:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:57.853 02:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:57.853 02:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:57.853 02:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.853 02:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:57.853 02:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' 
Existed_Raid '!=' Existed_Raid ']' 00:22:57.853 02:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:58.112 [2024-07-11 02:28:48.439095] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:58.112 02:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:58.112 02:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:58.112 02:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.112 02:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:58.371 02:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:58.371 02:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:58.371 02:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:58.629 [2024-07-11 02:28:48.960639] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:58.629 02:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:58.629 02:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:58.629 02:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.630 02:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:58.888 02:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:58.888 02:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:58.888 02:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:22:59.148 [2024-07-11 02:28:49.476634] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:59.148 [2024-07-11 02:28:49.476679] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fe0d70 name Existed_Raid, state offline 00:22:59.148 02:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:59.148 02:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:59.148 02:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:59.148 02:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.408 02:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:59.408 02:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:59.408 02:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:22:59.408 02:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i 
= 1 )) 00:22:59.408 02:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:59.408 02:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:59.666 BaseBdev2 00:22:59.666 02:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:22:59.666 02:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:59.666 02:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:59.666 02:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:59.666 02:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:59.666 02:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:59.666 02:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:59.925 02:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:00.183 [ 00:23:00.183 { 00:23:00.183 "name": "BaseBdev2", 00:23:00.183 "aliases": [ 00:23:00.183 "d2b2e2a7-f691-4c00-831d-4912a91a8203" 00:23:00.183 ], 00:23:00.183 "product_name": "Malloc disk", 00:23:00.183 "block_size": 512, 00:23:00.183 "num_blocks": 65536, 00:23:00.183 "uuid": "d2b2e2a7-f691-4c00-831d-4912a91a8203", 00:23:00.183 "assigned_rate_limits": { 00:23:00.183 "rw_ios_per_sec": 0, 00:23:00.183 "rw_mbytes_per_sec": 0, 00:23:00.183 "r_mbytes_per_sec": 0, 00:23:00.184 "w_mbytes_per_sec": 0 00:23:00.184 }, 00:23:00.184 "claimed": false, 00:23:00.184 "zoned": false, 00:23:00.184 "supported_io_types": { 00:23:00.184 "read": true, 00:23:00.184 "write": true, 00:23:00.184 "unmap": true, 00:23:00.184 "flush": true, 00:23:00.184 "reset": true, 00:23:00.184 "nvme_admin": false, 00:23:00.184 "nvme_io": false, 00:23:00.184 "nvme_io_md": false, 00:23:00.184 "write_zeroes": true, 00:23:00.184 "zcopy": true, 00:23:00.184 "get_zone_info": false, 00:23:00.184 "zone_management": false, 00:23:00.184 "zone_append": false, 00:23:00.184 "compare": false, 00:23:00.184 "compare_and_write": false, 00:23:00.184 "abort": true, 00:23:00.184 "seek_hole": false, 00:23:00.184 "seek_data": false, 00:23:00.184 "copy": true, 00:23:00.184 "nvme_iov_md": false 00:23:00.184 }, 00:23:00.184 "memory_domains": [ 00:23:00.184 { 00:23:00.184 "dma_device_id": "system", 00:23:00.184 "dma_device_type": 1 00:23:00.184 }, 00:23:00.184 { 00:23:00.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.184 "dma_device_type": 2 00:23:00.184 } 00:23:00.184 ], 00:23:00.184 "driver_specific": {} 00:23:00.184 } 00:23:00.184 ] 00:23:00.184 02:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:00.184 02:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:00.184 02:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:00.184 02:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:00.442 BaseBdev3 00:23:00.442 02:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:23:00.442 02:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:23:00.442 02:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:00.442 02:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:00.442 02:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:00.442 02:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:00.442 02:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:00.701 02:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:00.960 [ 00:23:00.960 { 00:23:00.960 "name": "BaseBdev3", 00:23:00.960 "aliases": [ 00:23:00.960 "63513400-5589-4ab8-be1f-f0471fd3400f" 00:23:00.960 ], 00:23:00.960 "product_name": "Malloc disk", 00:23:00.960 "block_size": 512, 00:23:00.960 "num_blocks": 65536, 00:23:00.960 "uuid": "63513400-5589-4ab8-be1f-f0471fd3400f", 00:23:00.960 "assigned_rate_limits": { 00:23:00.960 "rw_ios_per_sec": 0, 00:23:00.960 "rw_mbytes_per_sec": 0, 00:23:00.960 "r_mbytes_per_sec": 0, 00:23:00.960 "w_mbytes_per_sec": 0 00:23:00.960 }, 00:23:00.960 "claimed": false, 00:23:00.960 "zoned": false, 00:23:00.960 "supported_io_types": { 00:23:00.960 "read": true, 00:23:00.960 "write": true, 00:23:00.960 "unmap": true, 00:23:00.960 "flush": true, 00:23:00.960 "reset": true, 00:23:00.960 "nvme_admin": false, 00:23:00.960 "nvme_io": false, 00:23:00.960 "nvme_io_md": false, 00:23:00.960 "write_zeroes": true, 00:23:00.960 "zcopy": true, 00:23:00.960 "get_zone_info": false, 00:23:00.960 "zone_management": false, 00:23:00.960 "zone_append": false, 00:23:00.960 "compare": false, 00:23:00.960 "compare_and_write": false, 00:23:00.960 "abort": true, 00:23:00.960 "seek_hole": false, 00:23:00.960 "seek_data": false, 00:23:00.960 "copy": true, 00:23:00.960 "nvme_iov_md": false 00:23:00.960 }, 00:23:00.960 "memory_domains": [ 00:23:00.960 { 00:23:00.960 "dma_device_id": "system", 00:23:00.960 "dma_device_type": 1 00:23:00.960 }, 00:23:00.960 { 00:23:00.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.960 "dma_device_type": 2 00:23:00.960 } 00:23:00.960 ], 00:23:00.960 "driver_specific": {} 00:23:00.960 } 00:23:00.960 ] 00:23:00.960 02:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:00.960 02:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:00.960 02:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:00.960 02:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:01.219 BaseBdev4 00:23:01.219 02:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:23:01.219 02:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # 
local bdev_name=BaseBdev4 00:23:01.219 02:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:01.219 02:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:01.219 02:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:01.219 02:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:01.219 02:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:01.477 02:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:01.736 [ 00:23:01.736 { 00:23:01.736 "name": "BaseBdev4", 00:23:01.736 "aliases": [ 00:23:01.736 "6c589270-c34b-409e-bdf8-c0c823833116" 00:23:01.736 ], 00:23:01.736 "product_name": "Malloc disk", 00:23:01.736 "block_size": 512, 00:23:01.736 "num_blocks": 65536, 00:23:01.736 "uuid": "6c589270-c34b-409e-bdf8-c0c823833116", 00:23:01.736 "assigned_rate_limits": { 00:23:01.736 "rw_ios_per_sec": 0, 00:23:01.736 "rw_mbytes_per_sec": 0, 00:23:01.736 "r_mbytes_per_sec": 0, 00:23:01.736 "w_mbytes_per_sec": 0 00:23:01.736 }, 00:23:01.736 "claimed": false, 00:23:01.736 "zoned": false, 00:23:01.736 "supported_io_types": { 00:23:01.736 "read": true, 00:23:01.736 "write": true, 00:23:01.736 "unmap": true, 00:23:01.736 "flush": true, 00:23:01.736 "reset": true, 00:23:01.736 "nvme_admin": false, 00:23:01.736 "nvme_io": false, 00:23:01.736 "nvme_io_md": false, 00:23:01.736 "write_zeroes": true, 00:23:01.736 "zcopy": true, 00:23:01.736 "get_zone_info": false, 00:23:01.736 "zone_management": false, 00:23:01.736 "zone_append": false, 00:23:01.736 "compare": false, 00:23:01.736 "compare_and_write": false, 00:23:01.736 "abort": true, 00:23:01.736 "seek_hole": false, 00:23:01.736 "seek_data": false, 00:23:01.736 "copy": true, 00:23:01.736 "nvme_iov_md": false 00:23:01.736 }, 00:23:01.736 "memory_domains": [ 00:23:01.736 { 00:23:01.736 "dma_device_id": "system", 00:23:01.736 "dma_device_type": 1 00:23:01.736 }, 00:23:01.736 { 00:23:01.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:01.736 "dma_device_type": 2 00:23:01.736 } 00:23:01.736 ], 00:23:01.736 "driver_specific": {} 00:23:01.736 } 00:23:01.736 ] 00:23:01.736 02:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:01.736 02:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:01.736 02:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:01.736 02:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:01.995 [2024-07-11 02:28:52.178325] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:01.995 [2024-07-11 02:28:52.178371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:01.995 [2024-07-11 02:28:52.178391] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:01.995 [2024-07-11 02:28:52.179713] 
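The bdev_raid_create call above names four base bdevs while only three malloc bdevs exist, which is the point of this step: the raid module logs the missing member ("BaseBdev1 doesn't exist now"), claims the three that are present, and parks the array in "configuring" until the fourth appears. A condensed sketch of the same partial create, assuming BaseBdev2-BaseBdev4 already exist as created above:

rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Register the array before all of its members exist; -z 64 is the strip size
# in KiB, -r concat the raid level, -n the raid bdev name.
$rpc bdev_raid_create -z 64 -r concat \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

# With one member missing the array is registered but not brought online.
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
# expected: configuring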
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:01.995 [2024-07-11 02:28:52.179755] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:01.995 02:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:01.995 02:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:01.995 02:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:01.995 02:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:01.995 02:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:01.995 02:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:01.995 02:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:01.995 02:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:01.995 02:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:01.995 02:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:01.995 02:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.995 02:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:02.255 02:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:02.255 "name": "Existed_Raid", 00:23:02.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.255 "strip_size_kb": 64, 00:23:02.255 "state": "configuring", 00:23:02.255 "raid_level": "concat", 00:23:02.255 "superblock": false, 00:23:02.255 "num_base_bdevs": 4, 00:23:02.255 "num_base_bdevs_discovered": 3, 00:23:02.255 "num_base_bdevs_operational": 4, 00:23:02.255 "base_bdevs_list": [ 00:23:02.255 { 00:23:02.255 "name": "BaseBdev1", 00:23:02.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.255 "is_configured": false, 00:23:02.255 "data_offset": 0, 00:23:02.255 "data_size": 0 00:23:02.255 }, 00:23:02.255 { 00:23:02.255 "name": "BaseBdev2", 00:23:02.255 "uuid": "d2b2e2a7-f691-4c00-831d-4912a91a8203", 00:23:02.255 "is_configured": true, 00:23:02.255 "data_offset": 0, 00:23:02.255 "data_size": 65536 00:23:02.255 }, 00:23:02.255 { 00:23:02.255 "name": "BaseBdev3", 00:23:02.255 "uuid": "63513400-5589-4ab8-be1f-f0471fd3400f", 00:23:02.255 "is_configured": true, 00:23:02.255 "data_offset": 0, 00:23:02.255 "data_size": 65536 00:23:02.255 }, 00:23:02.255 { 00:23:02.255 "name": "BaseBdev4", 00:23:02.255 "uuid": "6c589270-c34b-409e-bdf8-c0c823833116", 00:23:02.255 "is_configured": true, 00:23:02.255 "data_offset": 0, 00:23:02.255 "data_size": 65536 00:23:02.255 } 00:23:02.255 ] 00:23:02.255 }' 00:23:02.255 02:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:02.255 02:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.822 02:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 
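Next the test detaches a present member from the still-configuring array. bdev_raid_remove_base_bdev keeps the slot in base_bdevs_list but clears it ("name": null, "is_configured": false), so num_base_bdevs_discovered drops from 3 to 2 while num_base_bdevs_operational stays 4. A minimal sketch of that remove-and-verify cycle, with the same socket and names as in the log:

rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Detach BaseBdev2 from the array without deleting the malloc bdev itself.
$rpc bdev_raid_remove_base_bdev BaseBdev2

# The slot survives as an unconfigured placeholder, which bdev_raid.sh@310
# confirms with jq '.[0].base_bdevs_list[1].is_configured' == false.
$rpc bdev_raid_get_bdevs all \
    | jq '.[0] | .num_base_bdevs_discovered, .base_bdevs_list[1].is_configured'
# expected: 2 / false

The counterpart operation, bdev_raid_add_base_bdev Existed_Raid <name>, re-attaches a member into such a cleared slot and is exercised at bdev_raid.sh@321 and @329 further down in this run.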
00:23:03.081 [2024-07-11 02:28:53.277324] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:03.081 02:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:03.081 02:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:03.081 02:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:03.081 02:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:03.081 02:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:03.081 02:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:03.081 02:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:03.081 02:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:03.081 02:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:03.081 02:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:03.081 02:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.081 02:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:03.341 02:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:03.341 "name": "Existed_Raid", 00:23:03.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.341 "strip_size_kb": 64, 00:23:03.341 "state": "configuring", 00:23:03.341 "raid_level": "concat", 00:23:03.341 "superblock": false, 00:23:03.341 "num_base_bdevs": 4, 00:23:03.341 "num_base_bdevs_discovered": 2, 00:23:03.341 "num_base_bdevs_operational": 4, 00:23:03.341 "base_bdevs_list": [ 00:23:03.341 { 00:23:03.341 "name": "BaseBdev1", 00:23:03.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.341 "is_configured": false, 00:23:03.341 "data_offset": 0, 00:23:03.341 "data_size": 0 00:23:03.341 }, 00:23:03.341 { 00:23:03.341 "name": null, 00:23:03.341 "uuid": "d2b2e2a7-f691-4c00-831d-4912a91a8203", 00:23:03.341 "is_configured": false, 00:23:03.341 "data_offset": 0, 00:23:03.341 "data_size": 65536 00:23:03.341 }, 00:23:03.341 { 00:23:03.341 "name": "BaseBdev3", 00:23:03.341 "uuid": "63513400-5589-4ab8-be1f-f0471fd3400f", 00:23:03.341 "is_configured": true, 00:23:03.341 "data_offset": 0, 00:23:03.341 "data_size": 65536 00:23:03.341 }, 00:23:03.341 { 00:23:03.341 "name": "BaseBdev4", 00:23:03.341 "uuid": "6c589270-c34b-409e-bdf8-c0c823833116", 00:23:03.341 "is_configured": true, 00:23:03.341 "data_offset": 0, 00:23:03.341 "data_size": 65536 00:23:03.341 } 00:23:03.341 ] 00:23:03.341 }' 00:23:03.341 02:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:03.341 02:28:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.908 02:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.908 02:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:23:04.166 02:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:23:04.167 02:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:04.167 [2024-07-11 02:28:54.568109] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:04.167 BaseBdev1 00:23:04.426 02:28:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:23:04.426 02:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:04.426 02:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:04.426 02:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:04.426 02:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:04.426 02:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:04.426 02:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:04.686 02:28:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:04.686 [ 00:23:04.686 { 00:23:04.686 "name": "BaseBdev1", 00:23:04.686 "aliases": [ 00:23:04.686 "b5bae9d2-ec8b-41e7-94af-ecadb4bbf752" 00:23:04.686 ], 00:23:04.686 "product_name": "Malloc disk", 00:23:04.686 "block_size": 512, 00:23:04.686 "num_blocks": 65536, 00:23:04.686 "uuid": "b5bae9d2-ec8b-41e7-94af-ecadb4bbf752", 00:23:04.686 "assigned_rate_limits": { 00:23:04.686 "rw_ios_per_sec": 0, 00:23:04.686 "rw_mbytes_per_sec": 0, 00:23:04.686 "r_mbytes_per_sec": 0, 00:23:04.686 "w_mbytes_per_sec": 0 00:23:04.686 }, 00:23:04.686 "claimed": true, 00:23:04.686 "claim_type": "exclusive_write", 00:23:04.686 "zoned": false, 00:23:04.686 "supported_io_types": { 00:23:04.686 "read": true, 00:23:04.686 "write": true, 00:23:04.686 "unmap": true, 00:23:04.686 "flush": true, 00:23:04.686 "reset": true, 00:23:04.686 "nvme_admin": false, 00:23:04.686 "nvme_io": false, 00:23:04.686 "nvme_io_md": false, 00:23:04.686 "write_zeroes": true, 00:23:04.686 "zcopy": true, 00:23:04.686 "get_zone_info": false, 00:23:04.686 "zone_management": false, 00:23:04.686 "zone_append": false, 00:23:04.686 "compare": false, 00:23:04.686 "compare_and_write": false, 00:23:04.686 "abort": true, 00:23:04.686 "seek_hole": false, 00:23:04.686 "seek_data": false, 00:23:04.686 "copy": true, 00:23:04.686 "nvme_iov_md": false 00:23:04.686 }, 00:23:04.686 "memory_domains": [ 00:23:04.686 { 00:23:04.686 "dma_device_id": "system", 00:23:04.686 "dma_device_type": 1 00:23:04.686 }, 00:23:04.686 { 00:23:04.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:04.686 "dma_device_type": 2 00:23:04.686 } 00:23:04.686 ], 00:23:04.686 "driver_specific": {} 00:23:04.686 } 00:23:04.686 ] 00:23:04.686 02:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:04.686 02:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:04.686 02:28:55 
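The creation of BaseBdev1 above shows the examine path: a configuring raid remembers which names it is waiting for, so merely creating a malloc bdev with the right name gets it claimed automatically ("bdev BaseBdev1 is claimed"), with no explicit add call. The waitforbdev helper then polls until the bdev is registered. A sketch of that create-and-wait recipe with the parameters used throughout this run (32 MiB, 512-byte blocks, 2000 ms timeout):

rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# 32 MiB at 512 B blocks -> the 65536 x 512 geometry seen in the dumps above.
$rpc bdev_malloc_create 32 512 -b BaseBdev1
$rpc bdev_wait_for_examine

# bdev_get_bdevs with -t waits up to 2000 ms for the named bdev to appear.
$rpc bdev_get_bdevs -b BaseBdev1 -t 2000

# The freshly claimed member now shows up configured in slot 0.
$rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[0].is_configured'
# expected: true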
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:04.686 02:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:04.686 02:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:04.686 02:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:04.686 02:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:04.686 02:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:04.686 02:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:04.686 02:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:04.686 02:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:04.686 02:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.686 02:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:04.946 02:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:04.946 "name": "Existed_Raid", 00:23:04.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.946 "strip_size_kb": 64, 00:23:04.946 "state": "configuring", 00:23:04.946 "raid_level": "concat", 00:23:04.946 "superblock": false, 00:23:04.946 "num_base_bdevs": 4, 00:23:04.946 "num_base_bdevs_discovered": 3, 00:23:04.946 "num_base_bdevs_operational": 4, 00:23:04.946 "base_bdevs_list": [ 00:23:04.946 { 00:23:04.946 "name": "BaseBdev1", 00:23:04.946 "uuid": "b5bae9d2-ec8b-41e7-94af-ecadb4bbf752", 00:23:04.946 "is_configured": true, 00:23:04.946 "data_offset": 0, 00:23:04.946 "data_size": 65536 00:23:04.946 }, 00:23:04.946 { 00:23:04.946 "name": null, 00:23:04.946 "uuid": "d2b2e2a7-f691-4c00-831d-4912a91a8203", 00:23:04.946 "is_configured": false, 00:23:04.946 "data_offset": 0, 00:23:04.946 "data_size": 65536 00:23:04.946 }, 00:23:04.946 { 00:23:04.946 "name": "BaseBdev3", 00:23:04.946 "uuid": "63513400-5589-4ab8-be1f-f0471fd3400f", 00:23:04.946 "is_configured": true, 00:23:04.946 "data_offset": 0, 00:23:04.946 "data_size": 65536 00:23:04.946 }, 00:23:04.946 { 00:23:04.946 "name": "BaseBdev4", 00:23:04.946 "uuid": "6c589270-c34b-409e-bdf8-c0c823833116", 00:23:04.946 "is_configured": true, 00:23:04.946 "data_offset": 0, 00:23:04.946 "data_size": 65536 00:23:04.946 } 00:23:04.946 ] 00:23:04.946 }' 00:23:04.946 02:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:04.946 02:28:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.514 02:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.514 02:28:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:05.773 02:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:23:05.773 02:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:23:06.032 [2024-07-11 02:28:56.344874] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:06.032 02:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:06.032 02:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:06.032 02:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:06.032 02:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:06.032 02:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:06.032 02:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:06.032 02:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:06.032 02:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:06.032 02:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:06.032 02:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:06.032 02:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.032 02:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:06.292 02:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:06.292 "name": "Existed_Raid", 00:23:06.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.292 "strip_size_kb": 64, 00:23:06.292 "state": "configuring", 00:23:06.292 "raid_level": "concat", 00:23:06.292 "superblock": false, 00:23:06.292 "num_base_bdevs": 4, 00:23:06.292 "num_base_bdevs_discovered": 2, 00:23:06.292 "num_base_bdevs_operational": 4, 00:23:06.292 "base_bdevs_list": [ 00:23:06.292 { 00:23:06.292 "name": "BaseBdev1", 00:23:06.292 "uuid": "b5bae9d2-ec8b-41e7-94af-ecadb4bbf752", 00:23:06.292 "is_configured": true, 00:23:06.292 "data_offset": 0, 00:23:06.292 "data_size": 65536 00:23:06.292 }, 00:23:06.292 { 00:23:06.292 "name": null, 00:23:06.292 "uuid": "d2b2e2a7-f691-4c00-831d-4912a91a8203", 00:23:06.292 "is_configured": false, 00:23:06.292 "data_offset": 0, 00:23:06.292 "data_size": 65536 00:23:06.292 }, 00:23:06.292 { 00:23:06.292 "name": null, 00:23:06.292 "uuid": "63513400-5589-4ab8-be1f-f0471fd3400f", 00:23:06.292 "is_configured": false, 00:23:06.292 "data_offset": 0, 00:23:06.292 "data_size": 65536 00:23:06.292 }, 00:23:06.292 { 00:23:06.292 "name": "BaseBdev4", 00:23:06.292 "uuid": "6c589270-c34b-409e-bdf8-c0c823833116", 00:23:06.292 "is_configured": true, 00:23:06.292 "data_offset": 0, 00:23:06.292 "data_size": 65536 00:23:06.292 } 00:23:06.292 ] 00:23:06.292 }' 00:23:06.292 02:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:06.292 02:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.861 02:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.861 02:28:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:07.120 02:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:23:07.120 02:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:07.379 [2024-07-11 02:28:57.708485] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:07.379 02:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:07.379 02:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:07.379 02:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:07.379 02:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:07.379 02:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:07.379 02:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:07.380 02:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:07.380 02:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:07.380 02:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:07.380 02:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:07.380 02:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.380 02:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:07.639 02:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:07.639 "name": "Existed_Raid", 00:23:07.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.639 "strip_size_kb": 64, 00:23:07.639 "state": "configuring", 00:23:07.639 "raid_level": "concat", 00:23:07.639 "superblock": false, 00:23:07.639 "num_base_bdevs": 4, 00:23:07.639 "num_base_bdevs_discovered": 3, 00:23:07.639 "num_base_bdevs_operational": 4, 00:23:07.639 "base_bdevs_list": [ 00:23:07.639 { 00:23:07.639 "name": "BaseBdev1", 00:23:07.639 "uuid": "b5bae9d2-ec8b-41e7-94af-ecadb4bbf752", 00:23:07.639 "is_configured": true, 00:23:07.639 "data_offset": 0, 00:23:07.639 "data_size": 65536 00:23:07.639 }, 00:23:07.639 { 00:23:07.639 "name": null, 00:23:07.639 "uuid": "d2b2e2a7-f691-4c00-831d-4912a91a8203", 00:23:07.639 "is_configured": false, 00:23:07.639 "data_offset": 0, 00:23:07.639 "data_size": 65536 00:23:07.639 }, 00:23:07.639 { 00:23:07.639 "name": "BaseBdev3", 00:23:07.639 "uuid": "63513400-5589-4ab8-be1f-f0471fd3400f", 00:23:07.639 "is_configured": true, 00:23:07.639 "data_offset": 0, 00:23:07.639 "data_size": 65536 00:23:07.639 }, 00:23:07.639 { 00:23:07.639 "name": "BaseBdev4", 00:23:07.639 "uuid": "6c589270-c34b-409e-bdf8-c0c823833116", 00:23:07.639 "is_configured": true, 00:23:07.639 "data_offset": 0, 00:23:07.639 "data_size": 65536 00:23:07.639 } 00:23:07.639 ] 00:23:07.639 }' 00:23:07.639 02:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:23:07.639 02:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.207 02:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.207 02:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:08.466 02:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:23:08.466 02:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:08.726 [2024-07-11 02:28:59.024210] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:08.726 02:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:08.726 02:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:08.726 02:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:08.726 02:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:08.726 02:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:08.726 02:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:08.726 02:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:08.726 02:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:08.726 02:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:08.726 02:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:08.726 02:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.726 02:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:08.988 02:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:08.988 "name": "Existed_Raid", 00:23:08.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.988 "strip_size_kb": 64, 00:23:08.988 "state": "configuring", 00:23:08.988 "raid_level": "concat", 00:23:08.988 "superblock": false, 00:23:08.988 "num_base_bdevs": 4, 00:23:08.988 "num_base_bdevs_discovered": 2, 00:23:08.988 "num_base_bdevs_operational": 4, 00:23:08.988 "base_bdevs_list": [ 00:23:08.988 { 00:23:08.988 "name": null, 00:23:08.988 "uuid": "b5bae9d2-ec8b-41e7-94af-ecadb4bbf752", 00:23:08.988 "is_configured": false, 00:23:08.988 "data_offset": 0, 00:23:08.988 "data_size": 65536 00:23:08.988 }, 00:23:08.988 { 00:23:08.988 "name": null, 00:23:08.988 "uuid": "d2b2e2a7-f691-4c00-831d-4912a91a8203", 00:23:08.988 "is_configured": false, 00:23:08.988 "data_offset": 0, 00:23:08.988 "data_size": 65536 00:23:08.988 }, 00:23:08.988 { 00:23:08.988 "name": "BaseBdev3", 00:23:08.988 "uuid": "63513400-5589-4ab8-be1f-f0471fd3400f", 00:23:08.988 "is_configured": true, 00:23:08.988 "data_offset": 0, 00:23:08.988 "data_size": 65536 00:23:08.988 }, 00:23:08.988 { 
00:23:08.988 "name": "BaseBdev4", 00:23:08.988 "uuid": "6c589270-c34b-409e-bdf8-c0c823833116", 00:23:08.988 "is_configured": true, 00:23:08.988 "data_offset": 0, 00:23:08.988 "data_size": 65536 00:23:08.988 } 00:23:08.988 ] 00:23:08.988 }' 00:23:08.988 02:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:08.988 02:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.557 02:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:09.557 02:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.816 02:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:23:09.816 02:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:10.075 [2024-07-11 02:29:00.378267] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:10.075 02:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:10.075 02:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:10.075 02:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:10.075 02:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:10.075 02:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:10.075 02:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:10.075 02:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:10.075 02:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:10.075 02:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:10.075 02:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:10.075 02:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.075 02:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:10.335 02:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:10.335 "name": "Existed_Raid", 00:23:10.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:10.335 "strip_size_kb": 64, 00:23:10.335 "state": "configuring", 00:23:10.335 "raid_level": "concat", 00:23:10.335 "superblock": false, 00:23:10.335 "num_base_bdevs": 4, 00:23:10.335 "num_base_bdevs_discovered": 3, 00:23:10.335 "num_base_bdevs_operational": 4, 00:23:10.335 "base_bdevs_list": [ 00:23:10.335 { 00:23:10.335 "name": null, 00:23:10.335 "uuid": "b5bae9d2-ec8b-41e7-94af-ecadb4bbf752", 00:23:10.335 "is_configured": false, 00:23:10.335 "data_offset": 0, 00:23:10.335 "data_size": 65536 00:23:10.335 }, 00:23:10.335 { 00:23:10.335 "name": "BaseBdev2", 00:23:10.335 "uuid": 
"d2b2e2a7-f691-4c00-831d-4912a91a8203", 00:23:10.335 "is_configured": true, 00:23:10.335 "data_offset": 0, 00:23:10.335 "data_size": 65536 00:23:10.335 }, 00:23:10.335 { 00:23:10.335 "name": "BaseBdev3", 00:23:10.335 "uuid": "63513400-5589-4ab8-be1f-f0471fd3400f", 00:23:10.335 "is_configured": true, 00:23:10.335 "data_offset": 0, 00:23:10.335 "data_size": 65536 00:23:10.335 }, 00:23:10.335 { 00:23:10.335 "name": "BaseBdev4", 00:23:10.335 "uuid": "6c589270-c34b-409e-bdf8-c0c823833116", 00:23:10.335 "is_configured": true, 00:23:10.335 "data_offset": 0, 00:23:10.335 "data_size": 65536 00:23:10.335 } 00:23:10.335 ] 00:23:10.335 }' 00:23:10.335 02:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:10.335 02:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.272 02:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.272 02:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:11.272 02:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:23:11.272 02:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.272 02:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:11.531 02:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u b5bae9d2-ec8b-41e7-94af-ecadb4bbf752 00:23:11.791 [2024-07-11 02:29:02.059214] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:11.791 [2024-07-11 02:29:02.059256] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1e2d880 00:23:11.791 [2024-07-11 02:29:02.059265] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:23:11.791 [2024-07-11 02:29:02.059465] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e1ab50 00:23:11.791 [2024-07-11 02:29:02.059582] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1e2d880 00:23:11.791 [2024-07-11 02:29:02.059592] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1e2d880 00:23:11.791 [2024-07-11 02:29:02.059750] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:11.791 NewBaseBdev 00:23:11.791 02:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:23:11.791 02:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:23:11.791 02:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:11.791 02:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:11.791 02:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:11.791 02:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:11.791 02:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:12.051 02:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:12.313 [ 00:23:12.313 { 00:23:12.313 "name": "NewBaseBdev", 00:23:12.313 "aliases": [ 00:23:12.313 "b5bae9d2-ec8b-41e7-94af-ecadb4bbf752" 00:23:12.313 ], 00:23:12.313 "product_name": "Malloc disk", 00:23:12.313 "block_size": 512, 00:23:12.313 "num_blocks": 65536, 00:23:12.313 "uuid": "b5bae9d2-ec8b-41e7-94af-ecadb4bbf752", 00:23:12.313 "assigned_rate_limits": { 00:23:12.313 "rw_ios_per_sec": 0, 00:23:12.313 "rw_mbytes_per_sec": 0, 00:23:12.313 "r_mbytes_per_sec": 0, 00:23:12.313 "w_mbytes_per_sec": 0 00:23:12.313 }, 00:23:12.313 "claimed": true, 00:23:12.313 "claim_type": "exclusive_write", 00:23:12.313 "zoned": false, 00:23:12.313 "supported_io_types": { 00:23:12.313 "read": true, 00:23:12.313 "write": true, 00:23:12.313 "unmap": true, 00:23:12.313 "flush": true, 00:23:12.313 "reset": true, 00:23:12.313 "nvme_admin": false, 00:23:12.313 "nvme_io": false, 00:23:12.313 "nvme_io_md": false, 00:23:12.313 "write_zeroes": true, 00:23:12.313 "zcopy": true, 00:23:12.313 "get_zone_info": false, 00:23:12.313 "zone_management": false, 00:23:12.313 "zone_append": false, 00:23:12.313 "compare": false, 00:23:12.313 "compare_and_write": false, 00:23:12.313 "abort": true, 00:23:12.313 "seek_hole": false, 00:23:12.313 "seek_data": false, 00:23:12.313 "copy": true, 00:23:12.313 "nvme_iov_md": false 00:23:12.313 }, 00:23:12.313 "memory_domains": [ 00:23:12.313 { 00:23:12.313 "dma_device_id": "system", 00:23:12.313 "dma_device_type": 1 00:23:12.313 }, 00:23:12.313 { 00:23:12.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.313 "dma_device_type": 2 00:23:12.313 } 00:23:12.313 ], 00:23:12.313 "driver_specific": {} 00:23:12.313 } 00:23:12.313 ] 00:23:12.313 02:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:12.313 02:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:23:12.313 02:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:12.313 02:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:12.313 02:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:12.313 02:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:12.313 02:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:12.313 02:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:12.313 02:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:12.313 02:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:12.313 02:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:12.313 02:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:12.313 02:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.635 02:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:12.635 "name": "Existed_Raid", 00:23:12.635 "uuid": "6a468c95-8f77-491a-9954-5dd97a7cb64a", 00:23:12.635 "strip_size_kb": 64, 00:23:12.635 "state": "online", 00:23:12.635 "raid_level": "concat", 00:23:12.635 "superblock": false, 00:23:12.635 "num_base_bdevs": 4, 00:23:12.635 "num_base_bdevs_discovered": 4, 00:23:12.635 "num_base_bdevs_operational": 4, 00:23:12.635 "base_bdevs_list": [ 00:23:12.635 { 00:23:12.635 "name": "NewBaseBdev", 00:23:12.635 "uuid": "b5bae9d2-ec8b-41e7-94af-ecadb4bbf752", 00:23:12.635 "is_configured": true, 00:23:12.635 "data_offset": 0, 00:23:12.635 "data_size": 65536 00:23:12.635 }, 00:23:12.635 { 00:23:12.635 "name": "BaseBdev2", 00:23:12.635 "uuid": "d2b2e2a7-f691-4c00-831d-4912a91a8203", 00:23:12.635 "is_configured": true, 00:23:12.635 "data_offset": 0, 00:23:12.635 "data_size": 65536 00:23:12.635 }, 00:23:12.635 { 00:23:12.635 "name": "BaseBdev3", 00:23:12.635 "uuid": "63513400-5589-4ab8-be1f-f0471fd3400f", 00:23:12.635 "is_configured": true, 00:23:12.635 "data_offset": 0, 00:23:12.635 "data_size": 65536 00:23:12.635 }, 00:23:12.635 { 00:23:12.635 "name": "BaseBdev4", 00:23:12.635 "uuid": "6c589270-c34b-409e-bdf8-c0c823833116", 00:23:12.635 "is_configured": true, 00:23:12.635 "data_offset": 0, 00:23:12.635 "data_size": 65536 00:23:12.635 } 00:23:12.635 ] 00:23:12.635 }' 00:23:12.635 02:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:12.635 02:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:13.592 02:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:23:13.592 02:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:13.592 02:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:13.592 02:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:13.592 02:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:13.592 02:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:13.592 02:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:13.592 02:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:13.592 [2024-07-11 02:29:03.948536] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:13.592 02:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:13.592 "name": "Existed_Raid", 00:23:13.592 "aliases": [ 00:23:13.592 "6a468c95-8f77-491a-9954-5dd97a7cb64a" 00:23:13.592 ], 00:23:13.592 "product_name": "Raid Volume", 00:23:13.592 "block_size": 512, 00:23:13.592 "num_blocks": 262144, 00:23:13.592 "uuid": "6a468c95-8f77-491a-9954-5dd97a7cb64a", 00:23:13.592 "assigned_rate_limits": { 00:23:13.592 "rw_ios_per_sec": 0, 00:23:13.592 "rw_mbytes_per_sec": 0, 00:23:13.592 "r_mbytes_per_sec": 0, 00:23:13.592 "w_mbytes_per_sec": 0 00:23:13.592 }, 00:23:13.592 "claimed": false, 00:23:13.592 "zoned": false, 00:23:13.592 "supported_io_types": { 00:23:13.592 "read": true, 00:23:13.592 "write": true, 00:23:13.592 
"unmap": true, 00:23:13.592 "flush": true, 00:23:13.592 "reset": true, 00:23:13.592 "nvme_admin": false, 00:23:13.592 "nvme_io": false, 00:23:13.592 "nvme_io_md": false, 00:23:13.592 "write_zeroes": true, 00:23:13.592 "zcopy": false, 00:23:13.592 "get_zone_info": false, 00:23:13.592 "zone_management": false, 00:23:13.592 "zone_append": false, 00:23:13.592 "compare": false, 00:23:13.592 "compare_and_write": false, 00:23:13.592 "abort": false, 00:23:13.592 "seek_hole": false, 00:23:13.592 "seek_data": false, 00:23:13.592 "copy": false, 00:23:13.592 "nvme_iov_md": false 00:23:13.592 }, 00:23:13.592 "memory_domains": [ 00:23:13.592 { 00:23:13.592 "dma_device_id": "system", 00:23:13.592 "dma_device_type": 1 00:23:13.592 }, 00:23:13.592 { 00:23:13.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:13.592 "dma_device_type": 2 00:23:13.592 }, 00:23:13.592 { 00:23:13.592 "dma_device_id": "system", 00:23:13.592 "dma_device_type": 1 00:23:13.592 }, 00:23:13.592 { 00:23:13.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:13.592 "dma_device_type": 2 00:23:13.592 }, 00:23:13.592 { 00:23:13.592 "dma_device_id": "system", 00:23:13.592 "dma_device_type": 1 00:23:13.592 }, 00:23:13.592 { 00:23:13.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:13.592 "dma_device_type": 2 00:23:13.592 }, 00:23:13.592 { 00:23:13.592 "dma_device_id": "system", 00:23:13.592 "dma_device_type": 1 00:23:13.592 }, 00:23:13.592 { 00:23:13.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:13.592 "dma_device_type": 2 00:23:13.592 } 00:23:13.592 ], 00:23:13.592 "driver_specific": { 00:23:13.592 "raid": { 00:23:13.592 "uuid": "6a468c95-8f77-491a-9954-5dd97a7cb64a", 00:23:13.592 "strip_size_kb": 64, 00:23:13.592 "state": "online", 00:23:13.592 "raid_level": "concat", 00:23:13.592 "superblock": false, 00:23:13.592 "num_base_bdevs": 4, 00:23:13.592 "num_base_bdevs_discovered": 4, 00:23:13.592 "num_base_bdevs_operational": 4, 00:23:13.592 "base_bdevs_list": [ 00:23:13.592 { 00:23:13.592 "name": "NewBaseBdev", 00:23:13.592 "uuid": "b5bae9d2-ec8b-41e7-94af-ecadb4bbf752", 00:23:13.592 "is_configured": true, 00:23:13.592 "data_offset": 0, 00:23:13.592 "data_size": 65536 00:23:13.592 }, 00:23:13.592 { 00:23:13.592 "name": "BaseBdev2", 00:23:13.592 "uuid": "d2b2e2a7-f691-4c00-831d-4912a91a8203", 00:23:13.592 "is_configured": true, 00:23:13.592 "data_offset": 0, 00:23:13.592 "data_size": 65536 00:23:13.592 }, 00:23:13.592 { 00:23:13.592 "name": "BaseBdev3", 00:23:13.592 "uuid": "63513400-5589-4ab8-be1f-f0471fd3400f", 00:23:13.592 "is_configured": true, 00:23:13.592 "data_offset": 0, 00:23:13.593 "data_size": 65536 00:23:13.593 }, 00:23:13.593 { 00:23:13.593 "name": "BaseBdev4", 00:23:13.593 "uuid": "6c589270-c34b-409e-bdf8-c0c823833116", 00:23:13.593 "is_configured": true, 00:23:13.593 "data_offset": 0, 00:23:13.593 "data_size": 65536 00:23:13.593 } 00:23:13.593 ] 00:23:13.593 } 00:23:13.593 } 00:23:13.593 }' 00:23:13.593 02:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:13.851 02:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:23:13.851 BaseBdev2 00:23:13.851 BaseBdev3 00:23:13.851 BaseBdev4' 00:23:13.851 02:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:13.851 02:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:23:13.851 02:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:14.418 02:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:14.418 "name": "NewBaseBdev", 00:23:14.418 "aliases": [ 00:23:14.418 "b5bae9d2-ec8b-41e7-94af-ecadb4bbf752" 00:23:14.418 ], 00:23:14.418 "product_name": "Malloc disk", 00:23:14.418 "block_size": 512, 00:23:14.418 "num_blocks": 65536, 00:23:14.418 "uuid": "b5bae9d2-ec8b-41e7-94af-ecadb4bbf752", 00:23:14.418 "assigned_rate_limits": { 00:23:14.418 "rw_ios_per_sec": 0, 00:23:14.418 "rw_mbytes_per_sec": 0, 00:23:14.418 "r_mbytes_per_sec": 0, 00:23:14.418 "w_mbytes_per_sec": 0 00:23:14.418 }, 00:23:14.418 "claimed": true, 00:23:14.419 "claim_type": "exclusive_write", 00:23:14.419 "zoned": false, 00:23:14.419 "supported_io_types": { 00:23:14.419 "read": true, 00:23:14.419 "write": true, 00:23:14.419 "unmap": true, 00:23:14.419 "flush": true, 00:23:14.419 "reset": true, 00:23:14.419 "nvme_admin": false, 00:23:14.419 "nvme_io": false, 00:23:14.419 "nvme_io_md": false, 00:23:14.419 "write_zeroes": true, 00:23:14.419 "zcopy": true, 00:23:14.419 "get_zone_info": false, 00:23:14.419 "zone_management": false, 00:23:14.419 "zone_append": false, 00:23:14.419 "compare": false, 00:23:14.419 "compare_and_write": false, 00:23:14.419 "abort": true, 00:23:14.419 "seek_hole": false, 00:23:14.419 "seek_data": false, 00:23:14.419 "copy": true, 00:23:14.419 "nvme_iov_md": false 00:23:14.419 }, 00:23:14.419 "memory_domains": [ 00:23:14.419 { 00:23:14.419 "dma_device_id": "system", 00:23:14.419 "dma_device_type": 1 00:23:14.419 }, 00:23:14.419 { 00:23:14.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:14.419 "dma_device_type": 2 00:23:14.419 } 00:23:14.419 ], 00:23:14.419 "driver_specific": {} 00:23:14.419 }' 00:23:14.419 02:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:14.419 02:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:14.419 02:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:14.419 02:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:14.419 02:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:14.419 02:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:14.419 02:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:14.677 02:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:14.677 02:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:14.677 02:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:14.677 02:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:14.677 02:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:14.677 02:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:14.677 02:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:14.677 02:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:15.245 02:29:05 
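verify_raid_bdev_properties, running above and below, dumps the assembled Raid Volume once, extracts the configured member names, and then checks each member's block size and metadata layout against the volume (bdev_raid.sh@205-208: block_size, md_size, md_interleave, dif_type). A compact sketch of that per-member loop, using the member names reported in this run:

rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

raid_block=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[0].block_size')   # 512 here

for name in NewBaseBdev BaseBdev2 BaseBdev3 BaseBdev4; do
    base_block=$($rpc bdev_get_bdevs -b "$name" | jq '.[0].block_size')
    # Every member must match the volume, mirroring the [[ 512 == 512 ]] checks.
    [ "$base_block" = "$raid_block" ] || echo "$name: block_size mismatch"
done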
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:15.245 "name": "BaseBdev2", 00:23:15.245 "aliases": [ 00:23:15.245 "d2b2e2a7-f691-4c00-831d-4912a91a8203" 00:23:15.245 ], 00:23:15.245 "product_name": "Malloc disk", 00:23:15.245 "block_size": 512, 00:23:15.245 "num_blocks": 65536, 00:23:15.245 "uuid": "d2b2e2a7-f691-4c00-831d-4912a91a8203", 00:23:15.245 "assigned_rate_limits": { 00:23:15.245 "rw_ios_per_sec": 0, 00:23:15.245 "rw_mbytes_per_sec": 0, 00:23:15.245 "r_mbytes_per_sec": 0, 00:23:15.245 "w_mbytes_per_sec": 0 00:23:15.245 }, 00:23:15.245 "claimed": true, 00:23:15.245 "claim_type": "exclusive_write", 00:23:15.245 "zoned": false, 00:23:15.245 "supported_io_types": { 00:23:15.245 "read": true, 00:23:15.245 "write": true, 00:23:15.245 "unmap": true, 00:23:15.245 "flush": true, 00:23:15.245 "reset": true, 00:23:15.245 "nvme_admin": false, 00:23:15.245 "nvme_io": false, 00:23:15.245 "nvme_io_md": false, 00:23:15.245 "write_zeroes": true, 00:23:15.245 "zcopy": true, 00:23:15.245 "get_zone_info": false, 00:23:15.245 "zone_management": false, 00:23:15.245 "zone_append": false, 00:23:15.245 "compare": false, 00:23:15.245 "compare_and_write": false, 00:23:15.245 "abort": true, 00:23:15.245 "seek_hole": false, 00:23:15.245 "seek_data": false, 00:23:15.245 "copy": true, 00:23:15.245 "nvme_iov_md": false 00:23:15.245 }, 00:23:15.245 "memory_domains": [ 00:23:15.245 { 00:23:15.245 "dma_device_id": "system", 00:23:15.245 "dma_device_type": 1 00:23:15.245 }, 00:23:15.245 { 00:23:15.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.246 "dma_device_type": 2 00:23:15.246 } 00:23:15.246 ], 00:23:15.246 "driver_specific": {} 00:23:15.246 }' 00:23:15.246 02:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:15.246 02:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:15.504 02:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:15.504 02:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:15.504 02:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:15.504 02:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:15.504 02:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:15.504 02:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:15.504 02:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:15.504 02:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:15.762 02:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:15.762 02:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:15.762 02:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:15.762 02:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:15.762 02:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:16.330 02:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:16.330 "name": "BaseBdev3", 00:23:16.330 "aliases": [ 00:23:16.330 
"63513400-5589-4ab8-be1f-f0471fd3400f" 00:23:16.330 ], 00:23:16.330 "product_name": "Malloc disk", 00:23:16.330 "block_size": 512, 00:23:16.330 "num_blocks": 65536, 00:23:16.330 "uuid": "63513400-5589-4ab8-be1f-f0471fd3400f", 00:23:16.330 "assigned_rate_limits": { 00:23:16.330 "rw_ios_per_sec": 0, 00:23:16.330 "rw_mbytes_per_sec": 0, 00:23:16.330 "r_mbytes_per_sec": 0, 00:23:16.330 "w_mbytes_per_sec": 0 00:23:16.330 }, 00:23:16.330 "claimed": true, 00:23:16.330 "claim_type": "exclusive_write", 00:23:16.330 "zoned": false, 00:23:16.330 "supported_io_types": { 00:23:16.330 "read": true, 00:23:16.330 "write": true, 00:23:16.330 "unmap": true, 00:23:16.330 "flush": true, 00:23:16.330 "reset": true, 00:23:16.330 "nvme_admin": false, 00:23:16.330 "nvme_io": false, 00:23:16.330 "nvme_io_md": false, 00:23:16.330 "write_zeroes": true, 00:23:16.330 "zcopy": true, 00:23:16.330 "get_zone_info": false, 00:23:16.330 "zone_management": false, 00:23:16.330 "zone_append": false, 00:23:16.330 "compare": false, 00:23:16.330 "compare_and_write": false, 00:23:16.330 "abort": true, 00:23:16.330 "seek_hole": false, 00:23:16.330 "seek_data": false, 00:23:16.330 "copy": true, 00:23:16.330 "nvme_iov_md": false 00:23:16.330 }, 00:23:16.330 "memory_domains": [ 00:23:16.330 { 00:23:16.330 "dma_device_id": "system", 00:23:16.330 "dma_device_type": 1 00:23:16.330 }, 00:23:16.330 { 00:23:16.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.330 "dma_device_type": 2 00:23:16.330 } 00:23:16.330 ], 00:23:16.330 "driver_specific": {} 00:23:16.330 }' 00:23:16.330 02:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:16.330 02:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:16.330 02:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:16.330 02:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:16.331 02:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:16.589 02:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:16.589 02:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:16.589 02:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:16.589 02:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:16.589 02:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:16.848 02:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:16.848 02:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:16.848 02:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:16.848 02:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:16.848 02:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:17.106 02:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:17.106 "name": "BaseBdev4", 00:23:17.106 "aliases": [ 00:23:17.106 "6c589270-c34b-409e-bdf8-c0c823833116" 00:23:17.106 ], 00:23:17.106 "product_name": "Malloc disk", 00:23:17.106 "block_size": 512, 00:23:17.106 "num_blocks": 65536, 00:23:17.106 
"uuid": "6c589270-c34b-409e-bdf8-c0c823833116", 00:23:17.106 "assigned_rate_limits": { 00:23:17.106 "rw_ios_per_sec": 0, 00:23:17.106 "rw_mbytes_per_sec": 0, 00:23:17.106 "r_mbytes_per_sec": 0, 00:23:17.106 "w_mbytes_per_sec": 0 00:23:17.106 }, 00:23:17.106 "claimed": true, 00:23:17.106 "claim_type": "exclusive_write", 00:23:17.106 "zoned": false, 00:23:17.106 "supported_io_types": { 00:23:17.106 "read": true, 00:23:17.106 "write": true, 00:23:17.106 "unmap": true, 00:23:17.106 "flush": true, 00:23:17.106 "reset": true, 00:23:17.106 "nvme_admin": false, 00:23:17.106 "nvme_io": false, 00:23:17.106 "nvme_io_md": false, 00:23:17.106 "write_zeroes": true, 00:23:17.106 "zcopy": true, 00:23:17.106 "get_zone_info": false, 00:23:17.106 "zone_management": false, 00:23:17.106 "zone_append": false, 00:23:17.106 "compare": false, 00:23:17.106 "compare_and_write": false, 00:23:17.106 "abort": true, 00:23:17.106 "seek_hole": false, 00:23:17.106 "seek_data": false, 00:23:17.106 "copy": true, 00:23:17.106 "nvme_iov_md": false 00:23:17.106 }, 00:23:17.106 "memory_domains": [ 00:23:17.106 { 00:23:17.106 "dma_device_id": "system", 00:23:17.106 "dma_device_type": 1 00:23:17.106 }, 00:23:17.106 { 00:23:17.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:17.106 "dma_device_type": 2 00:23:17.106 } 00:23:17.106 ], 00:23:17.106 "driver_specific": {} 00:23:17.106 }' 00:23:17.106 02:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:17.106 02:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:17.364 02:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:17.364 02:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:17.364 02:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:17.364 02:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:17.364 02:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:17.364 02:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:17.364 02:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:17.364 02:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:17.623 02:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:17.623 02:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:17.623 02:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:17.881 [2024-07-11 02:29:08.099261] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:17.881 [2024-07-11 02:29:08.099286] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:17.881 [2024-07-11 02:29:08.099334] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:17.881 [2024-07-11 02:29:08.099392] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:17.881 [2024-07-11 02:29:08.099404] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e2d880 name Existed_Raid, state offline 00:23:17.881 02:29:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@341 -- # killprocess 1973218 00:23:17.881 02:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 1973218 ']' 00:23:17.881 02:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 1973218 00:23:17.881 02:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:23:17.881 02:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:17.881 02:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1973218 00:23:17.881 02:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:17.881 02:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:17.881 02:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1973218' 00:23:17.881 killing process with pid 1973218 00:23:17.881 02:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 1973218 00:23:17.881 [2024-07-11 02:29:08.171282] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:17.881 02:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 1973218 00:23:17.881 [2024-07-11 02:29:08.209024] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:18.140 02:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:23:18.140 00:23:18.140 real 0m34.447s 00:23:18.140 user 1m3.266s 00:23:18.141 sys 0m6.103s 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.141 ************************************ 00:23:18.141 END TEST raid_state_function_test 00:23:18.141 ************************************ 00:23:18.141 02:29:08 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:18.141 02:29:08 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:23:18.141 02:29:08 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:18.141 02:29:08 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:18.141 02:29:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:18.141 ************************************ 00:23:18.141 START TEST raid_state_function_test_sb 00:23:18.141 ************************************ 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 true 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:23:18.141 
02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1978387 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1978387' 00:23:18.141 Process raid pid: 1978387 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1978387 /var/tmp/spdk-raid.sock 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1978387 ']' 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:18.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:18.141 02:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.141 [2024-07-11 02:29:08.554457] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:23:18.141 [2024-07-11 02:29:08.554520] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.400 [2024-07-11 02:29:08.693851] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.400 [2024-07-11 02:29:08.743669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.400 [2024-07-11 02:29:08.801960] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:18.400 [2024-07-11 02:29:08.801994] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:19.335 02:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:19.335 02:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:23:19.335 02:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:19.335 [2024-07-11 02:29:09.699501] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:19.335 [2024-07-11 02:29:09.699543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:19.335 [2024-07-11 02:29:09.699554] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:19.335 [2024-07-11 02:29:09.699565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:19.335 [2024-07-11 02:29:09.699578] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:19.335 [2024-07-11 02:29:09.699589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:19.335 [2024-07-11 02:29:09.699598] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:19.335 [2024-07-11 02:29:09.699609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:19.335 02:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:19.335 02:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:19.335 02:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:19.335 02:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:19.335 02:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:19.335 02:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
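For orientation, the RPC traffic above reduces to a handful of rpc.py calls against the bdev_svc app. A minimal sketch of the equivalent sequence, assuming an app already listening on /var/tmp/spdk-raid.sock as in this run (scripts/rpc.py is relative to the SPDK tree); note the test deliberately creates the raid before its base bdevs exist, which is why Existed_Raid starts out in the "configuring" state:

    # create the concat array first; its four base bdevs don't exist yet,
    # so the raid bdev stays in the "configuring" state (-s = with superblock)
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # add 32 MiB malloc base bdevs with 512-byte blocks, one at a time
    for i in 1 2 3 4; do
        scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev$i
    done
    # once all four base bdevs are discovered, the array state flips to "online"
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | \
        jq -r '.[] | select(.name == "Existed_Raid").state'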
00:23:19.335 02:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:19.335 02:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:19.335 02:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:19.335 02:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:19.335 02:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.335 02:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:19.904 02:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:19.904 "name": "Existed_Raid", 00:23:19.904 "uuid": "842926c5-3db5-4df0-b0b3-0f4911d2e531", 00:23:19.904 "strip_size_kb": 64, 00:23:19.904 "state": "configuring", 00:23:19.904 "raid_level": "concat", 00:23:19.904 "superblock": true, 00:23:19.904 "num_base_bdevs": 4, 00:23:19.904 "num_base_bdevs_discovered": 0, 00:23:19.904 "num_base_bdevs_operational": 4, 00:23:19.904 "base_bdevs_list": [ 00:23:19.904 { 00:23:19.904 "name": "BaseBdev1", 00:23:19.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.904 "is_configured": false, 00:23:19.904 "data_offset": 0, 00:23:19.904 "data_size": 0 00:23:19.904 }, 00:23:19.904 { 00:23:19.904 "name": "BaseBdev2", 00:23:19.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.904 "is_configured": false, 00:23:19.904 "data_offset": 0, 00:23:19.904 "data_size": 0 00:23:19.904 }, 00:23:19.904 { 00:23:19.904 "name": "BaseBdev3", 00:23:19.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.904 "is_configured": false, 00:23:19.904 "data_offset": 0, 00:23:19.904 "data_size": 0 00:23:19.904 }, 00:23:19.904 { 00:23:19.904 "name": "BaseBdev4", 00:23:19.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.904 "is_configured": false, 00:23:19.904 "data_offset": 0, 00:23:19.904 "data_size": 0 00:23:19.904 } 00:23:19.904 ] 00:23:19.904 }' 00:23:19.904 02:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:19.904 02:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.841 02:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:21.099 [2024-07-11 02:29:11.391797] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:21.100 [2024-07-11 02:29:11.391827] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x140d5a0 name Existed_Raid, state configuring 00:23:21.100 02:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:21.359 [2024-07-11 02:29:11.584331] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:21.359 [2024-07-11 02:29:11.584360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:21.359 [2024-07-11 02:29:11.584369] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev2 00:23:21.359 [2024-07-11 02:29:11.584385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:21.359 [2024-07-11 02:29:11.584393] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:21.359 [2024-07-11 02:29:11.584404] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:21.359 [2024-07-11 02:29:11.584413] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:21.359 [2024-07-11 02:29:11.584424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:21.359 02:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:21.618 [2024-07-11 02:29:11.842694] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:21.618 BaseBdev1 00:23:21.618 02:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:23:21.618 02:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:21.618 02:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:21.618 02:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:21.618 02:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:21.618 02:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:21.618 02:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:21.618 02:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:21.877 [ 00:23:21.877 { 00:23:21.877 "name": "BaseBdev1", 00:23:21.877 "aliases": [ 00:23:21.877 "f566d7f9-7474-4262-b21f-11b6577bcfdd" 00:23:21.877 ], 00:23:21.877 "product_name": "Malloc disk", 00:23:21.877 "block_size": 512, 00:23:21.877 "num_blocks": 65536, 00:23:21.877 "uuid": "f566d7f9-7474-4262-b21f-11b6577bcfdd", 00:23:21.877 "assigned_rate_limits": { 00:23:21.877 "rw_ios_per_sec": 0, 00:23:21.877 "rw_mbytes_per_sec": 0, 00:23:21.877 "r_mbytes_per_sec": 0, 00:23:21.877 "w_mbytes_per_sec": 0 00:23:21.877 }, 00:23:21.877 "claimed": true, 00:23:21.877 "claim_type": "exclusive_write", 00:23:21.877 "zoned": false, 00:23:21.877 "supported_io_types": { 00:23:21.877 "read": true, 00:23:21.877 "write": true, 00:23:21.877 "unmap": true, 00:23:21.877 "flush": true, 00:23:21.877 "reset": true, 00:23:21.877 "nvme_admin": false, 00:23:21.877 "nvme_io": false, 00:23:21.877 "nvme_io_md": false, 00:23:21.877 "write_zeroes": true, 00:23:21.877 "zcopy": true, 00:23:21.877 "get_zone_info": false, 00:23:21.877 "zone_management": false, 00:23:21.877 "zone_append": false, 00:23:21.877 "compare": false, 00:23:21.877 "compare_and_write": false, 00:23:21.877 "abort": true, 00:23:21.877 "seek_hole": false, 00:23:21.877 "seek_data": false, 00:23:21.877 "copy": true, 00:23:21.877 "nvme_iov_md": false 00:23:21.877 }, 00:23:21.877 "memory_domains": [ 00:23:21.877 { 00:23:21.877 
"dma_device_id": "system", 00:23:21.877 "dma_device_type": 1 00:23:21.877 }, 00:23:21.877 { 00:23:21.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:21.877 "dma_device_type": 2 00:23:21.877 } 00:23:21.877 ], 00:23:21.877 "driver_specific": {} 00:23:21.877 } 00:23:21.877 ] 00:23:21.877 02:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:21.877 02:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:21.877 02:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:21.877 02:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:21.877 02:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:21.878 02:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:21.878 02:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:21.878 02:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:21.878 02:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:21.878 02:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:21.878 02:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:21.878 02:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.878 02:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:22.137 02:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:22.137 "name": "Existed_Raid", 00:23:22.137 "uuid": "725c5e4a-4b53-4198-b774-a10681b83b4e", 00:23:22.137 "strip_size_kb": 64, 00:23:22.137 "state": "configuring", 00:23:22.137 "raid_level": "concat", 00:23:22.137 "superblock": true, 00:23:22.137 "num_base_bdevs": 4, 00:23:22.137 "num_base_bdevs_discovered": 1, 00:23:22.137 "num_base_bdevs_operational": 4, 00:23:22.137 "base_bdevs_list": [ 00:23:22.137 { 00:23:22.137 "name": "BaseBdev1", 00:23:22.137 "uuid": "f566d7f9-7474-4262-b21f-11b6577bcfdd", 00:23:22.137 "is_configured": true, 00:23:22.137 "data_offset": 2048, 00:23:22.137 "data_size": 63488 00:23:22.137 }, 00:23:22.137 { 00:23:22.137 "name": "BaseBdev2", 00:23:22.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.137 "is_configured": false, 00:23:22.137 "data_offset": 0, 00:23:22.137 "data_size": 0 00:23:22.137 }, 00:23:22.137 { 00:23:22.137 "name": "BaseBdev3", 00:23:22.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.137 "is_configured": false, 00:23:22.137 "data_offset": 0, 00:23:22.137 "data_size": 0 00:23:22.137 }, 00:23:22.137 { 00:23:22.137 "name": "BaseBdev4", 00:23:22.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.137 "is_configured": false, 00:23:22.137 "data_offset": 0, 00:23:22.137 "data_size": 0 00:23:22.137 } 00:23:22.137 ] 00:23:22.137 }' 00:23:22.137 02:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:22.137 02:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:23:22.704 02:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:22.963 [2024-07-11 02:29:13.146143] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:22.963 [2024-07-11 02:29:13.146181] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x140ced0 name Existed_Raid, state configuring 00:23:22.963 02:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:22.963 [2024-07-11 02:29:13.326672] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:22.963 [2024-07-11 02:29:13.328077] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:22.963 [2024-07-11 02:29:13.328108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:22.963 [2024-07-11 02:29:13.328118] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:22.963 [2024-07-11 02:29:13.328130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:22.963 [2024-07-11 02:29:13.328139] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:22.963 [2024-07-11 02:29:13.328150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:22.963 02:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:23:22.963 02:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:22.963 02:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:22.963 02:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:22.963 02:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:22.963 02:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:22.963 02:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:22.963 02:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:22.963 02:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:22.963 02:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:22.963 02:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:22.963 02:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:22.963 02:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.963 02:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:23.222 02:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:23.222 "name": 
"Existed_Raid", 00:23:23.222 "uuid": "34b316ae-5645-4b55-bc9a-905c277806c9", 00:23:23.222 "strip_size_kb": 64, 00:23:23.222 "state": "configuring", 00:23:23.222 "raid_level": "concat", 00:23:23.222 "superblock": true, 00:23:23.222 "num_base_bdevs": 4, 00:23:23.222 "num_base_bdevs_discovered": 1, 00:23:23.222 "num_base_bdevs_operational": 4, 00:23:23.222 "base_bdevs_list": [ 00:23:23.222 { 00:23:23.222 "name": "BaseBdev1", 00:23:23.222 "uuid": "f566d7f9-7474-4262-b21f-11b6577bcfdd", 00:23:23.222 "is_configured": true, 00:23:23.222 "data_offset": 2048, 00:23:23.222 "data_size": 63488 00:23:23.222 }, 00:23:23.222 { 00:23:23.222 "name": "BaseBdev2", 00:23:23.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.222 "is_configured": false, 00:23:23.222 "data_offset": 0, 00:23:23.222 "data_size": 0 00:23:23.222 }, 00:23:23.222 { 00:23:23.222 "name": "BaseBdev3", 00:23:23.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.222 "is_configured": false, 00:23:23.222 "data_offset": 0, 00:23:23.222 "data_size": 0 00:23:23.222 }, 00:23:23.222 { 00:23:23.222 "name": "BaseBdev4", 00:23:23.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.222 "is_configured": false, 00:23:23.222 "data_offset": 0, 00:23:23.223 "data_size": 0 00:23:23.223 } 00:23:23.223 ] 00:23:23.223 }' 00:23:23.223 02:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:23.223 02:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.789 02:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:24.048 [2024-07-11 02:29:14.216547] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:24.048 BaseBdev2 00:23:24.048 02:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:23:24.048 02:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:23:24.048 02:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:24.048 02:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:24.048 02:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:24.048 02:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:24.048 02:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:24.048 02:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:24.307 [ 00:23:24.307 { 00:23:24.307 "name": "BaseBdev2", 00:23:24.307 "aliases": [ 00:23:24.307 "b4dbadf7-577b-4559-b798-89176053a18f" 00:23:24.307 ], 00:23:24.307 "product_name": "Malloc disk", 00:23:24.307 "block_size": 512, 00:23:24.307 "num_blocks": 65536, 00:23:24.307 "uuid": "b4dbadf7-577b-4559-b798-89176053a18f", 00:23:24.307 "assigned_rate_limits": { 00:23:24.307 "rw_ios_per_sec": 0, 00:23:24.307 "rw_mbytes_per_sec": 0, 00:23:24.307 "r_mbytes_per_sec": 0, 00:23:24.307 "w_mbytes_per_sec": 0 00:23:24.307 }, 00:23:24.307 "claimed": true, 
00:23:24.307 "claim_type": "exclusive_write", 00:23:24.307 "zoned": false, 00:23:24.307 "supported_io_types": { 00:23:24.307 "read": true, 00:23:24.307 "write": true, 00:23:24.307 "unmap": true, 00:23:24.307 "flush": true, 00:23:24.307 "reset": true, 00:23:24.307 "nvme_admin": false, 00:23:24.307 "nvme_io": false, 00:23:24.307 "nvme_io_md": false, 00:23:24.307 "write_zeroes": true, 00:23:24.307 "zcopy": true, 00:23:24.307 "get_zone_info": false, 00:23:24.307 "zone_management": false, 00:23:24.307 "zone_append": false, 00:23:24.307 "compare": false, 00:23:24.307 "compare_and_write": false, 00:23:24.307 "abort": true, 00:23:24.307 "seek_hole": false, 00:23:24.307 "seek_data": false, 00:23:24.307 "copy": true, 00:23:24.307 "nvme_iov_md": false 00:23:24.307 }, 00:23:24.307 "memory_domains": [ 00:23:24.307 { 00:23:24.307 "dma_device_id": "system", 00:23:24.307 "dma_device_type": 1 00:23:24.307 }, 00:23:24.307 { 00:23:24.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:24.307 "dma_device_type": 2 00:23:24.307 } 00:23:24.307 ], 00:23:24.307 "driver_specific": {} 00:23:24.307 } 00:23:24.307 ] 00:23:24.307 02:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:24.307 02:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:24.307 02:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:24.307 02:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:24.307 02:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:24.307 02:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:24.307 02:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:24.307 02:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:24.307 02:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:24.307 02:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:24.307 02:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:24.307 02:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:24.307 02:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:24.307 02:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.307 02:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:24.566 02:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:24.566 "name": "Existed_Raid", 00:23:24.566 "uuid": "34b316ae-5645-4b55-bc9a-905c277806c9", 00:23:24.566 "strip_size_kb": 64, 00:23:24.566 "state": "configuring", 00:23:24.566 "raid_level": "concat", 00:23:24.566 "superblock": true, 00:23:24.566 "num_base_bdevs": 4, 00:23:24.566 "num_base_bdevs_discovered": 2, 00:23:24.566 "num_base_bdevs_operational": 4, 00:23:24.566 "base_bdevs_list": [ 00:23:24.566 { 00:23:24.566 "name": "BaseBdev1", 00:23:24.566 "uuid": 
"f566d7f9-7474-4262-b21f-11b6577bcfdd", 00:23:24.566 "is_configured": true, 00:23:24.566 "data_offset": 2048, 00:23:24.566 "data_size": 63488 00:23:24.566 }, 00:23:24.566 { 00:23:24.566 "name": "BaseBdev2", 00:23:24.566 "uuid": "b4dbadf7-577b-4559-b798-89176053a18f", 00:23:24.566 "is_configured": true, 00:23:24.566 "data_offset": 2048, 00:23:24.566 "data_size": 63488 00:23:24.566 }, 00:23:24.566 { 00:23:24.566 "name": "BaseBdev3", 00:23:24.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.566 "is_configured": false, 00:23:24.566 "data_offset": 0, 00:23:24.566 "data_size": 0 00:23:24.566 }, 00:23:24.567 { 00:23:24.567 "name": "BaseBdev4", 00:23:24.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.567 "is_configured": false, 00:23:24.567 "data_offset": 0, 00:23:24.567 "data_size": 0 00:23:24.567 } 00:23:24.567 ] 00:23:24.567 }' 00:23:24.567 02:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:24.567 02:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.134 02:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:25.134 [2024-07-11 02:29:15.551602] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:25.134 BaseBdev3 00:23:25.393 02:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:23:25.393 02:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:23:25.393 02:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:25.393 02:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:25.393 02:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:25.393 02:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:25.393 02:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:25.652 02:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:25.652 [ 00:23:25.652 { 00:23:25.652 "name": "BaseBdev3", 00:23:25.652 "aliases": [ 00:23:25.652 "1b79334e-2efd-4ea3-85f5-71d2a00f0468" 00:23:25.652 ], 00:23:25.652 "product_name": "Malloc disk", 00:23:25.652 "block_size": 512, 00:23:25.652 "num_blocks": 65536, 00:23:25.652 "uuid": "1b79334e-2efd-4ea3-85f5-71d2a00f0468", 00:23:25.652 "assigned_rate_limits": { 00:23:25.652 "rw_ios_per_sec": 0, 00:23:25.652 "rw_mbytes_per_sec": 0, 00:23:25.652 "r_mbytes_per_sec": 0, 00:23:25.652 "w_mbytes_per_sec": 0 00:23:25.652 }, 00:23:25.652 "claimed": true, 00:23:25.652 "claim_type": "exclusive_write", 00:23:25.652 "zoned": false, 00:23:25.652 "supported_io_types": { 00:23:25.652 "read": true, 00:23:25.652 "write": true, 00:23:25.652 "unmap": true, 00:23:25.652 "flush": true, 00:23:25.652 "reset": true, 00:23:25.652 "nvme_admin": false, 00:23:25.652 "nvme_io": false, 00:23:25.652 "nvme_io_md": false, 00:23:25.652 "write_zeroes": true, 00:23:25.652 "zcopy": true, 00:23:25.652 "get_zone_info": 
false, 00:23:25.652 "zone_management": false, 00:23:25.652 "zone_append": false, 00:23:25.652 "compare": false, 00:23:25.652 "compare_and_write": false, 00:23:25.652 "abort": true, 00:23:25.652 "seek_hole": false, 00:23:25.652 "seek_data": false, 00:23:25.652 "copy": true, 00:23:25.652 "nvme_iov_md": false 00:23:25.652 }, 00:23:25.652 "memory_domains": [ 00:23:25.652 { 00:23:25.652 "dma_device_id": "system", 00:23:25.652 "dma_device_type": 1 00:23:25.652 }, 00:23:25.652 { 00:23:25.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:25.652 "dma_device_type": 2 00:23:25.652 } 00:23:25.652 ], 00:23:25.652 "driver_specific": {} 00:23:25.652 } 00:23:25.652 ] 00:23:25.652 02:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:25.652 02:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:25.652 02:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:25.652 02:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:25.652 02:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:25.652 02:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:25.652 02:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:25.652 02:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:25.652 02:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:25.652 02:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:25.652 02:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:25.652 02:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:25.652 02:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:25.652 02:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.911 02:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:25.911 02:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:25.911 "name": "Existed_Raid", 00:23:25.911 "uuid": "34b316ae-5645-4b55-bc9a-905c277806c9", 00:23:25.911 "strip_size_kb": 64, 00:23:25.911 "state": "configuring", 00:23:25.911 "raid_level": "concat", 00:23:25.911 "superblock": true, 00:23:25.911 "num_base_bdevs": 4, 00:23:25.911 "num_base_bdevs_discovered": 3, 00:23:25.911 "num_base_bdevs_operational": 4, 00:23:25.911 "base_bdevs_list": [ 00:23:25.911 { 00:23:25.911 "name": "BaseBdev1", 00:23:25.911 "uuid": "f566d7f9-7474-4262-b21f-11b6577bcfdd", 00:23:25.911 "is_configured": true, 00:23:25.911 "data_offset": 2048, 00:23:25.911 "data_size": 63488 00:23:25.911 }, 00:23:25.911 { 00:23:25.911 "name": "BaseBdev2", 00:23:25.911 "uuid": "b4dbadf7-577b-4559-b798-89176053a18f", 00:23:25.911 "is_configured": true, 00:23:25.911 "data_offset": 2048, 00:23:25.911 "data_size": 63488 00:23:25.911 }, 00:23:25.911 { 00:23:25.911 "name": "BaseBdev3", 00:23:25.911 "uuid": 
"1b79334e-2efd-4ea3-85f5-71d2a00f0468", 00:23:25.911 "is_configured": true, 00:23:25.911 "data_offset": 2048, 00:23:25.911 "data_size": 63488 00:23:25.911 }, 00:23:25.911 { 00:23:25.911 "name": "BaseBdev4", 00:23:25.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.911 "is_configured": false, 00:23:25.911 "data_offset": 0, 00:23:25.911 "data_size": 0 00:23:25.911 } 00:23:25.911 ] 00:23:25.911 }' 00:23:25.911 02:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:25.911 02:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.479 02:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:26.738 [2024-07-11 02:29:16.950682] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:26.738 [2024-07-11 02:29:16.950859] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x15bfd70 00:23:26.738 [2024-07-11 02:29:16.950873] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:26.738 [2024-07-11 02:29:16.951050] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14121f0 00:23:26.738 [2024-07-11 02:29:16.951176] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15bfd70 00:23:26.738 [2024-07-11 02:29:16.951186] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x15bfd70 00:23:26.738 [2024-07-11 02:29:16.951275] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:26.738 BaseBdev4 00:23:26.738 02:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:23:26.738 02:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:23:26.738 02:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:26.738 02:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:26.738 02:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:26.738 02:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:26.738 02:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:26.997 02:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:27.257 [ 00:23:27.257 { 00:23:27.257 "name": "BaseBdev4", 00:23:27.257 "aliases": [ 00:23:27.257 "417ec62f-a50a-4d3c-9da7-a3d14d0aca8b" 00:23:27.257 ], 00:23:27.257 "product_name": "Malloc disk", 00:23:27.257 "block_size": 512, 00:23:27.257 "num_blocks": 65536, 00:23:27.257 "uuid": "417ec62f-a50a-4d3c-9da7-a3d14d0aca8b", 00:23:27.257 "assigned_rate_limits": { 00:23:27.257 "rw_ios_per_sec": 0, 00:23:27.257 "rw_mbytes_per_sec": 0, 00:23:27.257 "r_mbytes_per_sec": 0, 00:23:27.257 "w_mbytes_per_sec": 0 00:23:27.257 }, 00:23:27.257 "claimed": true, 00:23:27.257 "claim_type": "exclusive_write", 00:23:27.257 "zoned": false, 00:23:27.257 "supported_io_types": { 00:23:27.257 "read": 
true, 00:23:27.257 "write": true, 00:23:27.257 "unmap": true, 00:23:27.257 "flush": true, 00:23:27.257 "reset": true, 00:23:27.257 "nvme_admin": false, 00:23:27.257 "nvme_io": false, 00:23:27.257 "nvme_io_md": false, 00:23:27.257 "write_zeroes": true, 00:23:27.257 "zcopy": true, 00:23:27.257 "get_zone_info": false, 00:23:27.257 "zone_management": false, 00:23:27.257 "zone_append": false, 00:23:27.257 "compare": false, 00:23:27.257 "compare_and_write": false, 00:23:27.257 "abort": true, 00:23:27.257 "seek_hole": false, 00:23:27.257 "seek_data": false, 00:23:27.257 "copy": true, 00:23:27.257 "nvme_iov_md": false 00:23:27.257 }, 00:23:27.257 "memory_domains": [ 00:23:27.257 { 00:23:27.257 "dma_device_id": "system", 00:23:27.257 "dma_device_type": 1 00:23:27.257 }, 00:23:27.257 { 00:23:27.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:27.257 "dma_device_type": 2 00:23:27.257 } 00:23:27.257 ], 00:23:27.257 "driver_specific": {} 00:23:27.257 } 00:23:27.257 ] 00:23:27.257 02:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:27.257 02:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:27.257 02:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:27.257 02:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:23:27.257 02:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:27.257 02:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:27.257 02:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:27.257 02:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:27.257 02:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:27.257 02:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:27.257 02:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:27.257 02:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:27.257 02:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:27.257 02:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.257 02:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:27.257 02:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:27.257 "name": "Existed_Raid", 00:23:27.257 "uuid": "34b316ae-5645-4b55-bc9a-905c277806c9", 00:23:27.257 "strip_size_kb": 64, 00:23:27.257 "state": "online", 00:23:27.257 "raid_level": "concat", 00:23:27.257 "superblock": true, 00:23:27.257 "num_base_bdevs": 4, 00:23:27.257 "num_base_bdevs_discovered": 4, 00:23:27.257 "num_base_bdevs_operational": 4, 00:23:27.257 "base_bdevs_list": [ 00:23:27.257 { 00:23:27.257 "name": "BaseBdev1", 00:23:27.257 "uuid": "f566d7f9-7474-4262-b21f-11b6577bcfdd", 00:23:27.257 "is_configured": true, 00:23:27.257 "data_offset": 2048, 00:23:27.257 "data_size": 63488 00:23:27.257 }, 
00:23:27.257 { 00:23:27.257 "name": "BaseBdev2", 00:23:27.257 "uuid": "b4dbadf7-577b-4559-b798-89176053a18f", 00:23:27.258 "is_configured": true, 00:23:27.258 "data_offset": 2048, 00:23:27.258 "data_size": 63488 00:23:27.258 }, 00:23:27.258 { 00:23:27.258 "name": "BaseBdev3", 00:23:27.258 "uuid": "1b79334e-2efd-4ea3-85f5-71d2a00f0468", 00:23:27.258 "is_configured": true, 00:23:27.258 "data_offset": 2048, 00:23:27.258 "data_size": 63488 00:23:27.258 }, 00:23:27.258 { 00:23:27.258 "name": "BaseBdev4", 00:23:27.258 "uuid": "417ec62f-a50a-4d3c-9da7-a3d14d0aca8b", 00:23:27.258 "is_configured": true, 00:23:27.258 "data_offset": 2048, 00:23:27.258 "data_size": 63488 00:23:27.258 } 00:23:27.258 ] 00:23:27.258 }' 00:23:27.258 02:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:27.258 02:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.194 02:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:23:28.194 02:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:28.194 02:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:28.194 02:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:28.194 02:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:28.194 02:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:23:28.194 02:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:28.194 02:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:28.453 [2024-07-11 02:29:18.860086] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:28.712 02:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:28.712 "name": "Existed_Raid", 00:23:28.712 "aliases": [ 00:23:28.712 "34b316ae-5645-4b55-bc9a-905c277806c9" 00:23:28.712 ], 00:23:28.712 "product_name": "Raid Volume", 00:23:28.712 "block_size": 512, 00:23:28.712 "num_blocks": 253952, 00:23:28.712 "uuid": "34b316ae-5645-4b55-bc9a-905c277806c9", 00:23:28.712 "assigned_rate_limits": { 00:23:28.712 "rw_ios_per_sec": 0, 00:23:28.712 "rw_mbytes_per_sec": 0, 00:23:28.712 "r_mbytes_per_sec": 0, 00:23:28.712 "w_mbytes_per_sec": 0 00:23:28.712 }, 00:23:28.712 "claimed": false, 00:23:28.712 "zoned": false, 00:23:28.712 "supported_io_types": { 00:23:28.712 "read": true, 00:23:28.712 "write": true, 00:23:28.712 "unmap": true, 00:23:28.712 "flush": true, 00:23:28.712 "reset": true, 00:23:28.712 "nvme_admin": false, 00:23:28.712 "nvme_io": false, 00:23:28.712 "nvme_io_md": false, 00:23:28.712 "write_zeroes": true, 00:23:28.712 "zcopy": false, 00:23:28.712 "get_zone_info": false, 00:23:28.712 "zone_management": false, 00:23:28.712 "zone_append": false, 00:23:28.712 "compare": false, 00:23:28.712 "compare_and_write": false, 00:23:28.712 "abort": false, 00:23:28.712 "seek_hole": false, 00:23:28.712 "seek_data": false, 00:23:28.712 "copy": false, 00:23:28.712 "nvme_iov_md": false 00:23:28.712 }, 00:23:28.712 "memory_domains": [ 00:23:28.712 { 00:23:28.712 "dma_device_id": "system", 00:23:28.712 "dma_device_type": 1 00:23:28.712 }, 
00:23:28.712 { 00:23:28.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.712 "dma_device_type": 2 00:23:28.712 }, 00:23:28.712 { 00:23:28.712 "dma_device_id": "system", 00:23:28.712 "dma_device_type": 1 00:23:28.712 }, 00:23:28.712 { 00:23:28.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.712 "dma_device_type": 2 00:23:28.712 }, 00:23:28.712 { 00:23:28.712 "dma_device_id": "system", 00:23:28.712 "dma_device_type": 1 00:23:28.712 }, 00:23:28.712 { 00:23:28.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.712 "dma_device_type": 2 00:23:28.712 }, 00:23:28.712 { 00:23:28.712 "dma_device_id": "system", 00:23:28.712 "dma_device_type": 1 00:23:28.712 }, 00:23:28.712 { 00:23:28.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.712 "dma_device_type": 2 00:23:28.712 } 00:23:28.712 ], 00:23:28.712 "driver_specific": { 00:23:28.712 "raid": { 00:23:28.712 "uuid": "34b316ae-5645-4b55-bc9a-905c277806c9", 00:23:28.712 "strip_size_kb": 64, 00:23:28.712 "state": "online", 00:23:28.712 "raid_level": "concat", 00:23:28.712 "superblock": true, 00:23:28.712 "num_base_bdevs": 4, 00:23:28.712 "num_base_bdevs_discovered": 4, 00:23:28.712 "num_base_bdevs_operational": 4, 00:23:28.712 "base_bdevs_list": [ 00:23:28.712 { 00:23:28.712 "name": "BaseBdev1", 00:23:28.712 "uuid": "f566d7f9-7474-4262-b21f-11b6577bcfdd", 00:23:28.712 "is_configured": true, 00:23:28.712 "data_offset": 2048, 00:23:28.712 "data_size": 63488 00:23:28.712 }, 00:23:28.712 { 00:23:28.712 "name": "BaseBdev2", 00:23:28.712 "uuid": "b4dbadf7-577b-4559-b798-89176053a18f", 00:23:28.712 "is_configured": true, 00:23:28.712 "data_offset": 2048, 00:23:28.712 "data_size": 63488 00:23:28.712 }, 00:23:28.712 { 00:23:28.712 "name": "BaseBdev3", 00:23:28.712 "uuid": "1b79334e-2efd-4ea3-85f5-71d2a00f0468", 00:23:28.712 "is_configured": true, 00:23:28.712 "data_offset": 2048, 00:23:28.712 "data_size": 63488 00:23:28.712 }, 00:23:28.712 { 00:23:28.712 "name": "BaseBdev4", 00:23:28.712 "uuid": "417ec62f-a50a-4d3c-9da7-a3d14d0aca8b", 00:23:28.712 "is_configured": true, 00:23:28.712 "data_offset": 2048, 00:23:28.712 "data_size": 63488 00:23:28.712 } 00:23:28.712 ] 00:23:28.712 } 00:23:28.712 } 00:23:28.712 }' 00:23:28.712 02:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:28.712 02:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:23:28.712 BaseBdev2 00:23:28.712 BaseBdev3 00:23:28.712 BaseBdev4' 00:23:28.712 02:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:28.713 02:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:28.713 02:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:28.971 02:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:28.971 "name": "BaseBdev1", 00:23:28.971 "aliases": [ 00:23:28.971 "f566d7f9-7474-4262-b21f-11b6577bcfdd" 00:23:28.971 ], 00:23:28.971 "product_name": "Malloc disk", 00:23:28.971 "block_size": 512, 00:23:28.971 "num_blocks": 65536, 00:23:28.972 "uuid": "f566d7f9-7474-4262-b21f-11b6577bcfdd", 00:23:28.972 "assigned_rate_limits": { 00:23:28.972 "rw_ios_per_sec": 0, 00:23:28.972 "rw_mbytes_per_sec": 0, 00:23:28.972 "r_mbytes_per_sec": 0, 00:23:28.972 
"w_mbytes_per_sec": 0 00:23:28.972 }, 00:23:28.972 "claimed": true, 00:23:28.972 "claim_type": "exclusive_write", 00:23:28.972 "zoned": false, 00:23:28.972 "supported_io_types": { 00:23:28.972 "read": true, 00:23:28.972 "write": true, 00:23:28.972 "unmap": true, 00:23:28.972 "flush": true, 00:23:28.972 "reset": true, 00:23:28.972 "nvme_admin": false, 00:23:28.972 "nvme_io": false, 00:23:28.972 "nvme_io_md": false, 00:23:28.972 "write_zeroes": true, 00:23:28.972 "zcopy": true, 00:23:28.972 "get_zone_info": false, 00:23:28.972 "zone_management": false, 00:23:28.972 "zone_append": false, 00:23:28.972 "compare": false, 00:23:28.972 "compare_and_write": false, 00:23:28.972 "abort": true, 00:23:28.972 "seek_hole": false, 00:23:28.972 "seek_data": false, 00:23:28.972 "copy": true, 00:23:28.972 "nvme_iov_md": false 00:23:28.972 }, 00:23:28.972 "memory_domains": [ 00:23:28.972 { 00:23:28.972 "dma_device_id": "system", 00:23:28.972 "dma_device_type": 1 00:23:28.972 }, 00:23:28.972 { 00:23:28.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.972 "dma_device_type": 2 00:23:28.972 } 00:23:28.972 ], 00:23:28.972 "driver_specific": {} 00:23:28.972 }' 00:23:28.972 02:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:28.972 02:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:28.972 02:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:28.972 02:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:28.972 02:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:28.972 02:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:28.972 02:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:29.230 02:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:29.230 02:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:29.230 02:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:29.230 02:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:29.230 02:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:29.230 02:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:29.230 02:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:29.230 02:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:29.488 02:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:29.488 "name": "BaseBdev2", 00:23:29.488 "aliases": [ 00:23:29.488 "b4dbadf7-577b-4559-b798-89176053a18f" 00:23:29.488 ], 00:23:29.488 "product_name": "Malloc disk", 00:23:29.488 "block_size": 512, 00:23:29.488 "num_blocks": 65536, 00:23:29.488 "uuid": "b4dbadf7-577b-4559-b798-89176053a18f", 00:23:29.488 "assigned_rate_limits": { 00:23:29.488 "rw_ios_per_sec": 0, 00:23:29.488 "rw_mbytes_per_sec": 0, 00:23:29.488 "r_mbytes_per_sec": 0, 00:23:29.488 "w_mbytes_per_sec": 0 00:23:29.488 }, 00:23:29.488 "claimed": true, 00:23:29.488 "claim_type": "exclusive_write", 00:23:29.488 
"zoned": false, 00:23:29.488 "supported_io_types": { 00:23:29.488 "read": true, 00:23:29.488 "write": true, 00:23:29.488 "unmap": true, 00:23:29.488 "flush": true, 00:23:29.488 "reset": true, 00:23:29.488 "nvme_admin": false, 00:23:29.488 "nvme_io": false, 00:23:29.488 "nvme_io_md": false, 00:23:29.488 "write_zeroes": true, 00:23:29.488 "zcopy": true, 00:23:29.488 "get_zone_info": false, 00:23:29.488 "zone_management": false, 00:23:29.488 "zone_append": false, 00:23:29.488 "compare": false, 00:23:29.488 "compare_and_write": false, 00:23:29.488 "abort": true, 00:23:29.488 "seek_hole": false, 00:23:29.488 "seek_data": false, 00:23:29.488 "copy": true, 00:23:29.488 "nvme_iov_md": false 00:23:29.488 }, 00:23:29.488 "memory_domains": [ 00:23:29.488 { 00:23:29.488 "dma_device_id": "system", 00:23:29.488 "dma_device_type": 1 00:23:29.488 }, 00:23:29.488 { 00:23:29.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:29.488 "dma_device_type": 2 00:23:29.488 } 00:23:29.488 ], 00:23:29.488 "driver_specific": {} 00:23:29.488 }' 00:23:29.488 02:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:29.488 02:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:29.488 02:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:29.488 02:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:29.746 02:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:29.746 02:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:29.746 02:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:29.746 02:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:29.746 02:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:29.746 02:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:29.746 02:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:29.746 02:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:29.746 02:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:29.746 02:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:29.746 02:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:30.003 02:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:30.003 "name": "BaseBdev3", 00:23:30.003 "aliases": [ 00:23:30.003 "1b79334e-2efd-4ea3-85f5-71d2a00f0468" 00:23:30.003 ], 00:23:30.003 "product_name": "Malloc disk", 00:23:30.003 "block_size": 512, 00:23:30.003 "num_blocks": 65536, 00:23:30.003 "uuid": "1b79334e-2efd-4ea3-85f5-71d2a00f0468", 00:23:30.003 "assigned_rate_limits": { 00:23:30.003 "rw_ios_per_sec": 0, 00:23:30.003 "rw_mbytes_per_sec": 0, 00:23:30.003 "r_mbytes_per_sec": 0, 00:23:30.003 "w_mbytes_per_sec": 0 00:23:30.003 }, 00:23:30.003 "claimed": true, 00:23:30.003 "claim_type": "exclusive_write", 00:23:30.003 "zoned": false, 00:23:30.003 "supported_io_types": { 00:23:30.003 "read": true, 00:23:30.003 "write": true, 00:23:30.003 "unmap": 
true, 00:23:30.003 "flush": true, 00:23:30.003 "reset": true, 00:23:30.003 "nvme_admin": false, 00:23:30.003 "nvme_io": false, 00:23:30.003 "nvme_io_md": false, 00:23:30.004 "write_zeroes": true, 00:23:30.004 "zcopy": true, 00:23:30.004 "get_zone_info": false, 00:23:30.004 "zone_management": false, 00:23:30.004 "zone_append": false, 00:23:30.004 "compare": false, 00:23:30.004 "compare_and_write": false, 00:23:30.004 "abort": true, 00:23:30.004 "seek_hole": false, 00:23:30.004 "seek_data": false, 00:23:30.004 "copy": true, 00:23:30.004 "nvme_iov_md": false 00:23:30.004 }, 00:23:30.004 "memory_domains": [ 00:23:30.004 { 00:23:30.004 "dma_device_id": "system", 00:23:30.004 "dma_device_type": 1 00:23:30.004 }, 00:23:30.004 { 00:23:30.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:30.004 "dma_device_type": 2 00:23:30.004 } 00:23:30.004 ], 00:23:30.004 "driver_specific": {} 00:23:30.004 }' 00:23:30.004 02:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:30.261 02:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:30.261 02:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:30.261 02:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:30.261 02:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:30.261 02:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:30.261 02:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:30.261 02:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:30.261 02:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:30.261 02:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:30.519 02:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:30.519 02:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:30.519 02:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:30.519 02:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:30.519 02:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:30.801 02:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:30.801 "name": "BaseBdev4", 00:23:30.801 "aliases": [ 00:23:30.801 "417ec62f-a50a-4d3c-9da7-a3d14d0aca8b" 00:23:30.801 ], 00:23:30.801 "product_name": "Malloc disk", 00:23:30.801 "block_size": 512, 00:23:30.801 "num_blocks": 65536, 00:23:30.801 "uuid": "417ec62f-a50a-4d3c-9da7-a3d14d0aca8b", 00:23:30.801 "assigned_rate_limits": { 00:23:30.801 "rw_ios_per_sec": 0, 00:23:30.801 "rw_mbytes_per_sec": 0, 00:23:30.801 "r_mbytes_per_sec": 0, 00:23:30.801 "w_mbytes_per_sec": 0 00:23:30.801 }, 00:23:30.801 "claimed": true, 00:23:30.801 "claim_type": "exclusive_write", 00:23:30.801 "zoned": false, 00:23:30.801 "supported_io_types": { 00:23:30.801 "read": true, 00:23:30.801 "write": true, 00:23:30.801 "unmap": true, 00:23:30.801 "flush": true, 00:23:30.801 "reset": true, 00:23:30.801 "nvme_admin": false, 00:23:30.801 "nvme_io": false, 
00:23:30.801 "nvme_io_md": false, 00:23:30.801 "write_zeroes": true, 00:23:30.801 "zcopy": true, 00:23:30.801 "get_zone_info": false, 00:23:30.801 "zone_management": false, 00:23:30.801 "zone_append": false, 00:23:30.801 "compare": false, 00:23:30.801 "compare_and_write": false, 00:23:30.801 "abort": true, 00:23:30.801 "seek_hole": false, 00:23:30.801 "seek_data": false, 00:23:30.801 "copy": true, 00:23:30.801 "nvme_iov_md": false 00:23:30.801 }, 00:23:30.801 "memory_domains": [ 00:23:30.801 { 00:23:30.801 "dma_device_id": "system", 00:23:30.801 "dma_device_type": 1 00:23:30.801 }, 00:23:30.801 { 00:23:30.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:30.801 "dma_device_type": 2 00:23:30.801 } 00:23:30.801 ], 00:23:30.801 "driver_specific": {} 00:23:30.801 }' 00:23:30.801 02:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:30.801 02:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:30.801 02:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:30.801 02:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:30.801 02:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:30.801 02:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:30.801 02:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:31.059 02:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:31.059 02:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:31.059 02:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:31.059 02:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:31.059 02:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:31.059 02:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:31.625 [2024-07-11 02:29:21.920001] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:31.625 [2024-07-11 02:29:21.920028] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:31.625 [2024-07-11 02:29:21.920073] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:31.625 02:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:23:31.625 02:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:23:31.625 02:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:31.625 02:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:23:31.625 02:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:23:31.625 02:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:23:31.625 02:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:31.625 02:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:23:31.625 02:29:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:31.625 02:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:31.625 02:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:31.625 02:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:31.625 02:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:31.625 02:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:31.625 02:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:31.625 02:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:31.625 02:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.884 02:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:31.884 "name": "Existed_Raid", 00:23:31.884 "uuid": "34b316ae-5645-4b55-bc9a-905c277806c9", 00:23:31.884 "strip_size_kb": 64, 00:23:31.884 "state": "offline", 00:23:31.884 "raid_level": "concat", 00:23:31.884 "superblock": true, 00:23:31.884 "num_base_bdevs": 4, 00:23:31.884 "num_base_bdevs_discovered": 3, 00:23:31.884 "num_base_bdevs_operational": 3, 00:23:31.884 "base_bdevs_list": [ 00:23:31.884 { 00:23:31.884 "name": null, 00:23:31.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.884 "is_configured": false, 00:23:31.884 "data_offset": 2048, 00:23:31.884 "data_size": 63488 00:23:31.884 }, 00:23:31.884 { 00:23:31.884 "name": "BaseBdev2", 00:23:31.884 "uuid": "b4dbadf7-577b-4559-b798-89176053a18f", 00:23:31.884 "is_configured": true, 00:23:31.884 "data_offset": 2048, 00:23:31.884 "data_size": 63488 00:23:31.884 }, 00:23:31.884 { 00:23:31.884 "name": "BaseBdev3", 00:23:31.884 "uuid": "1b79334e-2efd-4ea3-85f5-71d2a00f0468", 00:23:31.884 "is_configured": true, 00:23:31.884 "data_offset": 2048, 00:23:31.884 "data_size": 63488 00:23:31.884 }, 00:23:31.884 { 00:23:31.884 "name": "BaseBdev4", 00:23:31.884 "uuid": "417ec62f-a50a-4d3c-9da7-a3d14d0aca8b", 00:23:31.884 "is_configured": true, 00:23:31.884 "data_offset": 2048, 00:23:31.884 "data_size": 63488 00:23:31.884 } 00:23:31.884 ] 00:23:31.884 }' 00:23:31.884 02:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:31.884 02:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.819 02:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:23:32.820 02:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:32.820 02:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.820 02:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:33.078 02:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:33.078 02:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:33.078 02:29:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:33.339 [2024-07-11 02:29:23.549304] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:33.339 02:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:33.339 02:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:33.339 02:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:33.339 02:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.597 02:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:33.597 02:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:33.597 02:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:33.855 [2024-07-11 02:29:24.050694] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:33.855 02:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:33.855 02:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:33.855 02:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:33.855 02:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.114 02:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:34.114 02:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:34.114 02:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:23:34.372 [2024-07-11 02:29:24.542483] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:34.372 [2024-07-11 02:29:24.542523] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15bfd70 name Existed_Raid, state offline 00:23:34.372 02:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:34.372 02:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:34.372 02:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.372 02:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:23:34.631 02:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:23:34.631 02:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:23:34.631 02:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:23:34.631 02:29:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:23:34.631 02:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:34.631 02:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:34.889 BaseBdev2 00:23:34.889 02:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:23:34.889 02:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:23:34.889 02:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:34.889 02:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:34.889 02:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:34.889 02:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:34.889 02:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:35.148 02:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:35.408 [ 00:23:35.408 { 00:23:35.408 "name": "BaseBdev2", 00:23:35.408 "aliases": [ 00:23:35.408 "7c1b6735-8d71-4125-84e3-9067abffbb24" 00:23:35.408 ], 00:23:35.408 "product_name": "Malloc disk", 00:23:35.408 "block_size": 512, 00:23:35.408 "num_blocks": 65536, 00:23:35.408 "uuid": "7c1b6735-8d71-4125-84e3-9067abffbb24", 00:23:35.408 "assigned_rate_limits": { 00:23:35.408 "rw_ios_per_sec": 0, 00:23:35.408 "rw_mbytes_per_sec": 0, 00:23:35.408 "r_mbytes_per_sec": 0, 00:23:35.408 "w_mbytes_per_sec": 0 00:23:35.408 }, 00:23:35.408 "claimed": false, 00:23:35.408 "zoned": false, 00:23:35.408 "supported_io_types": { 00:23:35.408 "read": true, 00:23:35.408 "write": true, 00:23:35.408 "unmap": true, 00:23:35.408 "flush": true, 00:23:35.408 "reset": true, 00:23:35.408 "nvme_admin": false, 00:23:35.408 "nvme_io": false, 00:23:35.408 "nvme_io_md": false, 00:23:35.408 "write_zeroes": true, 00:23:35.408 "zcopy": true, 00:23:35.408 "get_zone_info": false, 00:23:35.408 "zone_management": false, 00:23:35.408 "zone_append": false, 00:23:35.408 "compare": false, 00:23:35.408 "compare_and_write": false, 00:23:35.408 "abort": true, 00:23:35.408 "seek_hole": false, 00:23:35.408 "seek_data": false, 00:23:35.408 "copy": true, 00:23:35.408 "nvme_iov_md": false 00:23:35.408 }, 00:23:35.408 "memory_domains": [ 00:23:35.408 { 00:23:35.408 "dma_device_id": "system", 00:23:35.408 "dma_device_type": 1 00:23:35.408 }, 00:23:35.408 { 00:23:35.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:35.408 "dma_device_type": 2 00:23:35.408 } 00:23:35.408 ], 00:23:35.408 "driver_specific": {} 00:23:35.408 } 00:23:35.408 ] 00:23:35.408 02:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:35.408 02:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:35.408 02:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:35.408 02:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:35.667 BaseBdev3 00:23:35.667 02:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:23:35.667 02:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:23:35.667 02:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:35.667 02:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:35.667 02:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:35.667 02:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:35.667 02:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:35.927 02:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:35.927 [ 00:23:35.927 { 00:23:35.927 "name": "BaseBdev3", 00:23:35.927 "aliases": [ 00:23:35.927 "bcdd15f3-75ff-4d6a-928c-02f98d48af19" 00:23:35.927 ], 00:23:35.927 "product_name": "Malloc disk", 00:23:35.927 "block_size": 512, 00:23:35.927 "num_blocks": 65536, 00:23:35.927 "uuid": "bcdd15f3-75ff-4d6a-928c-02f98d48af19", 00:23:35.927 "assigned_rate_limits": { 00:23:35.927 "rw_ios_per_sec": 0, 00:23:35.927 "rw_mbytes_per_sec": 0, 00:23:35.927 "r_mbytes_per_sec": 0, 00:23:35.927 "w_mbytes_per_sec": 0 00:23:35.927 }, 00:23:35.927 "claimed": false, 00:23:35.927 "zoned": false, 00:23:35.927 "supported_io_types": { 00:23:35.927 "read": true, 00:23:35.927 "write": true, 00:23:35.927 "unmap": true, 00:23:35.927 "flush": true, 00:23:35.927 "reset": true, 00:23:35.927 "nvme_admin": false, 00:23:35.927 "nvme_io": false, 00:23:35.927 "nvme_io_md": false, 00:23:35.927 "write_zeroes": true, 00:23:35.927 "zcopy": true, 00:23:35.927 "get_zone_info": false, 00:23:35.927 "zone_management": false, 00:23:35.927 "zone_append": false, 00:23:35.927 "compare": false, 00:23:35.927 "compare_and_write": false, 00:23:35.927 "abort": true, 00:23:35.927 "seek_hole": false, 00:23:35.927 "seek_data": false, 00:23:35.927 "copy": true, 00:23:35.927 "nvme_iov_md": false 00:23:35.927 }, 00:23:35.927 "memory_domains": [ 00:23:35.927 { 00:23:35.927 "dma_device_id": "system", 00:23:35.927 "dma_device_type": 1 00:23:35.927 }, 00:23:35.927 { 00:23:35.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:35.927 "dma_device_type": 2 00:23:35.927 } 00:23:35.927 ], 00:23:35.927 "driver_specific": {} 00:23:35.927 } 00:23:35.927 ] 00:23:35.927 02:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:35.927 02:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:35.927 02:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:35.927 02:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:36.218 BaseBdev4 00:23:36.218 02:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev 
BaseBdev4 00:23:36.218 02:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:23:36.218 02:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:36.218 02:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:36.218 02:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:36.218 02:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:36.218 02:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:36.477 02:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:36.738 [ 00:23:36.738 { 00:23:36.738 "name": "BaseBdev4", 00:23:36.738 "aliases": [ 00:23:36.738 "ec594e79-d24e-45a6-ba41-b67dc1ad80c4" 00:23:36.738 ], 00:23:36.738 "product_name": "Malloc disk", 00:23:36.738 "block_size": 512, 00:23:36.738 "num_blocks": 65536, 00:23:36.738 "uuid": "ec594e79-d24e-45a6-ba41-b67dc1ad80c4", 00:23:36.738 "assigned_rate_limits": { 00:23:36.738 "rw_ios_per_sec": 0, 00:23:36.738 "rw_mbytes_per_sec": 0, 00:23:36.738 "r_mbytes_per_sec": 0, 00:23:36.738 "w_mbytes_per_sec": 0 00:23:36.738 }, 00:23:36.738 "claimed": false, 00:23:36.738 "zoned": false, 00:23:36.738 "supported_io_types": { 00:23:36.738 "read": true, 00:23:36.738 "write": true, 00:23:36.738 "unmap": true, 00:23:36.738 "flush": true, 00:23:36.738 "reset": true, 00:23:36.738 "nvme_admin": false, 00:23:36.738 "nvme_io": false, 00:23:36.738 "nvme_io_md": false, 00:23:36.738 "write_zeroes": true, 00:23:36.738 "zcopy": true, 00:23:36.738 "get_zone_info": false, 00:23:36.738 "zone_management": false, 00:23:36.738 "zone_append": false, 00:23:36.738 "compare": false, 00:23:36.738 "compare_and_write": false, 00:23:36.738 "abort": true, 00:23:36.738 "seek_hole": false, 00:23:36.738 "seek_data": false, 00:23:36.738 "copy": true, 00:23:36.738 "nvme_iov_md": false 00:23:36.738 }, 00:23:36.738 "memory_domains": [ 00:23:36.738 { 00:23:36.738 "dma_device_id": "system", 00:23:36.738 "dma_device_type": 1 00:23:36.738 }, 00:23:36.738 { 00:23:36.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:36.738 "dma_device_type": 2 00:23:36.738 } 00:23:36.738 ], 00:23:36.738 "driver_specific": {} 00:23:36.738 } 00:23:36.738 ] 00:23:36.738 02:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:36.738 02:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:36.738 02:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:36.738 02:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:37.041 [2024-07-11 02:29:27.293480] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:37.041 [2024-07-11 02:29:27.293520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:37.041 [2024-07-11 02:29:27.293541] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:37.041 [2024-07-11 02:29:27.294868] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:37.041 [2024-07-11 02:29:27.294909] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:37.041 02:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:37.041 02:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:37.041 02:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:37.041 02:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:37.041 02:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:37.041 02:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:37.041 02:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:37.041 02:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:37.041 02:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:37.041 02:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:37.041 02:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.041 02:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:37.302 02:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:37.302 "name": "Existed_Raid", 00:23:37.302 "uuid": "5666a473-656c-468e-a7b6-9bcf840dd898", 00:23:37.302 "strip_size_kb": 64, 00:23:37.302 "state": "configuring", 00:23:37.302 "raid_level": "concat", 00:23:37.302 "superblock": true, 00:23:37.302 "num_base_bdevs": 4, 00:23:37.302 "num_base_bdevs_discovered": 3, 00:23:37.302 "num_base_bdevs_operational": 4, 00:23:37.302 "base_bdevs_list": [ 00:23:37.302 { 00:23:37.302 "name": "BaseBdev1", 00:23:37.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.302 "is_configured": false, 00:23:37.302 "data_offset": 0, 00:23:37.302 "data_size": 0 00:23:37.302 }, 00:23:37.302 { 00:23:37.302 "name": "BaseBdev2", 00:23:37.302 "uuid": "7c1b6735-8d71-4125-84e3-9067abffbb24", 00:23:37.302 "is_configured": true, 00:23:37.302 "data_offset": 2048, 00:23:37.302 "data_size": 63488 00:23:37.302 }, 00:23:37.302 { 00:23:37.302 "name": "BaseBdev3", 00:23:37.302 "uuid": "bcdd15f3-75ff-4d6a-928c-02f98d48af19", 00:23:37.302 "is_configured": true, 00:23:37.302 "data_offset": 2048, 00:23:37.302 "data_size": 63488 00:23:37.302 }, 00:23:37.302 { 00:23:37.302 "name": "BaseBdev4", 00:23:37.302 "uuid": "ec594e79-d24e-45a6-ba41-b67dc1ad80c4", 00:23:37.302 "is_configured": true, 00:23:37.302 "data_offset": 2048, 00:23:37.302 "data_size": 63488 00:23:37.302 } 00:23:37.302 ] 00:23:37.302 }' 00:23:37.302 02:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:37.302 02:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:38.239 02:29:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:38.498 [2024-07-11 02:29:28.793437] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:38.498 02:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:38.498 02:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:38.498 02:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:38.498 02:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:38.498 02:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:38.498 02:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:38.498 02:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:38.498 02:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:38.498 02:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:38.498 02:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:38.498 02:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.498 02:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:38.757 02:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:38.757 "name": "Existed_Raid", 00:23:38.757 "uuid": "5666a473-656c-468e-a7b6-9bcf840dd898", 00:23:38.757 "strip_size_kb": 64, 00:23:38.757 "state": "configuring", 00:23:38.757 "raid_level": "concat", 00:23:38.757 "superblock": true, 00:23:38.757 "num_base_bdevs": 4, 00:23:38.757 "num_base_bdevs_discovered": 2, 00:23:38.757 "num_base_bdevs_operational": 4, 00:23:38.757 "base_bdevs_list": [ 00:23:38.757 { 00:23:38.757 "name": "BaseBdev1", 00:23:38.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.757 "is_configured": false, 00:23:38.757 "data_offset": 0, 00:23:38.757 "data_size": 0 00:23:38.757 }, 00:23:38.757 { 00:23:38.757 "name": null, 00:23:38.757 "uuid": "7c1b6735-8d71-4125-84e3-9067abffbb24", 00:23:38.757 "is_configured": false, 00:23:38.757 "data_offset": 2048, 00:23:38.757 "data_size": 63488 00:23:38.757 }, 00:23:38.757 { 00:23:38.757 "name": "BaseBdev3", 00:23:38.757 "uuid": "bcdd15f3-75ff-4d6a-928c-02f98d48af19", 00:23:38.757 "is_configured": true, 00:23:38.757 "data_offset": 2048, 00:23:38.757 "data_size": 63488 00:23:38.757 }, 00:23:38.757 { 00:23:38.757 "name": "BaseBdev4", 00:23:38.757 "uuid": "ec594e79-d24e-45a6-ba41-b67dc1ad80c4", 00:23:38.757 "is_configured": true, 00:23:38.757 "data_offset": 2048, 00:23:38.757 "data_size": 63488 00:23:38.757 } 00:23:38.757 ] 00:23:38.757 }' 00:23:38.757 02:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:38.757 02:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.692 02:29:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.692 02:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:39.951 02:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:23:39.951 02:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:40.209 [2024-07-11 02:29:30.589573] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:40.209 BaseBdev1 00:23:40.209 02:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:23:40.209 02:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:40.209 02:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:40.209 02:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:40.209 02:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:40.209 02:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:40.209 02:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:40.468 02:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:40.727 [ 00:23:40.727 { 00:23:40.727 "name": "BaseBdev1", 00:23:40.727 "aliases": [ 00:23:40.727 "97bdf844-570b-4931-ad5e-3841228eb579" 00:23:40.727 ], 00:23:40.727 "product_name": "Malloc disk", 00:23:40.727 "block_size": 512, 00:23:40.727 "num_blocks": 65536, 00:23:40.727 "uuid": "97bdf844-570b-4931-ad5e-3841228eb579", 00:23:40.727 "assigned_rate_limits": { 00:23:40.727 "rw_ios_per_sec": 0, 00:23:40.727 "rw_mbytes_per_sec": 0, 00:23:40.727 "r_mbytes_per_sec": 0, 00:23:40.727 "w_mbytes_per_sec": 0 00:23:40.727 }, 00:23:40.727 "claimed": true, 00:23:40.727 "claim_type": "exclusive_write", 00:23:40.727 "zoned": false, 00:23:40.727 "supported_io_types": { 00:23:40.727 "read": true, 00:23:40.727 "write": true, 00:23:40.727 "unmap": true, 00:23:40.727 "flush": true, 00:23:40.727 "reset": true, 00:23:40.727 "nvme_admin": false, 00:23:40.727 "nvme_io": false, 00:23:40.727 "nvme_io_md": false, 00:23:40.727 "write_zeroes": true, 00:23:40.727 "zcopy": true, 00:23:40.727 "get_zone_info": false, 00:23:40.727 "zone_management": false, 00:23:40.727 "zone_append": false, 00:23:40.727 "compare": false, 00:23:40.727 "compare_and_write": false, 00:23:40.727 "abort": true, 00:23:40.727 "seek_hole": false, 00:23:40.727 "seek_data": false, 00:23:40.727 "copy": true, 00:23:40.727 "nvme_iov_md": false 00:23:40.727 }, 00:23:40.727 "memory_domains": [ 00:23:40.727 { 00:23:40.727 "dma_device_id": "system", 00:23:40.727 "dma_device_type": 1 00:23:40.727 }, 00:23:40.727 { 00:23:40.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:40.727 "dma_device_type": 2 00:23:40.727 } 00:23:40.727 ], 00:23:40.727 "driver_specific": {} 00:23:40.727 } 00:23:40.727 ] 00:23:40.728 
02:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:40.728 02:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:40.728 02:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:40.728 02:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:40.728 02:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:40.728 02:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:40.728 02:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:40.728 02:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:40.728 02:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:40.728 02:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:40.728 02:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:40.728 02:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.728 02:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:40.987 02:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:40.987 "name": "Existed_Raid", 00:23:40.987 "uuid": "5666a473-656c-468e-a7b6-9bcf840dd898", 00:23:40.987 "strip_size_kb": 64, 00:23:40.987 "state": "configuring", 00:23:40.987 "raid_level": "concat", 00:23:40.987 "superblock": true, 00:23:40.987 "num_base_bdevs": 4, 00:23:40.987 "num_base_bdevs_discovered": 3, 00:23:40.987 "num_base_bdevs_operational": 4, 00:23:40.987 "base_bdevs_list": [ 00:23:40.987 { 00:23:40.987 "name": "BaseBdev1", 00:23:40.987 "uuid": "97bdf844-570b-4931-ad5e-3841228eb579", 00:23:40.987 "is_configured": true, 00:23:40.987 "data_offset": 2048, 00:23:40.987 "data_size": 63488 00:23:40.987 }, 00:23:40.987 { 00:23:40.987 "name": null, 00:23:40.987 "uuid": "7c1b6735-8d71-4125-84e3-9067abffbb24", 00:23:40.987 "is_configured": false, 00:23:40.987 "data_offset": 2048, 00:23:40.987 "data_size": 63488 00:23:40.987 }, 00:23:40.987 { 00:23:40.987 "name": "BaseBdev3", 00:23:40.987 "uuid": "bcdd15f3-75ff-4d6a-928c-02f98d48af19", 00:23:40.987 "is_configured": true, 00:23:40.987 "data_offset": 2048, 00:23:40.987 "data_size": 63488 00:23:40.987 }, 00:23:40.987 { 00:23:40.987 "name": "BaseBdev4", 00:23:40.987 "uuid": "ec594e79-d24e-45a6-ba41-b67dc1ad80c4", 00:23:40.987 "is_configured": true, 00:23:40.987 "data_offset": 2048, 00:23:40.987 "data_size": 63488 00:23:40.987 } 00:23:40.987 ] 00:23:40.987 }' 00:23:40.987 02:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:40.987 02:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.924 02:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.924 02:29:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:42.183 02:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:23:42.183 02:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:23:42.441 [2024-07-11 02:29:32.731318] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:42.441 02:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:42.441 02:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:42.441 02:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:42.441 02:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:42.441 02:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:42.441 02:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:42.441 02:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:42.441 02:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:42.442 02:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:42.442 02:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:42.442 02:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.442 02:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:43.009 02:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:43.010 "name": "Existed_Raid", 00:23:43.010 "uuid": "5666a473-656c-468e-a7b6-9bcf840dd898", 00:23:43.010 "strip_size_kb": 64, 00:23:43.010 "state": "configuring", 00:23:43.010 "raid_level": "concat", 00:23:43.010 "superblock": true, 00:23:43.010 "num_base_bdevs": 4, 00:23:43.010 "num_base_bdevs_discovered": 2, 00:23:43.010 "num_base_bdevs_operational": 4, 00:23:43.010 "base_bdevs_list": [ 00:23:43.010 { 00:23:43.010 "name": "BaseBdev1", 00:23:43.010 "uuid": "97bdf844-570b-4931-ad5e-3841228eb579", 00:23:43.010 "is_configured": true, 00:23:43.010 "data_offset": 2048, 00:23:43.010 "data_size": 63488 00:23:43.010 }, 00:23:43.010 { 00:23:43.010 "name": null, 00:23:43.010 "uuid": "7c1b6735-8d71-4125-84e3-9067abffbb24", 00:23:43.010 "is_configured": false, 00:23:43.010 "data_offset": 2048, 00:23:43.010 "data_size": 63488 00:23:43.010 }, 00:23:43.010 { 00:23:43.010 "name": null, 00:23:43.010 "uuid": "bcdd15f3-75ff-4d6a-928c-02f98d48af19", 00:23:43.010 "is_configured": false, 00:23:43.010 "data_offset": 2048, 00:23:43.010 "data_size": 63488 00:23:43.010 }, 00:23:43.010 { 00:23:43.010 "name": "BaseBdev4", 00:23:43.010 "uuid": "ec594e79-d24e-45a6-ba41-b67dc1ad80c4", 00:23:43.010 "is_configured": true, 00:23:43.010 "data_offset": 2048, 00:23:43.010 "data_size": 63488 00:23:43.010 } 00:23:43.010 ] 00:23:43.010 }' 00:23:43.010 02:29:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:43.010 02:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.945 02:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.945 02:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:44.512 02:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:23:44.512 02:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:44.771 [2024-07-11 02:29:35.041464] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:44.771 02:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:44.771 02:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:44.771 02:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:44.771 02:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:44.771 02:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:44.771 02:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:44.771 02:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:44.771 02:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:44.771 02:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:44.771 02:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:44.771 02:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.771 02:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:45.338 02:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:45.338 "name": "Existed_Raid", 00:23:45.338 "uuid": "5666a473-656c-468e-a7b6-9bcf840dd898", 00:23:45.338 "strip_size_kb": 64, 00:23:45.338 "state": "configuring", 00:23:45.338 "raid_level": "concat", 00:23:45.338 "superblock": true, 00:23:45.338 "num_base_bdevs": 4, 00:23:45.338 "num_base_bdevs_discovered": 3, 00:23:45.338 "num_base_bdevs_operational": 4, 00:23:45.338 "base_bdevs_list": [ 00:23:45.338 { 00:23:45.338 "name": "BaseBdev1", 00:23:45.338 "uuid": "97bdf844-570b-4931-ad5e-3841228eb579", 00:23:45.338 "is_configured": true, 00:23:45.338 "data_offset": 2048, 00:23:45.338 "data_size": 63488 00:23:45.338 }, 00:23:45.338 { 00:23:45.338 "name": null, 00:23:45.338 "uuid": "7c1b6735-8d71-4125-84e3-9067abffbb24", 00:23:45.338 "is_configured": false, 00:23:45.338 "data_offset": 2048, 00:23:45.338 "data_size": 63488 00:23:45.338 }, 00:23:45.338 { 00:23:45.338 "name": "BaseBdev3", 00:23:45.338 "uuid": "bcdd15f3-75ff-4d6a-928c-02f98d48af19", 
00:23:45.338 "is_configured": true, 00:23:45.338 "data_offset": 2048, 00:23:45.338 "data_size": 63488 00:23:45.338 }, 00:23:45.338 { 00:23:45.338 "name": "BaseBdev4", 00:23:45.338 "uuid": "ec594e79-d24e-45a6-ba41-b67dc1ad80c4", 00:23:45.338 "is_configured": true, 00:23:45.338 "data_offset": 2048, 00:23:45.338 "data_size": 63488 00:23:45.338 } 00:23:45.338 ] 00:23:45.338 }' 00:23:45.338 02:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:45.338 02:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:45.907 02:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.907 02:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:46.165 02:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:23:46.165 02:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:46.165 [2024-07-11 02:29:36.573537] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:46.425 02:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:46.425 02:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:46.425 02:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:46.425 02:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:46.425 02:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:46.425 02:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:46.425 02:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:46.425 02:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:46.425 02:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:46.425 02:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:46.425 02:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.425 02:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:46.684 02:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:46.684 "name": "Existed_Raid", 00:23:46.685 "uuid": "5666a473-656c-468e-a7b6-9bcf840dd898", 00:23:46.685 "strip_size_kb": 64, 00:23:46.685 "state": "configuring", 00:23:46.685 "raid_level": "concat", 00:23:46.685 "superblock": true, 00:23:46.685 "num_base_bdevs": 4, 00:23:46.685 "num_base_bdevs_discovered": 2, 00:23:46.685 "num_base_bdevs_operational": 4, 00:23:46.685 "base_bdevs_list": [ 00:23:46.685 { 00:23:46.685 "name": null, 00:23:46.685 "uuid": "97bdf844-570b-4931-ad5e-3841228eb579", 00:23:46.685 "is_configured": false, 00:23:46.685 "data_offset": 
2048, 00:23:46.685 "data_size": 63488 00:23:46.685 }, 00:23:46.685 { 00:23:46.685 "name": null, 00:23:46.685 "uuid": "7c1b6735-8d71-4125-84e3-9067abffbb24", 00:23:46.685 "is_configured": false, 00:23:46.685 "data_offset": 2048, 00:23:46.685 "data_size": 63488 00:23:46.685 }, 00:23:46.685 { 00:23:46.685 "name": "BaseBdev3", 00:23:46.685 "uuid": "bcdd15f3-75ff-4d6a-928c-02f98d48af19", 00:23:46.685 "is_configured": true, 00:23:46.685 "data_offset": 2048, 00:23:46.685 "data_size": 63488 00:23:46.685 }, 00:23:46.685 { 00:23:46.685 "name": "BaseBdev4", 00:23:46.685 "uuid": "ec594e79-d24e-45a6-ba41-b67dc1ad80c4", 00:23:46.685 "is_configured": true, 00:23:46.685 "data_offset": 2048, 00:23:46.685 "data_size": 63488 00:23:46.685 } 00:23:46.685 ] 00:23:46.685 }' 00:23:46.685 02:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:46.685 02:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.253 02:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.253 02:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:47.512 02:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:23:47.512 02:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:47.512 [2024-07-11 02:29:37.919585] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:47.771 02:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:47.771 02:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:47.771 02:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:47.771 02:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:47.771 02:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:47.771 02:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:47.771 02:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:47.771 02:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:47.771 02:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:47.771 02:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:47.772 02:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.772 02:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:48.339 02:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:48.339 "name": "Existed_Raid", 00:23:48.339 "uuid": "5666a473-656c-468e-a7b6-9bcf840dd898", 00:23:48.339 "strip_size_kb": 64, 
00:23:48.339 "state": "configuring", 00:23:48.339 "raid_level": "concat", 00:23:48.339 "superblock": true, 00:23:48.339 "num_base_bdevs": 4, 00:23:48.339 "num_base_bdevs_discovered": 3, 00:23:48.339 "num_base_bdevs_operational": 4, 00:23:48.339 "base_bdevs_list": [ 00:23:48.339 { 00:23:48.339 "name": null, 00:23:48.339 "uuid": "97bdf844-570b-4931-ad5e-3841228eb579", 00:23:48.339 "is_configured": false, 00:23:48.339 "data_offset": 2048, 00:23:48.339 "data_size": 63488 00:23:48.339 }, 00:23:48.339 { 00:23:48.339 "name": "BaseBdev2", 00:23:48.339 "uuid": "7c1b6735-8d71-4125-84e3-9067abffbb24", 00:23:48.339 "is_configured": true, 00:23:48.339 "data_offset": 2048, 00:23:48.339 "data_size": 63488 00:23:48.339 }, 00:23:48.339 { 00:23:48.339 "name": "BaseBdev3", 00:23:48.339 "uuid": "bcdd15f3-75ff-4d6a-928c-02f98d48af19", 00:23:48.339 "is_configured": true, 00:23:48.339 "data_offset": 2048, 00:23:48.339 "data_size": 63488 00:23:48.339 }, 00:23:48.339 { 00:23:48.339 "name": "BaseBdev4", 00:23:48.339 "uuid": "ec594e79-d24e-45a6-ba41-b67dc1ad80c4", 00:23:48.339 "is_configured": true, 00:23:48.339 "data_offset": 2048, 00:23:48.339 "data_size": 63488 00:23:48.339 } 00:23:48.339 ] 00:23:48.339 }' 00:23:48.339 02:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:48.339 02:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:49.275 02:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.275 02:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:49.275 02:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:23:49.275 02:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.275 02:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:49.843 02:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 97bdf844-570b-4931-ad5e-3841228eb579 00:23:50.103 [2024-07-11 02:29:40.346514] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:50.103 [2024-07-11 02:29:40.346669] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1411ba0 00:23:50.103 [2024-07-11 02:29:40.346682] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:50.103 [2024-07-11 02:29:40.346863] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1405e80 00:23:50.103 [2024-07-11 02:29:40.346983] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1411ba0 00:23:50.103 [2024-07-11 02:29:40.346993] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1411ba0 00:23:50.103 [2024-07-11 02:29:40.347087] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:50.103 NewBaseBdev 00:23:50.103 02:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:23:50.103 02:29:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:23:50.103 02:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:50.103 02:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:50.103 02:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:50.103 02:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:50.103 02:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:50.672 02:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:51.240 [ 00:23:51.240 { 00:23:51.240 "name": "NewBaseBdev", 00:23:51.240 "aliases": [ 00:23:51.240 "97bdf844-570b-4931-ad5e-3841228eb579" 00:23:51.240 ], 00:23:51.240 "product_name": "Malloc disk", 00:23:51.240 "block_size": 512, 00:23:51.240 "num_blocks": 65536, 00:23:51.240 "uuid": "97bdf844-570b-4931-ad5e-3841228eb579", 00:23:51.240 "assigned_rate_limits": { 00:23:51.240 "rw_ios_per_sec": 0, 00:23:51.240 "rw_mbytes_per_sec": 0, 00:23:51.240 "r_mbytes_per_sec": 0, 00:23:51.240 "w_mbytes_per_sec": 0 00:23:51.240 }, 00:23:51.240 "claimed": true, 00:23:51.240 "claim_type": "exclusive_write", 00:23:51.240 "zoned": false, 00:23:51.240 "supported_io_types": { 00:23:51.240 "read": true, 00:23:51.240 "write": true, 00:23:51.240 "unmap": true, 00:23:51.240 "flush": true, 00:23:51.240 "reset": true, 00:23:51.240 "nvme_admin": false, 00:23:51.240 "nvme_io": false, 00:23:51.240 "nvme_io_md": false, 00:23:51.240 "write_zeroes": true, 00:23:51.240 "zcopy": true, 00:23:51.240 "get_zone_info": false, 00:23:51.240 "zone_management": false, 00:23:51.240 "zone_append": false, 00:23:51.240 "compare": false, 00:23:51.240 "compare_and_write": false, 00:23:51.240 "abort": true, 00:23:51.240 "seek_hole": false, 00:23:51.240 "seek_data": false, 00:23:51.240 "copy": true, 00:23:51.240 "nvme_iov_md": false 00:23:51.240 }, 00:23:51.240 "memory_domains": [ 00:23:51.240 { 00:23:51.240 "dma_device_id": "system", 00:23:51.240 "dma_device_type": 1 00:23:51.240 }, 00:23:51.240 { 00:23:51.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:51.240 "dma_device_type": 2 00:23:51.240 } 00:23:51.240 ], 00:23:51.240 "driver_specific": {} 00:23:51.240 } 00:23:51.240 ] 00:23:51.240 02:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:51.240 02:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:23:51.240 02:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:51.240 02:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:51.240 02:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:51.240 02:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:51.240 02:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:51.240 02:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:23:51.240 02:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:51.240 02:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:51.240 02:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:51.240 02:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.240 02:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:51.498 02:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:51.498 "name": "Existed_Raid", 00:23:51.498 "uuid": "5666a473-656c-468e-a7b6-9bcf840dd898", 00:23:51.498 "strip_size_kb": 64, 00:23:51.498 "state": "online", 00:23:51.498 "raid_level": "concat", 00:23:51.498 "superblock": true, 00:23:51.498 "num_base_bdevs": 4, 00:23:51.498 "num_base_bdevs_discovered": 4, 00:23:51.498 "num_base_bdevs_operational": 4, 00:23:51.498 "base_bdevs_list": [ 00:23:51.498 { 00:23:51.498 "name": "NewBaseBdev", 00:23:51.498 "uuid": "97bdf844-570b-4931-ad5e-3841228eb579", 00:23:51.498 "is_configured": true, 00:23:51.498 "data_offset": 2048, 00:23:51.498 "data_size": 63488 00:23:51.498 }, 00:23:51.498 { 00:23:51.498 "name": "BaseBdev2", 00:23:51.498 "uuid": "7c1b6735-8d71-4125-84e3-9067abffbb24", 00:23:51.498 "is_configured": true, 00:23:51.498 "data_offset": 2048, 00:23:51.498 "data_size": 63488 00:23:51.498 }, 00:23:51.498 { 00:23:51.498 "name": "BaseBdev3", 00:23:51.498 "uuid": "bcdd15f3-75ff-4d6a-928c-02f98d48af19", 00:23:51.498 "is_configured": true, 00:23:51.498 "data_offset": 2048, 00:23:51.498 "data_size": 63488 00:23:51.498 }, 00:23:51.498 { 00:23:51.498 "name": "BaseBdev4", 00:23:51.498 "uuid": "ec594e79-d24e-45a6-ba41-b67dc1ad80c4", 00:23:51.498 "is_configured": true, 00:23:51.498 "data_offset": 2048, 00:23:51.498 "data_size": 63488 00:23:51.498 } 00:23:51.498 ] 00:23:51.498 }' 00:23:51.498 02:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:51.498 02:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:52.432 02:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:23:52.432 02:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:52.432 02:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:52.432 02:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:52.432 02:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:52.432 02:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:23:52.433 02:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:52.433 02:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:52.433 [2024-07-11 02:29:42.733164] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:52.433 02:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # 
raid_bdev_info='{ 00:23:52.433 "name": "Existed_Raid", 00:23:52.433 "aliases": [ 00:23:52.433 "5666a473-656c-468e-a7b6-9bcf840dd898" 00:23:52.433 ], 00:23:52.433 "product_name": "Raid Volume", 00:23:52.433 "block_size": 512, 00:23:52.433 "num_blocks": 253952, 00:23:52.433 "uuid": "5666a473-656c-468e-a7b6-9bcf840dd898", 00:23:52.433 "assigned_rate_limits": { 00:23:52.433 "rw_ios_per_sec": 0, 00:23:52.433 "rw_mbytes_per_sec": 0, 00:23:52.433 "r_mbytes_per_sec": 0, 00:23:52.433 "w_mbytes_per_sec": 0 00:23:52.433 }, 00:23:52.433 "claimed": false, 00:23:52.433 "zoned": false, 00:23:52.433 "supported_io_types": { 00:23:52.433 "read": true, 00:23:52.433 "write": true, 00:23:52.433 "unmap": true, 00:23:52.433 "flush": true, 00:23:52.433 "reset": true, 00:23:52.433 "nvme_admin": false, 00:23:52.433 "nvme_io": false, 00:23:52.433 "nvme_io_md": false, 00:23:52.433 "write_zeroes": true, 00:23:52.433 "zcopy": false, 00:23:52.433 "get_zone_info": false, 00:23:52.433 "zone_management": false, 00:23:52.433 "zone_append": false, 00:23:52.433 "compare": false, 00:23:52.433 "compare_and_write": false, 00:23:52.433 "abort": false, 00:23:52.433 "seek_hole": false, 00:23:52.433 "seek_data": false, 00:23:52.433 "copy": false, 00:23:52.433 "nvme_iov_md": false 00:23:52.433 }, 00:23:52.433 "memory_domains": [ 00:23:52.433 { 00:23:52.433 "dma_device_id": "system", 00:23:52.433 "dma_device_type": 1 00:23:52.433 }, 00:23:52.433 { 00:23:52.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.433 "dma_device_type": 2 00:23:52.433 }, 00:23:52.433 { 00:23:52.433 "dma_device_id": "system", 00:23:52.433 "dma_device_type": 1 00:23:52.433 }, 00:23:52.433 { 00:23:52.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.433 "dma_device_type": 2 00:23:52.433 }, 00:23:52.433 { 00:23:52.433 "dma_device_id": "system", 00:23:52.433 "dma_device_type": 1 00:23:52.433 }, 00:23:52.433 { 00:23:52.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.433 "dma_device_type": 2 00:23:52.433 }, 00:23:52.433 { 00:23:52.433 "dma_device_id": "system", 00:23:52.433 "dma_device_type": 1 00:23:52.433 }, 00:23:52.433 { 00:23:52.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.433 "dma_device_type": 2 00:23:52.433 } 00:23:52.433 ], 00:23:52.433 "driver_specific": { 00:23:52.433 "raid": { 00:23:52.433 "uuid": "5666a473-656c-468e-a7b6-9bcf840dd898", 00:23:52.433 "strip_size_kb": 64, 00:23:52.433 "state": "online", 00:23:52.433 "raid_level": "concat", 00:23:52.433 "superblock": true, 00:23:52.433 "num_base_bdevs": 4, 00:23:52.433 "num_base_bdevs_discovered": 4, 00:23:52.433 "num_base_bdevs_operational": 4, 00:23:52.433 "base_bdevs_list": [ 00:23:52.433 { 00:23:52.433 "name": "NewBaseBdev", 00:23:52.433 "uuid": "97bdf844-570b-4931-ad5e-3841228eb579", 00:23:52.433 "is_configured": true, 00:23:52.433 "data_offset": 2048, 00:23:52.433 "data_size": 63488 00:23:52.433 }, 00:23:52.433 { 00:23:52.433 "name": "BaseBdev2", 00:23:52.433 "uuid": "7c1b6735-8d71-4125-84e3-9067abffbb24", 00:23:52.433 "is_configured": true, 00:23:52.433 "data_offset": 2048, 00:23:52.433 "data_size": 63488 00:23:52.433 }, 00:23:52.433 { 00:23:52.433 "name": "BaseBdev3", 00:23:52.433 "uuid": "bcdd15f3-75ff-4d6a-928c-02f98d48af19", 00:23:52.433 "is_configured": true, 00:23:52.433 "data_offset": 2048, 00:23:52.433 "data_size": 63488 00:23:52.433 }, 00:23:52.433 { 00:23:52.433 "name": "BaseBdev4", 00:23:52.433 "uuid": "ec594e79-d24e-45a6-ba41-b67dc1ad80c4", 00:23:52.433 "is_configured": true, 00:23:52.433 "data_offset": 2048, 00:23:52.433 "data_size": 63488 00:23:52.433 } 
00:23:52.433 ] 00:23:52.433 } 00:23:52.433 } 00:23:52.433 }' 00:23:52.433 02:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:52.433 02:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:23:52.433 BaseBdev2 00:23:52.433 BaseBdev3 00:23:52.433 BaseBdev4' 00:23:52.433 02:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:52.433 02:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:23:52.433 02:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:52.691 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:52.691 "name": "NewBaseBdev", 00:23:52.691 "aliases": [ 00:23:52.692 "97bdf844-570b-4931-ad5e-3841228eb579" 00:23:52.692 ], 00:23:52.692 "product_name": "Malloc disk", 00:23:52.692 "block_size": 512, 00:23:52.692 "num_blocks": 65536, 00:23:52.692 "uuid": "97bdf844-570b-4931-ad5e-3841228eb579", 00:23:52.692 "assigned_rate_limits": { 00:23:52.692 "rw_ios_per_sec": 0, 00:23:52.692 "rw_mbytes_per_sec": 0, 00:23:52.692 "r_mbytes_per_sec": 0, 00:23:52.692 "w_mbytes_per_sec": 0 00:23:52.692 }, 00:23:52.692 "claimed": true, 00:23:52.692 "claim_type": "exclusive_write", 00:23:52.692 "zoned": false, 00:23:52.692 "supported_io_types": { 00:23:52.692 "read": true, 00:23:52.692 "write": true, 00:23:52.692 "unmap": true, 00:23:52.692 "flush": true, 00:23:52.692 "reset": true, 00:23:52.692 "nvme_admin": false, 00:23:52.692 "nvme_io": false, 00:23:52.692 "nvme_io_md": false, 00:23:52.692 "write_zeroes": true, 00:23:52.692 "zcopy": true, 00:23:52.692 "get_zone_info": false, 00:23:52.692 "zone_management": false, 00:23:52.692 "zone_append": false, 00:23:52.692 "compare": false, 00:23:52.692 "compare_and_write": false, 00:23:52.692 "abort": true, 00:23:52.692 "seek_hole": false, 00:23:52.692 "seek_data": false, 00:23:52.692 "copy": true, 00:23:52.692 "nvme_iov_md": false 00:23:52.692 }, 00:23:52.692 "memory_domains": [ 00:23:52.692 { 00:23:52.692 "dma_device_id": "system", 00:23:52.692 "dma_device_type": 1 00:23:52.692 }, 00:23:52.692 { 00:23:52.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.692 "dma_device_type": 2 00:23:52.692 } 00:23:52.692 ], 00:23:52.692 "driver_specific": {} 00:23:52.692 }' 00:23:52.692 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:52.692 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:52.950 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:52.950 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:52.950 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:52.950 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:52.950 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:52.950 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:52.950 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:52.950 
02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:52.950 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:53.207 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:53.207 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:53.207 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:53.207 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:53.465 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:53.465 "name": "BaseBdev2", 00:23:53.465 "aliases": [ 00:23:53.465 "7c1b6735-8d71-4125-84e3-9067abffbb24" 00:23:53.465 ], 00:23:53.465 "product_name": "Malloc disk", 00:23:53.465 "block_size": 512, 00:23:53.465 "num_blocks": 65536, 00:23:53.465 "uuid": "7c1b6735-8d71-4125-84e3-9067abffbb24", 00:23:53.465 "assigned_rate_limits": { 00:23:53.465 "rw_ios_per_sec": 0, 00:23:53.465 "rw_mbytes_per_sec": 0, 00:23:53.465 "r_mbytes_per_sec": 0, 00:23:53.465 "w_mbytes_per_sec": 0 00:23:53.465 }, 00:23:53.465 "claimed": true, 00:23:53.465 "claim_type": "exclusive_write", 00:23:53.465 "zoned": false, 00:23:53.465 "supported_io_types": { 00:23:53.465 "read": true, 00:23:53.465 "write": true, 00:23:53.465 "unmap": true, 00:23:53.465 "flush": true, 00:23:53.465 "reset": true, 00:23:53.465 "nvme_admin": false, 00:23:53.465 "nvme_io": false, 00:23:53.465 "nvme_io_md": false, 00:23:53.465 "write_zeroes": true, 00:23:53.465 "zcopy": true, 00:23:53.465 "get_zone_info": false, 00:23:53.465 "zone_management": false, 00:23:53.465 "zone_append": false, 00:23:53.465 "compare": false, 00:23:53.465 "compare_and_write": false, 00:23:53.465 "abort": true, 00:23:53.465 "seek_hole": false, 00:23:53.465 "seek_data": false, 00:23:53.465 "copy": true, 00:23:53.465 "nvme_iov_md": false 00:23:53.465 }, 00:23:53.465 "memory_domains": [ 00:23:53.465 { 00:23:53.465 "dma_device_id": "system", 00:23:53.465 "dma_device_type": 1 00:23:53.465 }, 00:23:53.465 { 00:23:53.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:53.465 "dma_device_type": 2 00:23:53.465 } 00:23:53.465 ], 00:23:53.465 "driver_specific": {} 00:23:53.465 }' 00:23:53.465 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:53.465 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:53.465 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:53.465 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:53.465 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:53.465 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:53.465 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:53.465 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:53.724 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:53.724 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:53.724 02:29:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:53.724 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:53.724 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:53.724 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:53.724 02:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:53.982 02:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:53.982 "name": "BaseBdev3", 00:23:53.982 "aliases": [ 00:23:53.982 "bcdd15f3-75ff-4d6a-928c-02f98d48af19" 00:23:53.982 ], 00:23:53.982 "product_name": "Malloc disk", 00:23:53.982 "block_size": 512, 00:23:53.982 "num_blocks": 65536, 00:23:53.982 "uuid": "bcdd15f3-75ff-4d6a-928c-02f98d48af19", 00:23:53.982 "assigned_rate_limits": { 00:23:53.982 "rw_ios_per_sec": 0, 00:23:53.982 "rw_mbytes_per_sec": 0, 00:23:53.982 "r_mbytes_per_sec": 0, 00:23:53.982 "w_mbytes_per_sec": 0 00:23:53.982 }, 00:23:53.982 "claimed": true, 00:23:53.982 "claim_type": "exclusive_write", 00:23:53.982 "zoned": false, 00:23:53.982 "supported_io_types": { 00:23:53.982 "read": true, 00:23:53.982 "write": true, 00:23:53.982 "unmap": true, 00:23:53.982 "flush": true, 00:23:53.982 "reset": true, 00:23:53.982 "nvme_admin": false, 00:23:53.982 "nvme_io": false, 00:23:53.982 "nvme_io_md": false, 00:23:53.982 "write_zeroes": true, 00:23:53.982 "zcopy": true, 00:23:53.982 "get_zone_info": false, 00:23:53.982 "zone_management": false, 00:23:53.982 "zone_append": false, 00:23:53.982 "compare": false, 00:23:53.982 "compare_and_write": false, 00:23:53.982 "abort": true, 00:23:53.982 "seek_hole": false, 00:23:53.982 "seek_data": false, 00:23:53.982 "copy": true, 00:23:53.982 "nvme_iov_md": false 00:23:53.982 }, 00:23:53.982 "memory_domains": [ 00:23:53.982 { 00:23:53.982 "dma_device_id": "system", 00:23:53.982 "dma_device_type": 1 00:23:53.982 }, 00:23:53.982 { 00:23:53.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:53.982 "dma_device_type": 2 00:23:53.982 } 00:23:53.982 ], 00:23:53.982 "driver_specific": {} 00:23:53.982 }' 00:23:53.982 02:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:53.982 02:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:53.982 02:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:53.982 02:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:53.982 02:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:54.240 02:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:54.240 02:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:54.240 02:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:54.240 02:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:54.240 02:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:54.240 02:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:54.240 02:29:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:54.240 02:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:54.240 02:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:54.240 02:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:54.497 02:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:54.497 "name": "BaseBdev4", 00:23:54.497 "aliases": [ 00:23:54.497 "ec594e79-d24e-45a6-ba41-b67dc1ad80c4" 00:23:54.497 ], 00:23:54.497 "product_name": "Malloc disk", 00:23:54.497 "block_size": 512, 00:23:54.497 "num_blocks": 65536, 00:23:54.497 "uuid": "ec594e79-d24e-45a6-ba41-b67dc1ad80c4", 00:23:54.497 "assigned_rate_limits": { 00:23:54.497 "rw_ios_per_sec": 0, 00:23:54.497 "rw_mbytes_per_sec": 0, 00:23:54.497 "r_mbytes_per_sec": 0, 00:23:54.497 "w_mbytes_per_sec": 0 00:23:54.497 }, 00:23:54.497 "claimed": true, 00:23:54.497 "claim_type": "exclusive_write", 00:23:54.497 "zoned": false, 00:23:54.497 "supported_io_types": { 00:23:54.497 "read": true, 00:23:54.497 "write": true, 00:23:54.497 "unmap": true, 00:23:54.497 "flush": true, 00:23:54.497 "reset": true, 00:23:54.497 "nvme_admin": false, 00:23:54.497 "nvme_io": false, 00:23:54.497 "nvme_io_md": false, 00:23:54.497 "write_zeroes": true, 00:23:54.497 "zcopy": true, 00:23:54.497 "get_zone_info": false, 00:23:54.497 "zone_management": false, 00:23:54.497 "zone_append": false, 00:23:54.497 "compare": false, 00:23:54.497 "compare_and_write": false, 00:23:54.497 "abort": true, 00:23:54.497 "seek_hole": false, 00:23:54.497 "seek_data": false, 00:23:54.497 "copy": true, 00:23:54.497 "nvme_iov_md": false 00:23:54.497 }, 00:23:54.497 "memory_domains": [ 00:23:54.497 { 00:23:54.497 "dma_device_id": "system", 00:23:54.497 "dma_device_type": 1 00:23:54.497 }, 00:23:54.497 { 00:23:54.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:54.497 "dma_device_type": 2 00:23:54.497 } 00:23:54.497 ], 00:23:54.497 "driver_specific": {} 00:23:54.497 }' 00:23:54.497 02:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:54.497 02:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:54.497 02:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:54.497 02:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:54.755 02:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:54.755 02:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:54.755 02:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:54.755 02:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:54.755 02:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:54.755 02:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:54.755 02:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:55.013 02:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:55.013 02:29:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:55.013 [2024-07-11 02:29:45.411977] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:55.013 [2024-07-11 02:29:45.412003] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:55.013 [2024-07-11 02:29:45.412054] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:55.013 [2024-07-11 02:29:45.412114] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:55.013 [2024-07-11 02:29:45.412127] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1411ba0 name Existed_Raid, state offline 00:23:55.013 02:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1978387 00:23:55.013 02:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1978387 ']' 00:23:55.013 02:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 1978387 00:23:55.013 02:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:23:55.272 02:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:55.272 02:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1978387 00:23:55.272 02:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:55.272 02:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:55.272 02:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1978387' 00:23:55.272 killing process with pid 1978387 00:23:55.272 02:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 1978387 00:23:55.272 [2024-07-11 02:29:45.477541] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:55.272 02:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 1978387 00:23:55.272 [2024-07-11 02:29:45.518083] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:55.531 02:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:23:55.531 00:23:55.531 real 0m37.235s 00:23:55.531 user 1m8.562s 00:23:55.531 sys 0m6.469s 00:23:55.532 02:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:55.532 02:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:55.532 ************************************ 00:23:55.532 END TEST raid_state_function_test_sb 00:23:55.532 ************************************ 00:23:55.532 02:29:45 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:55.532 02:29:45 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:23:55.532 02:29:45 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:23:55.532 02:29:45 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:55.532 02:29:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:55.532 ************************************ 00:23:55.532 START TEST raid_superblock_test 00:23:55.532 
************************************ 00:23:55.532 02:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 4 00:23:55.532 02:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:23:55.532 02:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:23:55.532 02:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:23:55.532 02:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:23:55.532 02:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:23:55.532 02:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:23:55.532 02:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:23:55.532 02:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:23:55.532 02:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:23:55.532 02:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:23:55.532 02:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:23:55.532 02:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:23:55.532 02:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:23:55.532 02:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:23:55.532 02:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:23:55.532 02:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:23:55.532 02:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1983835 00:23:55.532 02:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1983835 /var/tmp/spdk-raid.sock 00:23:55.532 02:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:55.532 02:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 1983835 ']' 00:23:55.532 02:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:55.532 02:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:55.532 02:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:55.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:55.532 02:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:55.532 02:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.532 [2024-07-11 02:29:45.914422] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:23:55.532 [2024-07-11 02:29:45.914556] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1983835 ] 00:23:55.791 [2024-07-11 02:29:46.127155] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.791 [2024-07-11 02:29:46.178371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.049 [2024-07-11 02:29:46.252200] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:56.049 [2024-07-11 02:29:46.252235] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:56.619 02:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:56.619 02:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:23:56.619 02:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:23:56.619 02:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:56.619 02:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:23:56.619 02:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:23:56.619 02:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:56.619 02:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:56.619 02:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:56.619 02:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:56.619 02:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:23:56.877 malloc1 00:23:56.877 02:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:56.877 [2024-07-11 02:29:47.287496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:56.877 [2024-07-11 02:29:47.287540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.877 [2024-07-11 02:29:47.287562] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14bede0 00:23:56.877 [2024-07-11 02:29:47.287580] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.877 [2024-07-11 02:29:47.289268] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.877 [2024-07-11 02:29:47.289296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:56.877 pt1 00:23:57.136 02:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:57.136 02:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:57.136 02:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:23:57.136 02:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:23:57.136 02:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # 
local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:57.136 02:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:57.136 02:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:57.136 02:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:57.136 02:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:23:57.395 malloc2 00:23:57.654 02:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:57.654 [2024-07-11 02:29:48.051568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:57.654 [2024-07-11 02:29:48.051615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:57.654 [2024-07-11 02:29:48.051634] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14b6380 00:23:57.654 [2024-07-11 02:29:48.051646] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:57.654 [2024-07-11 02:29:48.053203] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:57.654 [2024-07-11 02:29:48.053233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:57.654 pt2 00:23:57.654 02:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:57.654 02:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:57.654 02:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:23:57.654 02:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:23:57.654 02:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:57.654 02:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:57.654 02:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:57.654 02:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:57.654 02:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:23:57.913 malloc3 00:23:57.913 02:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:58.172 [2024-07-11 02:29:48.550594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:58.172 [2024-07-11 02:29:48.550637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:58.172 [2024-07-11 02:29:48.550654] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14b8fb0 00:23:58.172 [2024-07-11 02:29:48.550666] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:58.172 [2024-07-11 02:29:48.552182] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:58.172 [2024-07-11 02:29:48.552212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:58.172 pt3 00:23:58.172 02:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:58.172 02:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:58.172 02:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:23:58.172 02:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:23:58.172 02:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:23:58.172 02:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:58.172 02:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:58.172 02:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:58.172 02:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:23:58.431 malloc4 00:23:58.431 02:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:58.690 [2024-07-11 02:29:49.048604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:58.690 [2024-07-11 02:29:49.048650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:58.690 [2024-07-11 02:29:49.048672] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14ba760 00:23:58.690 [2024-07-11 02:29:49.048685] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:58.690 [2024-07-11 02:29:49.050194] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:58.690 [2024-07-11 02:29:49.050223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:58.690 pt4 00:23:58.690 02:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:58.690 02:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:58.690 02:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:23:58.948 [2024-07-11 02:29:49.293285] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:58.949 [2024-07-11 02:29:49.294573] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:58.949 [2024-07-11 02:29:49.294627] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:58.949 [2024-07-11 02:29:49.294669] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:58.949 [2024-07-11 02:29:49.294846] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x14b9cc0 00:23:58.949 [2024-07-11 02:29:49.294858] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:58.949 [2024-07-11 02:29:49.295052] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14bbd00 00:23:58.949 [2024-07-11 02:29:49.295194] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14b9cc0 00:23:58.949 [2024-07-11 02:29:49.295206] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x14b9cc0 00:23:58.949 [2024-07-11 02:29:49.295302] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:58.949 02:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:23:58.949 02:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:58.949 02:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:58.949 02:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:58.949 02:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:58.949 02:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:58.949 02:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:58.949 02:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:58.949 02:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:58.949 02:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:58.949 02:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.949 02:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.208 02:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:59.208 "name": "raid_bdev1", 00:23:59.208 "uuid": "8ee24ae7-6c77-41f7-b2fd-215a07f4e301", 00:23:59.208 "strip_size_kb": 64, 00:23:59.208 "state": "online", 00:23:59.208 "raid_level": "concat", 00:23:59.208 "superblock": true, 00:23:59.208 "num_base_bdevs": 4, 00:23:59.208 "num_base_bdevs_discovered": 4, 00:23:59.208 "num_base_bdevs_operational": 4, 00:23:59.208 "base_bdevs_list": [ 00:23:59.208 { 00:23:59.208 "name": "pt1", 00:23:59.208 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:59.208 "is_configured": true, 00:23:59.208 "data_offset": 2048, 00:23:59.208 "data_size": 63488 00:23:59.208 }, 00:23:59.208 { 00:23:59.208 "name": "pt2", 00:23:59.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:59.208 "is_configured": true, 00:23:59.208 "data_offset": 2048, 00:23:59.208 "data_size": 63488 00:23:59.208 }, 00:23:59.208 { 00:23:59.208 "name": "pt3", 00:23:59.208 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:59.208 "is_configured": true, 00:23:59.208 "data_offset": 2048, 00:23:59.208 "data_size": 63488 00:23:59.208 }, 00:23:59.208 { 00:23:59.208 "name": "pt4", 00:23:59.208 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:59.208 "is_configured": true, 00:23:59.208 "data_offset": 2048, 00:23:59.208 "data_size": 63488 00:23:59.208 } 00:23:59.208 ] 00:23:59.208 }' 00:23:59.208 02:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:59.208 02:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.774 02:29:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:23:59.774 02:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:59.774 02:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:59.774 02:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:59.774 02:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:59.774 02:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:59.774 02:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:59.774 02:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:00.032 [2024-07-11 02:29:50.380453] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:00.032 02:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:00.032 "name": "raid_bdev1", 00:24:00.032 "aliases": [ 00:24:00.032 "8ee24ae7-6c77-41f7-b2fd-215a07f4e301" 00:24:00.032 ], 00:24:00.032 "product_name": "Raid Volume", 00:24:00.032 "block_size": 512, 00:24:00.032 "num_blocks": 253952, 00:24:00.032 "uuid": "8ee24ae7-6c77-41f7-b2fd-215a07f4e301", 00:24:00.032 "assigned_rate_limits": { 00:24:00.032 "rw_ios_per_sec": 0, 00:24:00.032 "rw_mbytes_per_sec": 0, 00:24:00.032 "r_mbytes_per_sec": 0, 00:24:00.032 "w_mbytes_per_sec": 0 00:24:00.032 }, 00:24:00.032 "claimed": false, 00:24:00.032 "zoned": false, 00:24:00.032 "supported_io_types": { 00:24:00.032 "read": true, 00:24:00.032 "write": true, 00:24:00.032 "unmap": true, 00:24:00.032 "flush": true, 00:24:00.032 "reset": true, 00:24:00.032 "nvme_admin": false, 00:24:00.032 "nvme_io": false, 00:24:00.032 "nvme_io_md": false, 00:24:00.032 "write_zeroes": true, 00:24:00.032 "zcopy": false, 00:24:00.032 "get_zone_info": false, 00:24:00.033 "zone_management": false, 00:24:00.033 "zone_append": false, 00:24:00.033 "compare": false, 00:24:00.033 "compare_and_write": false, 00:24:00.033 "abort": false, 00:24:00.033 "seek_hole": false, 00:24:00.033 "seek_data": false, 00:24:00.033 "copy": false, 00:24:00.033 "nvme_iov_md": false 00:24:00.033 }, 00:24:00.033 "memory_domains": [ 00:24:00.033 { 00:24:00.033 "dma_device_id": "system", 00:24:00.033 "dma_device_type": 1 00:24:00.033 }, 00:24:00.033 { 00:24:00.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:00.033 "dma_device_type": 2 00:24:00.033 }, 00:24:00.033 { 00:24:00.033 "dma_device_id": "system", 00:24:00.033 "dma_device_type": 1 00:24:00.033 }, 00:24:00.033 { 00:24:00.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:00.033 "dma_device_type": 2 00:24:00.033 }, 00:24:00.033 { 00:24:00.033 "dma_device_id": "system", 00:24:00.033 "dma_device_type": 1 00:24:00.033 }, 00:24:00.033 { 00:24:00.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:00.033 "dma_device_type": 2 00:24:00.033 }, 00:24:00.033 { 00:24:00.033 "dma_device_id": "system", 00:24:00.033 "dma_device_type": 1 00:24:00.033 }, 00:24:00.033 { 00:24:00.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:00.033 "dma_device_type": 2 00:24:00.033 } 00:24:00.033 ], 00:24:00.033 "driver_specific": { 00:24:00.033 "raid": { 00:24:00.033 "uuid": "8ee24ae7-6c77-41f7-b2fd-215a07f4e301", 00:24:00.033 "strip_size_kb": 64, 00:24:00.033 "state": "online", 00:24:00.033 "raid_level": "concat", 00:24:00.033 "superblock": 
true, 00:24:00.033 "num_base_bdevs": 4, 00:24:00.033 "num_base_bdevs_discovered": 4, 00:24:00.033 "num_base_bdevs_operational": 4, 00:24:00.033 "base_bdevs_list": [ 00:24:00.033 { 00:24:00.033 "name": "pt1", 00:24:00.033 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:00.033 "is_configured": true, 00:24:00.033 "data_offset": 2048, 00:24:00.033 "data_size": 63488 00:24:00.033 }, 00:24:00.033 { 00:24:00.033 "name": "pt2", 00:24:00.033 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:00.033 "is_configured": true, 00:24:00.033 "data_offset": 2048, 00:24:00.033 "data_size": 63488 00:24:00.033 }, 00:24:00.033 { 00:24:00.033 "name": "pt3", 00:24:00.033 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:00.033 "is_configured": true, 00:24:00.033 "data_offset": 2048, 00:24:00.033 "data_size": 63488 00:24:00.033 }, 00:24:00.033 { 00:24:00.033 "name": "pt4", 00:24:00.033 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:00.033 "is_configured": true, 00:24:00.033 "data_offset": 2048, 00:24:00.033 "data_size": 63488 00:24:00.033 } 00:24:00.033 ] 00:24:00.033 } 00:24:00.033 } 00:24:00.033 }' 00:24:00.033 02:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:00.033 02:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:24:00.033 pt2 00:24:00.033 pt3 00:24:00.033 pt4' 00:24:00.033 02:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:00.033 02:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:00.033 02:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:00.293 02:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:00.293 "name": "pt1", 00:24:00.293 "aliases": [ 00:24:00.293 "00000000-0000-0000-0000-000000000001" 00:24:00.293 ], 00:24:00.293 "product_name": "passthru", 00:24:00.293 "block_size": 512, 00:24:00.293 "num_blocks": 65536, 00:24:00.293 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:00.293 "assigned_rate_limits": { 00:24:00.293 "rw_ios_per_sec": 0, 00:24:00.293 "rw_mbytes_per_sec": 0, 00:24:00.293 "r_mbytes_per_sec": 0, 00:24:00.293 "w_mbytes_per_sec": 0 00:24:00.293 }, 00:24:00.293 "claimed": true, 00:24:00.293 "claim_type": "exclusive_write", 00:24:00.293 "zoned": false, 00:24:00.293 "supported_io_types": { 00:24:00.293 "read": true, 00:24:00.293 "write": true, 00:24:00.293 "unmap": true, 00:24:00.293 "flush": true, 00:24:00.293 "reset": true, 00:24:00.293 "nvme_admin": false, 00:24:00.293 "nvme_io": false, 00:24:00.293 "nvme_io_md": false, 00:24:00.293 "write_zeroes": true, 00:24:00.293 "zcopy": true, 00:24:00.293 "get_zone_info": false, 00:24:00.293 "zone_management": false, 00:24:00.293 "zone_append": false, 00:24:00.293 "compare": false, 00:24:00.293 "compare_and_write": false, 00:24:00.293 "abort": true, 00:24:00.293 "seek_hole": false, 00:24:00.293 "seek_data": false, 00:24:00.293 "copy": true, 00:24:00.293 "nvme_iov_md": false 00:24:00.293 }, 00:24:00.293 "memory_domains": [ 00:24:00.293 { 00:24:00.293 "dma_device_id": "system", 00:24:00.293 "dma_device_type": 1 00:24:00.293 }, 00:24:00.293 { 00:24:00.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:00.293 "dma_device_type": 2 00:24:00.293 } 00:24:00.293 ], 00:24:00.293 "driver_specific": { 00:24:00.293 "passthru": 
{ 00:24:00.293 "name": "pt1", 00:24:00.293 "base_bdev_name": "malloc1" 00:24:00.293 } 00:24:00.293 } 00:24:00.293 }' 00:24:00.293 02:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:00.552 02:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:00.552 02:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:00.552 02:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:00.552 02:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:00.552 02:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:00.552 02:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:00.552 02:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:00.552 02:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:00.552 02:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:00.552 02:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:00.810 02:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:00.810 02:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:00.810 02:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:00.810 02:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:01.069 02:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:01.069 "name": "pt2", 00:24:01.069 "aliases": [ 00:24:01.069 "00000000-0000-0000-0000-000000000002" 00:24:01.069 ], 00:24:01.069 "product_name": "passthru", 00:24:01.069 "block_size": 512, 00:24:01.069 "num_blocks": 65536, 00:24:01.069 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:01.069 "assigned_rate_limits": { 00:24:01.069 "rw_ios_per_sec": 0, 00:24:01.069 "rw_mbytes_per_sec": 0, 00:24:01.069 "r_mbytes_per_sec": 0, 00:24:01.069 "w_mbytes_per_sec": 0 00:24:01.069 }, 00:24:01.069 "claimed": true, 00:24:01.069 "claim_type": "exclusive_write", 00:24:01.069 "zoned": false, 00:24:01.069 "supported_io_types": { 00:24:01.069 "read": true, 00:24:01.069 "write": true, 00:24:01.069 "unmap": true, 00:24:01.069 "flush": true, 00:24:01.069 "reset": true, 00:24:01.069 "nvme_admin": false, 00:24:01.069 "nvme_io": false, 00:24:01.069 "nvme_io_md": false, 00:24:01.069 "write_zeroes": true, 00:24:01.069 "zcopy": true, 00:24:01.069 "get_zone_info": false, 00:24:01.069 "zone_management": false, 00:24:01.069 "zone_append": false, 00:24:01.069 "compare": false, 00:24:01.069 "compare_and_write": false, 00:24:01.069 "abort": true, 00:24:01.069 "seek_hole": false, 00:24:01.069 "seek_data": false, 00:24:01.069 "copy": true, 00:24:01.069 "nvme_iov_md": false 00:24:01.069 }, 00:24:01.069 "memory_domains": [ 00:24:01.069 { 00:24:01.069 "dma_device_id": "system", 00:24:01.069 "dma_device_type": 1 00:24:01.069 }, 00:24:01.069 { 00:24:01.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:01.069 "dma_device_type": 2 00:24:01.069 } 00:24:01.069 ], 00:24:01.069 "driver_specific": { 00:24:01.069 "passthru": { 00:24:01.069 "name": "pt2", 00:24:01.069 "base_bdev_name": "malloc2" 00:24:01.069 } 00:24:01.069 } 00:24:01.069 }' 00:24:01.069 
02:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:01.069 02:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:01.069 02:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:01.069 02:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:01.069 02:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:01.069 02:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:01.069 02:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:01.069 02:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:01.328 02:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:01.328 02:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:01.328 02:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:01.328 02:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:01.328 02:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:01.328 02:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:24:01.328 02:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:01.601 02:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:01.601 "name": "pt3", 00:24:01.601 "aliases": [ 00:24:01.601 "00000000-0000-0000-0000-000000000003" 00:24:01.601 ], 00:24:01.601 "product_name": "passthru", 00:24:01.601 "block_size": 512, 00:24:01.601 "num_blocks": 65536, 00:24:01.601 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:01.601 "assigned_rate_limits": { 00:24:01.601 "rw_ios_per_sec": 0, 00:24:01.601 "rw_mbytes_per_sec": 0, 00:24:01.601 "r_mbytes_per_sec": 0, 00:24:01.601 "w_mbytes_per_sec": 0 00:24:01.601 }, 00:24:01.601 "claimed": true, 00:24:01.601 "claim_type": "exclusive_write", 00:24:01.601 "zoned": false, 00:24:01.601 "supported_io_types": { 00:24:01.601 "read": true, 00:24:01.601 "write": true, 00:24:01.601 "unmap": true, 00:24:01.601 "flush": true, 00:24:01.601 "reset": true, 00:24:01.601 "nvme_admin": false, 00:24:01.601 "nvme_io": false, 00:24:01.601 "nvme_io_md": false, 00:24:01.601 "write_zeroes": true, 00:24:01.601 "zcopy": true, 00:24:01.601 "get_zone_info": false, 00:24:01.601 "zone_management": false, 00:24:01.601 "zone_append": false, 00:24:01.601 "compare": false, 00:24:01.601 "compare_and_write": false, 00:24:01.601 "abort": true, 00:24:01.601 "seek_hole": false, 00:24:01.601 "seek_data": false, 00:24:01.601 "copy": true, 00:24:01.601 "nvme_iov_md": false 00:24:01.601 }, 00:24:01.601 "memory_domains": [ 00:24:01.601 { 00:24:01.601 "dma_device_id": "system", 00:24:01.601 "dma_device_type": 1 00:24:01.601 }, 00:24:01.601 { 00:24:01.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:01.601 "dma_device_type": 2 00:24:01.601 } 00:24:01.601 ], 00:24:01.601 "driver_specific": { 00:24:01.601 "passthru": { 00:24:01.601 "name": "pt3", 00:24:01.601 "base_bdev_name": "malloc3" 00:24:01.601 } 00:24:01.601 } 00:24:01.601 }' 00:24:01.601 02:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:01.601 02:29:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:01.601 02:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:01.601 02:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:01.601 02:29:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:01.601 02:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:01.601 02:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:01.872 02:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:01.872 02:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:01.872 02:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:01.872 02:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:01.872 02:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:01.872 02:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:01.872 02:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:24:01.872 02:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:02.130 02:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:02.131 "name": "pt4", 00:24:02.131 "aliases": [ 00:24:02.131 "00000000-0000-0000-0000-000000000004" 00:24:02.131 ], 00:24:02.131 "product_name": "passthru", 00:24:02.131 "block_size": 512, 00:24:02.131 "num_blocks": 65536, 00:24:02.131 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:02.131 "assigned_rate_limits": { 00:24:02.131 "rw_ios_per_sec": 0, 00:24:02.131 "rw_mbytes_per_sec": 0, 00:24:02.131 "r_mbytes_per_sec": 0, 00:24:02.131 "w_mbytes_per_sec": 0 00:24:02.131 }, 00:24:02.131 "claimed": true, 00:24:02.131 "claim_type": "exclusive_write", 00:24:02.131 "zoned": false, 00:24:02.131 "supported_io_types": { 00:24:02.131 "read": true, 00:24:02.131 "write": true, 00:24:02.131 "unmap": true, 00:24:02.131 "flush": true, 00:24:02.131 "reset": true, 00:24:02.131 "nvme_admin": false, 00:24:02.131 "nvme_io": false, 00:24:02.131 "nvme_io_md": false, 00:24:02.131 "write_zeroes": true, 00:24:02.131 "zcopy": true, 00:24:02.131 "get_zone_info": false, 00:24:02.131 "zone_management": false, 00:24:02.131 "zone_append": false, 00:24:02.131 "compare": false, 00:24:02.131 "compare_and_write": false, 00:24:02.131 "abort": true, 00:24:02.131 "seek_hole": false, 00:24:02.131 "seek_data": false, 00:24:02.131 "copy": true, 00:24:02.131 "nvme_iov_md": false 00:24:02.131 }, 00:24:02.131 "memory_domains": [ 00:24:02.131 { 00:24:02.131 "dma_device_id": "system", 00:24:02.131 "dma_device_type": 1 00:24:02.131 }, 00:24:02.131 { 00:24:02.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:02.131 "dma_device_type": 2 00:24:02.131 } 00:24:02.131 ], 00:24:02.131 "driver_specific": { 00:24:02.131 "passthru": { 00:24:02.131 "name": "pt4", 00:24:02.131 "base_bdev_name": "malloc4" 00:24:02.131 } 00:24:02.131 } 00:24:02.131 }' 00:24:02.131 02:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:02.131 02:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:02.389 02:29:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:02.389 02:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:02.389 02:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:02.389 02:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:02.389 02:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:02.389 02:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:02.389 02:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:02.389 02:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:02.648 02:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:02.648 02:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:02.648 02:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:02.648 02:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:24:02.908 [2024-07-11 02:29:53.075598] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:02.908 02:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=8ee24ae7-6c77-41f7-b2fd-215a07f4e301 00:24:02.908 02:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 8ee24ae7-6c77-41f7-b2fd-215a07f4e301 ']' 00:24:02.908 02:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:02.908 [2024-07-11 02:29:53.327962] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:02.908 [2024-07-11 02:29:53.327985] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:02.908 [2024-07-11 02:29:53.328035] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:02.908 [2024-07-11 02:29:53.328094] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:02.908 [2024-07-11 02:29:53.328106] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14b9cc0 name raid_bdev1, state offline 00:24:03.167 02:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.167 02:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:24:03.424 02:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:24:03.424 02:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:24:03.424 02:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:03.424 02:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:03.424 02:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:03.424 02:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:03.682 02:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:03.682 02:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:03.942 02:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:03.942 02:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:04.200 02:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:24:04.200 02:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:04.459 02:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:24:04.459 02:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:04.459 02:29:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:24:04.459 02:29:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:04.459 02:29:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:04.459 02:29:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:04.459 02:29:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:04.459 02:29:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:04.459 02:29:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:04.459 02:29:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:04.459 02:29:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:04.459 02:29:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:24:04.459 02:29:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:04.717 [2024-07-11 02:29:55.048458] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:04.717 [2024-07-11 02:29:55.049770] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:04.717 [2024-07-11 02:29:55.049813] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 
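With superblocks from raid_bdev1 still present on malloc1-malloc4, the step traced here (bdev_raid.sh@456) asserts that re-creating the array must fail: the request/response pair that follows shows the expected -17 (File exists) error. NOT is the autotest_common.sh helper whose frames appear in the trace; it succeeds only when the wrapped command fails. A sketch of the call, with $rpc and $sock as above:

NOT $rpc -s $sock bdev_raid_create -z 64 -r concat \
    -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1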
00:24:04.717 [2024-07-11 02:29:55.049848] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:24:04.717 [2024-07-11 02:29:55.049892] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:24:04.717 [2024-07-11 02:29:55.049932] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:04.717 [2024-07-11 02:29:55.049954] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:24:04.717 [2024-07-11 02:29:55.049976] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:24:04.717 [2024-07-11 02:29:55.049994] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:04.717 [2024-07-11 02:29:55.050004] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x130e570 name raid_bdev1, state configuring 00:24:04.717 request: 00:24:04.717 { 00:24:04.717 "name": "raid_bdev1", 00:24:04.717 "raid_level": "concat", 00:24:04.717 "base_bdevs": [ 00:24:04.717 "malloc1", 00:24:04.717 "malloc2", 00:24:04.717 "malloc3", 00:24:04.717 "malloc4" 00:24:04.717 ], 00:24:04.717 "strip_size_kb": 64, 00:24:04.717 "superblock": false, 00:24:04.717 "method": "bdev_raid_create", 00:24:04.717 "req_id": 1 00:24:04.717 } 00:24:04.717 Got JSON-RPC error response 00:24:04.717 response: 00:24:04.717 { 00:24:04.717 "code": -17, 00:24:04.717 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:04.717 } 00:24:04.717 02:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:24:04.717 02:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:04.717 02:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:04.717 02:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:04.717 02:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.717 02:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:24:04.975 02:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:24:04.975 02:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:24:04.975 02:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:05.233 [2024-07-11 02:29:55.541692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:05.233 [2024-07-11 02:29:55.541740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:05.233 [2024-07-11 02:29:55.541774] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14b65b0 00:24:05.233 [2024-07-11 02:29:55.541795] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:05.233 [2024-07-11 02:29:55.543400] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:05.233 [2024-07-11 02:29:55.543430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:05.233 [2024-07-11 
02:29:55.543500] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:05.233 [2024-07-11 02:29:55.543525] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:05.233 pt1 00:24:05.233 02:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:24:05.233 02:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:05.233 02:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:05.233 02:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:05.233 02:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:05.233 02:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:05.233 02:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:05.233 02:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:05.233 02:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:05.233 02:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:05.233 02:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.233 02:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.491 02:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:05.491 "name": "raid_bdev1", 00:24:05.491 "uuid": "8ee24ae7-6c77-41f7-b2fd-215a07f4e301", 00:24:05.491 "strip_size_kb": 64, 00:24:05.491 "state": "configuring", 00:24:05.491 "raid_level": "concat", 00:24:05.491 "superblock": true, 00:24:05.491 "num_base_bdevs": 4, 00:24:05.491 "num_base_bdevs_discovered": 1, 00:24:05.491 "num_base_bdevs_operational": 4, 00:24:05.491 "base_bdevs_list": [ 00:24:05.491 { 00:24:05.491 "name": "pt1", 00:24:05.491 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:05.491 "is_configured": true, 00:24:05.491 "data_offset": 2048, 00:24:05.491 "data_size": 63488 00:24:05.491 }, 00:24:05.491 { 00:24:05.491 "name": null, 00:24:05.491 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:05.491 "is_configured": false, 00:24:05.491 "data_offset": 2048, 00:24:05.491 "data_size": 63488 00:24:05.491 }, 00:24:05.491 { 00:24:05.491 "name": null, 00:24:05.491 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:05.491 "is_configured": false, 00:24:05.491 "data_offset": 2048, 00:24:05.491 "data_size": 63488 00:24:05.491 }, 00:24:05.491 { 00:24:05.491 "name": null, 00:24:05.491 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:05.491 "is_configured": false, 00:24:05.491 "data_offset": 2048, 00:24:05.491 "data_size": 63488 00:24:05.491 } 00:24:05.491 ] 00:24:05.491 }' 00:24:05.491 02:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:05.491 02:29:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:06.058 02:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:24:06.058 02:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:06.316 [2024-07-11 02:29:56.628588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:06.316 [2024-07-11 02:29:56.628638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:06.316 [2024-07-11 02:29:56.628662] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14bf120 00:24:06.316 [2024-07-11 02:29:56.628674] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:06.316 [2024-07-11 02:29:56.629011] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:06.316 [2024-07-11 02:29:56.629031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:06.316 [2024-07-11 02:29:56.629096] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:06.316 [2024-07-11 02:29:56.629114] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:06.316 pt2 00:24:06.316 02:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:06.575 [2024-07-11 02:29:56.877262] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:24:06.575 02:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:24:06.575 02:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:06.575 02:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:06.575 02:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:06.575 02:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:06.575 02:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:06.575 02:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:06.575 02:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:06.575 02:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:06.575 02:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:06.575 02:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.575 02:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.834 02:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:06.834 "name": "raid_bdev1", 00:24:06.834 "uuid": "8ee24ae7-6c77-41f7-b2fd-215a07f4e301", 00:24:06.834 "strip_size_kb": 64, 00:24:06.834 "state": "configuring", 00:24:06.834 "raid_level": "concat", 00:24:06.834 "superblock": true, 00:24:06.834 "num_base_bdevs": 4, 00:24:06.834 "num_base_bdevs_discovered": 1, 00:24:06.834 "num_base_bdevs_operational": 4, 00:24:06.834 "base_bdevs_list": [ 00:24:06.834 { 00:24:06.834 "name": "pt1", 00:24:06.834 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:06.834 "is_configured": true, 00:24:06.834 "data_offset": 2048, 00:24:06.834 "data_size": 63488 00:24:06.834 }, 00:24:06.834 
{ 00:24:06.834 "name": null, 00:24:06.834 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:06.834 "is_configured": false, 00:24:06.834 "data_offset": 2048, 00:24:06.834 "data_size": 63488 00:24:06.834 }, 00:24:06.834 { 00:24:06.834 "name": null, 00:24:06.834 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:06.834 "is_configured": false, 00:24:06.834 "data_offset": 2048, 00:24:06.834 "data_size": 63488 00:24:06.834 }, 00:24:06.834 { 00:24:06.834 "name": null, 00:24:06.834 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:06.834 "is_configured": false, 00:24:06.834 "data_offset": 2048, 00:24:06.834 "data_size": 63488 00:24:06.834 } 00:24:06.834 ] 00:24:06.834 }' 00:24:06.834 02:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:06.834 02:29:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.400 02:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:24:07.400 02:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:07.400 02:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:07.658 [2024-07-11 02:29:57.948084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:07.658 [2024-07-11 02:29:57.948130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:07.658 [2024-07-11 02:29:57.948153] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14bff10 00:24:07.658 [2024-07-11 02:29:57.948167] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:07.658 [2024-07-11 02:29:57.948492] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:07.658 [2024-07-11 02:29:57.948511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:07.658 [2024-07-11 02:29:57.948574] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:07.658 [2024-07-11 02:29:57.948592] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:07.658 pt2 00:24:07.658 02:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:24:07.658 02:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:07.658 02:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:07.916 [2024-07-11 02:29:58.188715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:07.916 [2024-07-11 02:29:58.188744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:07.916 [2024-07-11 02:29:58.188765] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x130e210 00:24:07.916 [2024-07-11 02:29:58.188778] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:07.916 [2024-07-11 02:29:58.189038] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:07.916 [2024-07-11 02:29:58.189057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:07.916 [2024-07-11 02:29:58.189102] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:24:07.916 [2024-07-11 02:29:58.189119] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:07.916 pt3 00:24:07.916 02:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:24:07.916 02:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:07.916 02:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:08.174 [2024-07-11 02:29:58.377223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:08.174 [2024-07-11 02:29:58.377258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:08.174 [2024-07-11 02:29:58.377276] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x130dcb0 00:24:08.174 [2024-07-11 02:29:58.377289] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:08.174 [2024-07-11 02:29:58.377573] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:08.174 [2024-07-11 02:29:58.377591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:08.174 [2024-07-11 02:29:58.377640] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:24:08.174 [2024-07-11 02:29:58.377657] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:08.174 [2024-07-11 02:29:58.377784] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x14bd9e0 00:24:08.174 [2024-07-11 02:29:58.377795] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:08.174 [2024-07-11 02:29:58.377955] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14ae7c0 00:24:08.174 [2024-07-11 02:29:58.378078] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14bd9e0 00:24:08.174 [2024-07-11 02:29:58.378088] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x14bd9e0 00:24:08.174 [2024-07-11 02:29:58.378178] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:08.174 pt4 00:24:08.174 02:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:24:08.174 02:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:08.174 02:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:08.174 02:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:08.174 02:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:08.174 02:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:08.174 02:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:08.174 02:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:08.174 02:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:08.174 02:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:08.174 02:29:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:08.174 02:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:08.174 02:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.174 02:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.174 02:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:08.174 "name": "raid_bdev1", 00:24:08.174 "uuid": "8ee24ae7-6c77-41f7-b2fd-215a07f4e301", 00:24:08.174 "strip_size_kb": 64, 00:24:08.174 "state": "online", 00:24:08.174 "raid_level": "concat", 00:24:08.174 "superblock": true, 00:24:08.174 "num_base_bdevs": 4, 00:24:08.174 "num_base_bdevs_discovered": 4, 00:24:08.174 "num_base_bdevs_operational": 4, 00:24:08.174 "base_bdevs_list": [ 00:24:08.174 { 00:24:08.174 "name": "pt1", 00:24:08.174 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:08.174 "is_configured": true, 00:24:08.174 "data_offset": 2048, 00:24:08.174 "data_size": 63488 00:24:08.174 }, 00:24:08.174 { 00:24:08.174 "name": "pt2", 00:24:08.174 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:08.174 "is_configured": true, 00:24:08.174 "data_offset": 2048, 00:24:08.174 "data_size": 63488 00:24:08.174 }, 00:24:08.174 { 00:24:08.174 "name": "pt3", 00:24:08.174 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:08.174 "is_configured": true, 00:24:08.175 "data_offset": 2048, 00:24:08.175 "data_size": 63488 00:24:08.175 }, 00:24:08.175 { 00:24:08.175 "name": "pt4", 00:24:08.175 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:08.175 "is_configured": true, 00:24:08.175 "data_offset": 2048, 00:24:08.175 "data_size": 63488 00:24:08.175 } 00:24:08.175 ] 00:24:08.175 }' 00:24:08.175 02:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:08.175 02:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.108 02:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:24:09.108 02:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:24:09.108 02:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:09.108 02:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:09.108 02:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:09.108 02:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:09.108 02:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:09.108 02:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:09.108 [2024-07-11 02:29:59.396240] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:09.109 02:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:09.109 "name": "raid_bdev1", 00:24:09.109 "aliases": [ 00:24:09.109 "8ee24ae7-6c77-41f7-b2fd-215a07f4e301" 00:24:09.109 ], 00:24:09.109 "product_name": "Raid Volume", 00:24:09.109 "block_size": 512, 00:24:09.109 "num_blocks": 253952, 00:24:09.109 "uuid": 
"8ee24ae7-6c77-41f7-b2fd-215a07f4e301", 00:24:09.109 "assigned_rate_limits": { 00:24:09.109 "rw_ios_per_sec": 0, 00:24:09.109 "rw_mbytes_per_sec": 0, 00:24:09.109 "r_mbytes_per_sec": 0, 00:24:09.109 "w_mbytes_per_sec": 0 00:24:09.109 }, 00:24:09.109 "claimed": false, 00:24:09.109 "zoned": false, 00:24:09.109 "supported_io_types": { 00:24:09.109 "read": true, 00:24:09.109 "write": true, 00:24:09.109 "unmap": true, 00:24:09.109 "flush": true, 00:24:09.109 "reset": true, 00:24:09.109 "nvme_admin": false, 00:24:09.109 "nvme_io": false, 00:24:09.109 "nvme_io_md": false, 00:24:09.109 "write_zeroes": true, 00:24:09.109 "zcopy": false, 00:24:09.109 "get_zone_info": false, 00:24:09.109 "zone_management": false, 00:24:09.109 "zone_append": false, 00:24:09.109 "compare": false, 00:24:09.109 "compare_and_write": false, 00:24:09.109 "abort": false, 00:24:09.109 "seek_hole": false, 00:24:09.109 "seek_data": false, 00:24:09.109 "copy": false, 00:24:09.109 "nvme_iov_md": false 00:24:09.109 }, 00:24:09.109 "memory_domains": [ 00:24:09.109 { 00:24:09.109 "dma_device_id": "system", 00:24:09.109 "dma_device_type": 1 00:24:09.109 }, 00:24:09.109 { 00:24:09.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:09.109 "dma_device_type": 2 00:24:09.109 }, 00:24:09.109 { 00:24:09.109 "dma_device_id": "system", 00:24:09.109 "dma_device_type": 1 00:24:09.109 }, 00:24:09.109 { 00:24:09.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:09.109 "dma_device_type": 2 00:24:09.109 }, 00:24:09.109 { 00:24:09.109 "dma_device_id": "system", 00:24:09.109 "dma_device_type": 1 00:24:09.109 }, 00:24:09.109 { 00:24:09.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:09.109 "dma_device_type": 2 00:24:09.109 }, 00:24:09.109 { 00:24:09.109 "dma_device_id": "system", 00:24:09.109 "dma_device_type": 1 00:24:09.109 }, 00:24:09.109 { 00:24:09.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:09.109 "dma_device_type": 2 00:24:09.109 } 00:24:09.109 ], 00:24:09.109 "driver_specific": { 00:24:09.109 "raid": { 00:24:09.109 "uuid": "8ee24ae7-6c77-41f7-b2fd-215a07f4e301", 00:24:09.109 "strip_size_kb": 64, 00:24:09.109 "state": "online", 00:24:09.109 "raid_level": "concat", 00:24:09.109 "superblock": true, 00:24:09.109 "num_base_bdevs": 4, 00:24:09.109 "num_base_bdevs_discovered": 4, 00:24:09.109 "num_base_bdevs_operational": 4, 00:24:09.109 "base_bdevs_list": [ 00:24:09.109 { 00:24:09.109 "name": "pt1", 00:24:09.109 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:09.109 "is_configured": true, 00:24:09.109 "data_offset": 2048, 00:24:09.109 "data_size": 63488 00:24:09.109 }, 00:24:09.109 { 00:24:09.109 "name": "pt2", 00:24:09.109 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:09.109 "is_configured": true, 00:24:09.109 "data_offset": 2048, 00:24:09.109 "data_size": 63488 00:24:09.109 }, 00:24:09.109 { 00:24:09.109 "name": "pt3", 00:24:09.109 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:09.109 "is_configured": true, 00:24:09.109 "data_offset": 2048, 00:24:09.109 "data_size": 63488 00:24:09.109 }, 00:24:09.109 { 00:24:09.109 "name": "pt4", 00:24:09.109 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:09.109 "is_configured": true, 00:24:09.109 "data_offset": 2048, 00:24:09.109 "data_size": 63488 00:24:09.109 } 00:24:09.109 ] 00:24:09.109 } 00:24:09.109 } 00:24:09.109 }' 00:24:09.109 02:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:09.109 02:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # 
base_bdev_names='pt1 00:24:09.109 pt2 00:24:09.109 pt3 00:24:09.109 pt4' 00:24:09.109 02:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:09.109 02:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:09.109 02:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:09.368 02:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:09.368 "name": "pt1", 00:24:09.368 "aliases": [ 00:24:09.368 "00000000-0000-0000-0000-000000000001" 00:24:09.368 ], 00:24:09.368 "product_name": "passthru", 00:24:09.368 "block_size": 512, 00:24:09.368 "num_blocks": 65536, 00:24:09.368 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:09.368 "assigned_rate_limits": { 00:24:09.368 "rw_ios_per_sec": 0, 00:24:09.368 "rw_mbytes_per_sec": 0, 00:24:09.368 "r_mbytes_per_sec": 0, 00:24:09.368 "w_mbytes_per_sec": 0 00:24:09.368 }, 00:24:09.368 "claimed": true, 00:24:09.368 "claim_type": "exclusive_write", 00:24:09.368 "zoned": false, 00:24:09.368 "supported_io_types": { 00:24:09.368 "read": true, 00:24:09.368 "write": true, 00:24:09.368 "unmap": true, 00:24:09.368 "flush": true, 00:24:09.368 "reset": true, 00:24:09.368 "nvme_admin": false, 00:24:09.368 "nvme_io": false, 00:24:09.368 "nvme_io_md": false, 00:24:09.368 "write_zeroes": true, 00:24:09.368 "zcopy": true, 00:24:09.368 "get_zone_info": false, 00:24:09.368 "zone_management": false, 00:24:09.368 "zone_append": false, 00:24:09.368 "compare": false, 00:24:09.368 "compare_and_write": false, 00:24:09.368 "abort": true, 00:24:09.368 "seek_hole": false, 00:24:09.368 "seek_data": false, 00:24:09.368 "copy": true, 00:24:09.368 "nvme_iov_md": false 00:24:09.368 }, 00:24:09.368 "memory_domains": [ 00:24:09.368 { 00:24:09.368 "dma_device_id": "system", 00:24:09.368 "dma_device_type": 1 00:24:09.368 }, 00:24:09.368 { 00:24:09.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:09.368 "dma_device_type": 2 00:24:09.368 } 00:24:09.368 ], 00:24:09.368 "driver_specific": { 00:24:09.368 "passthru": { 00:24:09.368 "name": "pt1", 00:24:09.368 "base_bdev_name": "malloc1" 00:24:09.368 } 00:24:09.368 } 00:24:09.368 }' 00:24:09.368 02:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:09.368 02:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:09.627 02:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:09.627 02:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:09.627 02:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:09.627 02:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:09.627 02:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:09.627 02:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:09.627 02:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:09.627 02:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:09.627 02:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:09.885 02:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:09.885 02:30:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:09.885 02:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:09.885 02:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:10.452 02:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:10.452 "name": "pt2", 00:24:10.452 "aliases": [ 00:24:10.452 "00000000-0000-0000-0000-000000000002" 00:24:10.452 ], 00:24:10.452 "product_name": "passthru", 00:24:10.452 "block_size": 512, 00:24:10.452 "num_blocks": 65536, 00:24:10.452 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:10.452 "assigned_rate_limits": { 00:24:10.452 "rw_ios_per_sec": 0, 00:24:10.452 "rw_mbytes_per_sec": 0, 00:24:10.452 "r_mbytes_per_sec": 0, 00:24:10.452 "w_mbytes_per_sec": 0 00:24:10.452 }, 00:24:10.452 "claimed": true, 00:24:10.452 "claim_type": "exclusive_write", 00:24:10.452 "zoned": false, 00:24:10.452 "supported_io_types": { 00:24:10.452 "read": true, 00:24:10.452 "write": true, 00:24:10.452 "unmap": true, 00:24:10.452 "flush": true, 00:24:10.452 "reset": true, 00:24:10.452 "nvme_admin": false, 00:24:10.452 "nvme_io": false, 00:24:10.452 "nvme_io_md": false, 00:24:10.452 "write_zeroes": true, 00:24:10.452 "zcopy": true, 00:24:10.452 "get_zone_info": false, 00:24:10.452 "zone_management": false, 00:24:10.452 "zone_append": false, 00:24:10.452 "compare": false, 00:24:10.452 "compare_and_write": false, 00:24:10.452 "abort": true, 00:24:10.452 "seek_hole": false, 00:24:10.452 "seek_data": false, 00:24:10.452 "copy": true, 00:24:10.452 "nvme_iov_md": false 00:24:10.452 }, 00:24:10.452 "memory_domains": [ 00:24:10.452 { 00:24:10.452 "dma_device_id": "system", 00:24:10.452 "dma_device_type": 1 00:24:10.452 }, 00:24:10.452 { 00:24:10.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:10.452 "dma_device_type": 2 00:24:10.452 } 00:24:10.452 ], 00:24:10.452 "driver_specific": { 00:24:10.452 "passthru": { 00:24:10.452 "name": "pt2", 00:24:10.452 "base_bdev_name": "malloc2" 00:24:10.452 } 00:24:10.453 } 00:24:10.453 }' 00:24:10.453 02:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:10.453 02:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:10.453 02:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:10.453 02:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:10.453 02:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:10.453 02:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:10.453 02:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:10.453 02:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:10.453 02:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:10.453 02:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:10.711 02:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:10.711 02:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:10.711 02:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:10.711 02:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 
-- # jq '.[]' 00:24:10.711 02:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:24:10.969 02:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:10.969 "name": "pt3", 00:24:10.969 "aliases": [ 00:24:10.969 "00000000-0000-0000-0000-000000000003" 00:24:10.969 ], 00:24:10.969 "product_name": "passthru", 00:24:10.969 "block_size": 512, 00:24:10.969 "num_blocks": 65536, 00:24:10.969 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:10.969 "assigned_rate_limits": { 00:24:10.969 "rw_ios_per_sec": 0, 00:24:10.969 "rw_mbytes_per_sec": 0, 00:24:10.969 "r_mbytes_per_sec": 0, 00:24:10.969 "w_mbytes_per_sec": 0 00:24:10.969 }, 00:24:10.969 "claimed": true, 00:24:10.969 "claim_type": "exclusive_write", 00:24:10.969 "zoned": false, 00:24:10.969 "supported_io_types": { 00:24:10.969 "read": true, 00:24:10.969 "write": true, 00:24:10.969 "unmap": true, 00:24:10.969 "flush": true, 00:24:10.969 "reset": true, 00:24:10.969 "nvme_admin": false, 00:24:10.969 "nvme_io": false, 00:24:10.969 "nvme_io_md": false, 00:24:10.969 "write_zeroes": true, 00:24:10.969 "zcopy": true, 00:24:10.969 "get_zone_info": false, 00:24:10.969 "zone_management": false, 00:24:10.969 "zone_append": false, 00:24:10.969 "compare": false, 00:24:10.969 "compare_and_write": false, 00:24:10.969 "abort": true, 00:24:10.969 "seek_hole": false, 00:24:10.969 "seek_data": false, 00:24:10.969 "copy": true, 00:24:10.969 "nvme_iov_md": false 00:24:10.969 }, 00:24:10.969 "memory_domains": [ 00:24:10.969 { 00:24:10.969 "dma_device_id": "system", 00:24:10.969 "dma_device_type": 1 00:24:10.969 }, 00:24:10.969 { 00:24:10.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:10.969 "dma_device_type": 2 00:24:10.969 } 00:24:10.969 ], 00:24:10.969 "driver_specific": { 00:24:10.969 "passthru": { 00:24:10.969 "name": "pt3", 00:24:10.969 "base_bdev_name": "malloc3" 00:24:10.969 } 00:24:10.969 } 00:24:10.969 }' 00:24:10.969 02:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:10.969 02:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:10.970 02:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:10.970 02:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:10.970 02:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:10.970 02:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:10.970 02:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:11.228 02:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:11.229 02:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:11.229 02:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:11.229 02:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:11.229 02:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:11.229 02:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:11.229 02:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:24:11.229 
02:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:11.487 02:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:11.487 "name": "pt4", 00:24:11.487 "aliases": [ 00:24:11.487 "00000000-0000-0000-0000-000000000004" 00:24:11.487 ], 00:24:11.487 "product_name": "passthru", 00:24:11.487 "block_size": 512, 00:24:11.487 "num_blocks": 65536, 00:24:11.487 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:11.487 "assigned_rate_limits": { 00:24:11.487 "rw_ios_per_sec": 0, 00:24:11.487 "rw_mbytes_per_sec": 0, 00:24:11.487 "r_mbytes_per_sec": 0, 00:24:11.487 "w_mbytes_per_sec": 0 00:24:11.487 }, 00:24:11.487 "claimed": true, 00:24:11.487 "claim_type": "exclusive_write", 00:24:11.487 "zoned": false, 00:24:11.487 "supported_io_types": { 00:24:11.487 "read": true, 00:24:11.487 "write": true, 00:24:11.487 "unmap": true, 00:24:11.487 "flush": true, 00:24:11.487 "reset": true, 00:24:11.487 "nvme_admin": false, 00:24:11.487 "nvme_io": false, 00:24:11.487 "nvme_io_md": false, 00:24:11.487 "write_zeroes": true, 00:24:11.487 "zcopy": true, 00:24:11.487 "get_zone_info": false, 00:24:11.487 "zone_management": false, 00:24:11.487 "zone_append": false, 00:24:11.487 "compare": false, 00:24:11.487 "compare_and_write": false, 00:24:11.487 "abort": true, 00:24:11.487 "seek_hole": false, 00:24:11.487 "seek_data": false, 00:24:11.487 "copy": true, 00:24:11.487 "nvme_iov_md": false 00:24:11.487 }, 00:24:11.487 "memory_domains": [ 00:24:11.487 { 00:24:11.487 "dma_device_id": "system", 00:24:11.487 "dma_device_type": 1 00:24:11.487 }, 00:24:11.487 { 00:24:11.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:11.487 "dma_device_type": 2 00:24:11.487 } 00:24:11.487 ], 00:24:11.487 "driver_specific": { 00:24:11.487 "passthru": { 00:24:11.487 "name": "pt4", 00:24:11.487 "base_bdev_name": "malloc4" 00:24:11.487 } 00:24:11.487 } 00:24:11.487 }' 00:24:11.487 02:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:11.487 02:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:11.487 02:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:11.487 02:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:11.746 02:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:11.746 02:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:11.746 02:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:11.746 02:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:11.746 02:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:11.746 02:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:11.746 02:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:11.746 02:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:11.746 02:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:11.746 02:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:24:12.005 [2024-07-11 02:30:02.372147] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:12.005 02:30:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 8ee24ae7-6c77-41f7-b2fd-215a07f4e301 '!=' 8ee24ae7-6c77-41f7-b2fd-215a07f4e301 ']' 00:24:12.005 02:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:24:12.005 02:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:12.005 02:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:24:12.005 02:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1983835 00:24:12.005 02:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 1983835 ']' 00:24:12.005 02:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 1983835 00:24:12.005 02:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:24:12.005 02:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:12.005 02:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1983835 00:24:12.264 02:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:12.264 02:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:12.264 02:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1983835' 00:24:12.264 killing process with pid 1983835 00:24:12.264 02:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 1983835 00:24:12.264 [2024-07-11 02:30:02.444264] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:12.264 [2024-07-11 02:30:02.444323] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:12.264 [2024-07-11 02:30:02.444386] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:12.264 [2024-07-11 02:30:02.444398] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14bd9e0 name raid_bdev1, state offline 00:24:12.264 02:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 1983835 00:24:12.264 [2024-07-11 02:30:02.486855] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:12.264 02:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:24:12.264 00:24:12.264 real 0m16.879s 00:24:12.264 user 0m30.363s 00:24:12.264 sys 0m3.170s 00:24:12.264 02:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:12.264 02:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.264 ************************************ 00:24:12.264 END TEST raid_superblock_test 00:24:12.264 ************************************ 00:24:12.524 02:30:02 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:12.524 02:30:02 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:24:12.524 02:30:02 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:24:12.524 02:30:02 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:12.524 02:30:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:12.524 ************************************ 00:24:12.524 START TEST raid_read_error_test 00:24:12.524 ************************************ 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 
-- # raid_io_error_test concat 4 read 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.HTE2WQWYYa 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1986505 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1986505 /var/tmp/spdk-raid.sock 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:12.524 
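raid_io_error_test starts its own bdevperf instance rather than a bdev_svc app: -z makes it wait for RPC configuration, -f keeps it alive for the later error-injection phase, and all output goes to a mktemp'd log that the failure-rate check parses at the end. A sketch of the traced launch (backgrounding and the log redirection are assumptions; the flags are verbatim from the trace):

  bdevperf_log=$(mktemp -p /raidtest)
  /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf \
      -r $sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k \
      -q 1 -z -f -L bdev_raid > "$bdevperf_log" 2>&1 &
  raid_pid=$!
  waitforlisten "$raid_pid" "$sock"   # harness helper: block until the RPC socket answers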
02:30:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 1986505 ']' 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:12.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:12.524 02:30:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.524 [2024-07-11 02:30:02.849751] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:24:12.524 [2024-07-11 02:30:02.849809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1986505 ] 00:24:12.783 [2024-07-11 02:30:02.970793] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.783 [2024-07-11 02:30:03.019406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.783 [2024-07-11 02:30:03.074831] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:12.783 [2024-07-11 02:30:03.074856] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:12.783 02:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:12.783 02:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:24:12.783 02:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:12.783 02:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:13.042 BaseBdev1_malloc 00:24:13.042 02:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:24:13.302 true 00:24:13.302 02:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:13.561 [2024-07-11 02:30:03.865547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:13.561 [2024-07-11 02:30:03.865595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:13.561 [2024-07-11 02:30:03.865615] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20ca330 00:24:13.561 [2024-07-11 02:30:03.865628] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:13.561 [2024-07-11 02:30:03.867389] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:13.561 [2024-07-11 02:30:03.867420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:13.561 BaseBdev1 00:24:13.561 02:30:03 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:13.561 02:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:13.820 BaseBdev2_malloc 00:24:13.820 02:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:24:14.078 true 00:24:14.078 02:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:14.337 [2024-07-11 02:30:04.616005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:14.337 [2024-07-11 02:30:04.616046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:14.337 [2024-07-11 02:30:04.616066] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20c3b40 00:24:14.337 [2024-07-11 02:30:04.616079] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:14.337 [2024-07-11 02:30:04.617541] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:14.337 [2024-07-11 02:30:04.617571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:14.337 BaseBdev2 00:24:14.337 02:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:14.337 02:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:14.596 BaseBdev3_malloc 00:24:14.596 02:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:24:14.855 true 00:24:14.855 02:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:15.115 [2024-07-11 02:30:05.366474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:15.115 [2024-07-11 02:30:05.366520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:15.115 [2024-07-11 02:30:05.366541] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20c70f0 00:24:15.115 [2024-07-11 02:30:05.366554] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:15.115 [2024-07-11 02:30:05.368002] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:15.115 [2024-07-11 02:30:05.368033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:15.115 BaseBdev3 00:24:15.115 02:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:15.115 02:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:15.374 BaseBdev4_malloc 00:24:15.374 02:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:24:15.634 true 00:24:15.634 02:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:24:15.893 [2024-07-11 02:30:06.060756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:24:15.893 [2024-07-11 02:30:06.060803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:15.893 [2024-07-11 02:30:06.060825] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f184c0 00:24:15.893 [2024-07-11 02:30:06.060837] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:15.893 [2024-07-11 02:30:06.062239] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:15.893 [2024-07-11 02:30:06.062266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:15.893 BaseBdev4 00:24:15.893 02:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:24:15.893 [2024-07-11 02:30:06.309462] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:15.893 [2024-07-11 02:30:06.310750] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:15.893 [2024-07-11 02:30:06.310825] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:15.893 [2024-07-11 02:30:06.310883] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:15.893 [2024-07-11 02:30:06.311102] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x20bf890 00:24:15.893 [2024-07-11 02:30:06.311113] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:15.893 [2024-07-11 02:30:06.311310] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20bf860 00:24:15.893 [2024-07-11 02:30:06.311468] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20bf890 00:24:15.893 [2024-07-11 02:30:06.311478] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x20bf890 00:24:15.893 [2024-07-11 02:30:06.311578] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:16.152 02:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:16.152 02:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:16.152 02:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:16.152 02:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:16.152 02:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:16.152 02:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:16.152 02:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:16.152 02:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local 
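The @812-@815 loop above builds each base bdev as a three-layer stack: a 32 MiB / 512 B-block malloc disk, an error bdev wrapped around it (bdev_error_create names it EE_<base>), and a passthru bdev on top that the raid actually consumes. Once all four stacks exist, @819 assembles them into a concat raid with a 64k strip and an on-disk superblock (-s). The traced RPC sequence, condensed ($rpc and $sock as in the first sketch above):

  for i in 1 2 3 4; do
      $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
      $rpc -s $sock bdev_error_create BaseBdev${i}_malloc            # creates EE_BaseBdev${i}_malloc
      $rpc -s $sock bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
  done
  $rpc -s $sock bdev_raid_create -z 64 -r concat \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s

The error bdev in the middle of each stack is the hook that lets the test later fail I/O on exactly one leg of the raid.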
num_base_bdevs 00:24:16.152 02:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:16.152 02:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:16.152 02:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.152 02:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.412 02:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:16.412 "name": "raid_bdev1", 00:24:16.412 "uuid": "411d4d46-a4c2-4a9d-b0c0-088204d3e8aa", 00:24:16.412 "strip_size_kb": 64, 00:24:16.412 "state": "online", 00:24:16.412 "raid_level": "concat", 00:24:16.412 "superblock": true, 00:24:16.412 "num_base_bdevs": 4, 00:24:16.412 "num_base_bdevs_discovered": 4, 00:24:16.412 "num_base_bdevs_operational": 4, 00:24:16.412 "base_bdevs_list": [ 00:24:16.412 { 00:24:16.412 "name": "BaseBdev1", 00:24:16.412 "uuid": "03eda877-d2aa-55c9-b792-93e3cd7ce379", 00:24:16.412 "is_configured": true, 00:24:16.412 "data_offset": 2048, 00:24:16.412 "data_size": 63488 00:24:16.412 }, 00:24:16.412 { 00:24:16.412 "name": "BaseBdev2", 00:24:16.412 "uuid": "8aad3099-666a-50f3-afea-9784052a75f9", 00:24:16.412 "is_configured": true, 00:24:16.412 "data_offset": 2048, 00:24:16.412 "data_size": 63488 00:24:16.412 }, 00:24:16.412 { 00:24:16.412 "name": "BaseBdev3", 00:24:16.412 "uuid": "d8964330-72ab-52a3-a4d0-851207dce235", 00:24:16.412 "is_configured": true, 00:24:16.412 "data_offset": 2048, 00:24:16.412 "data_size": 63488 00:24:16.412 }, 00:24:16.412 { 00:24:16.412 "name": "BaseBdev4", 00:24:16.412 "uuid": "d6d1c755-bed8-5890-9a42-ff44cabb034e", 00:24:16.412 "is_configured": true, 00:24:16.412 "data_offset": 2048, 00:24:16.412 "data_size": 63488 00:24:16.412 } 00:24:16.412 ] 00:24:16.412 }' 00:24:16.412 02:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:16.412 02:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.981 02:30:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:24:16.981 02:30:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:16.981 [2024-07-11 02:30:07.308375] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f12e40 00:24:17.917 02:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:24:18.176 02:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:24:18.176 02:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:24:18.176 02:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:24:18.176 02:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:18.176 02:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:18.176 02:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:18.176 02:30:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:18.176 02:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:18.176 02:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:18.176 02:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:18.176 02:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:18.176 02:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:18.176 02:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:18.176 02:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.176 02:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.436 02:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:18.436 "name": "raid_bdev1", 00:24:18.436 "uuid": "411d4d46-a4c2-4a9d-b0c0-088204d3e8aa", 00:24:18.436 "strip_size_kb": 64, 00:24:18.436 "state": "online", 00:24:18.436 "raid_level": "concat", 00:24:18.436 "superblock": true, 00:24:18.436 "num_base_bdevs": 4, 00:24:18.436 "num_base_bdevs_discovered": 4, 00:24:18.436 "num_base_bdevs_operational": 4, 00:24:18.436 "base_bdevs_list": [ 00:24:18.436 { 00:24:18.436 "name": "BaseBdev1", 00:24:18.436 "uuid": "03eda877-d2aa-55c9-b792-93e3cd7ce379", 00:24:18.436 "is_configured": true, 00:24:18.436 "data_offset": 2048, 00:24:18.436 "data_size": 63488 00:24:18.436 }, 00:24:18.436 { 00:24:18.436 "name": "BaseBdev2", 00:24:18.436 "uuid": "8aad3099-666a-50f3-afea-9784052a75f9", 00:24:18.436 "is_configured": true, 00:24:18.436 "data_offset": 2048, 00:24:18.436 "data_size": 63488 00:24:18.436 }, 00:24:18.436 { 00:24:18.436 "name": "BaseBdev3", 00:24:18.436 "uuid": "d8964330-72ab-52a3-a4d0-851207dce235", 00:24:18.436 "is_configured": true, 00:24:18.436 "data_offset": 2048, 00:24:18.436 "data_size": 63488 00:24:18.436 }, 00:24:18.436 { 00:24:18.436 "name": "BaseBdev4", 00:24:18.436 "uuid": "d6d1c755-bed8-5890-9a42-ff44cabb034e", 00:24:18.436 "is_configured": true, 00:24:18.436 "data_offset": 2048, 00:24:18.436 "data_size": 63488 00:24:18.436 } 00:24:18.436 ] 00:24:18.436 }' 00:24:18.436 02:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:18.436 02:30:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:19.003 02:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:19.262 [2024-07-11 02:30:09.578440] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:19.262 [2024-07-11 02:30:09.578481] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:19.262 [2024-07-11 02:30:09.581640] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:19.262 [2024-07-11 02:30:09.581684] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:19.262 [2024-07-11 02:30:09.581724] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:19.262 [2024-07-11 02:30:09.581736] 
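verify_raid_bdev_state, traced at @116-@126 above, fetches every raid bdev and filters for raid_bdev1 by name; the expected values passed in (online, concat, strip 64, 4 base bdevs) are then checked against the JSON. A sketch of that check (only the @126 select query is verbatim in the trace; the per-field reads are an assumed condensation of the rest of the helper):

  tmp=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  [[ $(echo "$tmp" | jq -r .state) == online ]]
  [[ $(echo "$tmp" | jq -r .raid_level) == concat ]]
  [[ $(echo "$tmp" | jq -r .strip_size_kb) -eq 64 ]]
  [[ $(echo "$tmp" | jq -r .num_base_bdevs_discovered) -eq 4 ]]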
bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20bf890 name raid_bdev1, state offline 00:24:19.262 0 00:24:19.262 02:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1986505 00:24:19.262 02:30:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 1986505 ']' 00:24:19.262 02:30:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 1986505 00:24:19.262 02:30:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:24:19.262 02:30:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:19.262 02:30:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1986505 00:24:19.262 02:30:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:19.262 02:30:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:19.262 02:30:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1986505' 00:24:19.262 killing process with pid 1986505 00:24:19.262 02:30:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 1986505 00:24:19.262 [2024-07-11 02:30:09.661907] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:19.262 02:30:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 1986505 00:24:19.521 [2024-07-11 02:30:09.693917] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:19.521 02:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.HTE2WQWYYa 00:24:19.521 02:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:24:19.521 02:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:24:19.521 02:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.44 00:24:19.521 02:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:24:19.521 02:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:19.521 02:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:24:19.521 02:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.44 != \0\.\0\0 ]] 00:24:19.521 00:24:19.521 real 0m7.136s 00:24:19.521 user 0m11.647s 00:24:19.521 sys 0m1.368s 00:24:19.521 02:30:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:19.521 02:30:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:19.521 ************************************ 00:24:19.521 END TEST raid_read_error_test 00:24:19.521 ************************************ 00:24:19.781 02:30:09 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:19.781 02:30:09 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:24:19.781 02:30:09 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:24:19.781 02:30:09 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:19.781 02:30:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:19.781 ************************************ 00:24:19.781 START TEST raid_write_error_test 00:24:19.781 ************************************ 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # 
raid_io_error_test concat 4 write 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.fjaYNVkXDc 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1987876 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1987876 /var/tmp/spdk-raid.sock 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 1987876 ']' 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:19.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:19.781 02:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:19.781 [2024-07-11 02:30:10.089101] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:24:19.781 [2024-07-11 02:30:10.089177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1987876 ] 00:24:20.040 [2024-07-11 02:30:10.226854] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.040 [2024-07-11 02:30:10.279788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.040 [2024-07-11 02:30:10.341579] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:20.040 [2024-07-11 02:30:10.341615] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:20.976 02:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:20.976 02:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:24:20.977 02:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:20.977 02:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:20.977 BaseBdev1_malloc 00:24:20.977 02:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:24:21.235 true 00:24:21.235 02:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:21.494 [2024-07-11 02:30:11.750074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:21.494 [2024-07-11 02:30:11.750114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:21.494 [2024-07-11 02:30:11.750133] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ae0330 00:24:21.494 [2024-07-11 02:30:11.750145] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:21.494 [2024-07-11 02:30:11.751799] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:21.494 [2024-07-11 02:30:11.751829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:21.494 BaseBdev1 00:24:21.494 02:30:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:21.494 02:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:21.754 BaseBdev2_malloc 00:24:21.754 02:30:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:24:22.013 true 00:24:22.013 02:30:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:22.273 [2024-07-11 02:30:12.508552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:22.273 [2024-07-11 02:30:12.508593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:22.273 [2024-07-11 02:30:12.508611] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ad9b40 00:24:22.273 [2024-07-11 02:30:12.508624] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:22.273 [2024-07-11 02:30:12.509963] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:22.273 [2024-07-11 02:30:12.509989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:22.273 BaseBdev2 00:24:22.273 02:30:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:22.273 02:30:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:22.532 BaseBdev3_malloc 00:24:22.532 02:30:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:24:22.790 true 00:24:22.790 02:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:23.049 [2024-07-11 02:30:13.263007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:23.049 [2024-07-11 02:30:13.263048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:23.049 [2024-07-11 02:30:13.263069] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1add0f0 00:24:23.049 [2024-07-11 02:30:13.263081] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:23.049 [2024-07-11 02:30:13.264527] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:23.049 [2024-07-11 02:30:13.264555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:23.049 BaseBdev3 00:24:23.049 02:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:23.049 02:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:23.308 BaseBdev4_malloc 00:24:23.308 02:30:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:24:23.568 true 00:24:23.568 02:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:24:23.568 [2024-07-11 02:30:13.965326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:24:23.568 [2024-07-11 02:30:13.965368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:23.568 [2024-07-11 02:30:13.965388] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x192e4c0 00:24:23.568 [2024-07-11 02:30:13.965401] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:23.568 [2024-07-11 02:30:13.966788] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:23.568 [2024-07-11 02:30:13.966815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:23.568 BaseBdev4 00:24:23.568 02:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:24:23.828 [2024-07-11 02:30:14.218043] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:23.828 [2024-07-11 02:30:14.219236] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:23.828 [2024-07-11 02:30:14.219303] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:23.828 [2024-07-11 02:30:14.219363] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:23.828 [2024-07-11 02:30:14.219579] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1ad5890 00:24:23.828 [2024-07-11 02:30:14.219590] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:23.828 [2024-07-11 02:30:14.219779] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ad5860 00:24:23.828 [2024-07-11 02:30:14.219931] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1ad5890 00:24:23.828 [2024-07-11 02:30:14.219941] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1ad5890 00:24:23.828 [2024-07-11 02:30:14.220037] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:23.828 02:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:23.828 02:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:23.828 02:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:23.828 02:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:23.828 02:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:23.828 02:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:23.828 02:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:23.828 02:30:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:23.828 02:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:23.828 02:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:23.828 02:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.828 02:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.086 02:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:24.086 "name": "raid_bdev1", 00:24:24.086 "uuid": "d9a201e8-efc6-48fd-b394-2fb26cc3151d", 00:24:24.086 "strip_size_kb": 64, 00:24:24.086 "state": "online", 00:24:24.086 "raid_level": "concat", 00:24:24.086 "superblock": true, 00:24:24.086 "num_base_bdevs": 4, 00:24:24.086 "num_base_bdevs_discovered": 4, 00:24:24.086 "num_base_bdevs_operational": 4, 00:24:24.086 "base_bdevs_list": [ 00:24:24.086 { 00:24:24.086 "name": "BaseBdev1", 00:24:24.086 "uuid": "624ed737-82b0-5b51-9c8f-17cf60d7d256", 00:24:24.086 "is_configured": true, 00:24:24.086 "data_offset": 2048, 00:24:24.086 "data_size": 63488 00:24:24.086 }, 00:24:24.086 { 00:24:24.086 "name": "BaseBdev2", 00:24:24.086 "uuid": "5f19dde2-f879-54bf-ad1f-d63760920944", 00:24:24.086 "is_configured": true, 00:24:24.086 "data_offset": 2048, 00:24:24.086 "data_size": 63488 00:24:24.086 }, 00:24:24.086 { 00:24:24.086 "name": "BaseBdev3", 00:24:24.086 "uuid": "143c05b6-8235-5f78-bb6c-bf5e4d529fa3", 00:24:24.086 "is_configured": true, 00:24:24.086 "data_offset": 2048, 00:24:24.086 "data_size": 63488 00:24:24.086 }, 00:24:24.086 { 00:24:24.086 "name": "BaseBdev4", 00:24:24.086 "uuid": "2c74223a-13af-5347-912c-efbb4b99a1b3", 00:24:24.086 "is_configured": true, 00:24:24.086 "data_offset": 2048, 00:24:24.086 "data_size": 63488 00:24:24.086 } 00:24:24.086 ] 00:24:24.086 }' 00:24:24.086 02:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:24.086 02:30:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:25.023 02:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:24:25.023 02:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:25.023 [2024-07-11 02:30:15.208916] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1928e40 00:24:26.060 02:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:24:26.060 02:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:24:26.060 02:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:24:26.060 02:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:24:26.060 02:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:26.060 02:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:26.060 02:30:16 bdev_raid.raid_write_error_test -- 
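With bdevperf's perform_tests running, @827 arms the fault: every write the concat raid routes through BaseBdev1's leg now fails inside the EE_ error bdev, which is what drives the non-zero failure rate measured at the end of the test. The injection is a single RPC, arguments as traced:

  $rpc -s $sock bdev_error_inject_error EE_BaseBdev1_malloc write failure

The read variant of the test earlier in this log is identical except that it passes read here.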
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:26.060 02:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:26.060 02:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:26.060 02:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:26.060 02:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:26.060 02:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:26.060 02:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:26.060 02:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:26.060 02:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.060 02:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:26.320 02:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:26.320 "name": "raid_bdev1", 00:24:26.320 "uuid": "d9a201e8-efc6-48fd-b394-2fb26cc3151d", 00:24:26.320 "strip_size_kb": 64, 00:24:26.320 "state": "online", 00:24:26.320 "raid_level": "concat", 00:24:26.320 "superblock": true, 00:24:26.320 "num_base_bdevs": 4, 00:24:26.320 "num_base_bdevs_discovered": 4, 00:24:26.320 "num_base_bdevs_operational": 4, 00:24:26.320 "base_bdevs_list": [ 00:24:26.320 { 00:24:26.320 "name": "BaseBdev1", 00:24:26.320 "uuid": "624ed737-82b0-5b51-9c8f-17cf60d7d256", 00:24:26.320 "is_configured": true, 00:24:26.320 "data_offset": 2048, 00:24:26.320 "data_size": 63488 00:24:26.320 }, 00:24:26.320 { 00:24:26.320 "name": "BaseBdev2", 00:24:26.320 "uuid": "5f19dde2-f879-54bf-ad1f-d63760920944", 00:24:26.320 "is_configured": true, 00:24:26.320 "data_offset": 2048, 00:24:26.320 "data_size": 63488 00:24:26.320 }, 00:24:26.320 { 00:24:26.320 "name": "BaseBdev3", 00:24:26.320 "uuid": "143c05b6-8235-5f78-bb6c-bf5e4d529fa3", 00:24:26.320 "is_configured": true, 00:24:26.320 "data_offset": 2048, 00:24:26.320 "data_size": 63488 00:24:26.320 }, 00:24:26.320 { 00:24:26.320 "name": "BaseBdev4", 00:24:26.320 "uuid": "2c74223a-13af-5347-912c-efbb4b99a1b3", 00:24:26.320 "is_configured": true, 00:24:26.320 "data_offset": 2048, 00:24:26.320 "data_size": 63488 00:24:26.320 } 00:24:26.320 ] 00:24:26.320 }' 00:24:26.320 02:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:26.320 02:30:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:26.889 02:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:27.148 [2024-07-11 02:30:17.458893] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:27.148 [2024-07-11 02:30:17.458931] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:27.148 [2024-07-11 02:30:17.462103] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:27.148 [2024-07-11 02:30:17.462148] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:27.148 [2024-07-11 02:30:17.462188] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:24:27.148 [2024-07-11 02:30:17.462199] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ad5890 name raid_bdev1, state offline 00:24:27.148 0 00:24:27.148 02:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1987876 00:24:27.148 02:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 1987876 ']' 00:24:27.148 02:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 1987876 00:24:27.148 02:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:24:27.148 02:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:27.148 02:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1987876 00:24:27.148 02:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:27.148 02:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:27.148 02:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1987876' 00:24:27.148 killing process with pid 1987876 00:24:27.148 02:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 1987876 00:24:27.148 [2024-07-11 02:30:17.544051] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:27.148 02:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 1987876 00:24:27.407 [2024-07-11 02:30:17.575226] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:27.407 02:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.fjaYNVkXDc 00:24:27.407 02:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:24:27.407 02:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:24:27.407 02:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:24:27.407 02:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:24:27.407 02:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:27.407 02:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:24:27.407 02:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:24:27.407 00:24:27.407 real 0m7.792s 00:24:27.407 user 0m12.424s 00:24:27.407 sys 0m1.455s 00:24:27.407 02:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:27.407 02:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:27.407 ************************************ 00:24:27.407 END TEST raid_write_error_test 00:24:27.407 ************************************ 00:24:27.666 02:30:17 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:27.666 02:30:17 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:24:27.666 02:30:17 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:24:27.666 02:30:17 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:24:27.666 02:30:17 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:27.667 02:30:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:27.667 
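The 0.45 above comes straight out of the bdevperf log: the @843 pipeline drops the Job header lines, keeps the raid_bdev1 summary row, and takes its sixth column as the failure rate per second, which must not be 0.00 once errors are being injected. The traced extraction, verbatim apart from the variable assignment:

  fail_per_s=$(grep -v Job /raidtest/tmp.fjaYNVkXDc | grep raid_bdev1 | awk '{print $6}')
  [[ $fail_per_s != "0.00" ]]   # injected errors must actually surface as failed I/O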
************************************ 00:24:27.667 START TEST raid_state_function_test 00:24:27.667 ************************************ 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 false 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1989028 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1989028' 00:24:27.667 Process raid 
pid: 1989028 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1989028 /var/tmp/spdk-raid.sock 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 1989028 ']' 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:27.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:27.667 02:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:27.667 [2024-07-11 02:30:17.958752] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:24:27.667 [2024-07-11 02:30:17.958820] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.667 [2024-07-11 02:30:18.081570] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.926 [2024-07-11 02:30:18.133466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.926 [2024-07-11 02:30:18.191684] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:27.926 [2024-07-11 02:30:18.191718] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:27.926 02:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:27.926 02:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:24:27.926 02:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:28.185 [2024-07-11 02:30:18.399971] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:28.185 [2024-07-11 02:30:18.400011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:28.185 [2024-07-11 02:30:18.400021] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:28.185 [2024-07-11 02:30:18.400033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:28.185 [2024-07-11 02:30:18.400042] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:28.185 [2024-07-11 02:30:18.400053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:28.185 [2024-07-11 02:30:18.400062] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:28.185 [2024-07-11 02:30:18.400073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: 
base bdev BaseBdev4 doesn't exist now 00:24:28.185 02:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:28.185 02:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:28.185 02:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:28.185 02:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:28.185 02:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:28.185 02:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:28.185 02:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:28.185 02:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:28.185 02:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:28.185 02:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:28.185 02:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.185 02:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:28.445 02:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:28.445 "name": "Existed_Raid", 00:24:28.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.445 "strip_size_kb": 0, 00:24:28.445 "state": "configuring", 00:24:28.445 "raid_level": "raid1", 00:24:28.445 "superblock": false, 00:24:28.445 "num_base_bdevs": 4, 00:24:28.445 "num_base_bdevs_discovered": 0, 00:24:28.445 "num_base_bdevs_operational": 4, 00:24:28.445 "base_bdevs_list": [ 00:24:28.445 { 00:24:28.445 "name": "BaseBdev1", 00:24:28.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.445 "is_configured": false, 00:24:28.445 "data_offset": 0, 00:24:28.445 "data_size": 0 00:24:28.445 }, 00:24:28.445 { 00:24:28.445 "name": "BaseBdev2", 00:24:28.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.445 "is_configured": false, 00:24:28.445 "data_offset": 0, 00:24:28.445 "data_size": 0 00:24:28.445 }, 00:24:28.445 { 00:24:28.445 "name": "BaseBdev3", 00:24:28.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.445 "is_configured": false, 00:24:28.445 "data_offset": 0, 00:24:28.445 "data_size": 0 00:24:28.445 }, 00:24:28.445 { 00:24:28.445 "name": "BaseBdev4", 00:24:28.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.445 "is_configured": false, 00:24:28.445 "data_offset": 0, 00:24:28.445 "data_size": 0 00:24:28.445 } 00:24:28.445 ] 00:24:28.445 }' 00:24:28.445 02:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:28.445 02:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:29.014 02:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:29.014 [2024-07-11 02:30:19.422547] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:29.014 [2024-07-11 02:30:19.422577] bdev_raid.c: 366:raid_bdev_cleanup: 
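raid_state_function_test exercises the raid state machine against a plain bdev_svc app: @250 creates Existed_Raid while none of its four base bdevs exist yet, and the raid is accepted but parked in the configuring state with zero discovered base bdevs, as the JSON above confirms. A sketch of that sequence (socket and names as traced; raid1 takes no strip size, and this variant passes no superblock flag):

  $rpc -s $sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
  # expect: "state": "configuring", "num_base_bdevs_discovered": 0

As the trace goes on to show, creating BaseBdev1 afterwards (bdev_malloc_create 32 512 -b BaseBdev1) is enough for the configuring raid to claim it immediately.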
*DEBUG*: raid_bdev_cleanup, 0x178e730 name Existed_Raid, state configuring 00:24:29.273 02:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:29.273 [2024-07-11 02:30:19.663203] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:29.273 [2024-07-11 02:30:19.663232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:29.274 [2024-07-11 02:30:19.663242] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:29.274 [2024-07-11 02:30:19.663253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:29.274 [2024-07-11 02:30:19.663262] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:29.274 [2024-07-11 02:30:19.663273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:29.274 [2024-07-11 02:30:19.663281] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:29.274 [2024-07-11 02:30:19.663292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:29.274 02:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:29.533 [2024-07-11 02:30:19.913728] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:29.533 BaseBdev1 00:24:29.533 02:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:24:29.533 02:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:24:29.533 02:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:29.533 02:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:29.533 02:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:29.533 02:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:29.533 02:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:29.792 02:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:30.051 [ 00:24:30.051 { 00:24:30.051 "name": "BaseBdev1", 00:24:30.051 "aliases": [ 00:24:30.051 "c55640e6-8197-48b2-affb-f261a453b7b5" 00:24:30.051 ], 00:24:30.051 "product_name": "Malloc disk", 00:24:30.051 "block_size": 512, 00:24:30.051 "num_blocks": 65536, 00:24:30.051 "uuid": "c55640e6-8197-48b2-affb-f261a453b7b5", 00:24:30.051 "assigned_rate_limits": { 00:24:30.051 "rw_ios_per_sec": 0, 00:24:30.051 "rw_mbytes_per_sec": 0, 00:24:30.051 "r_mbytes_per_sec": 0, 00:24:30.051 "w_mbytes_per_sec": 0 00:24:30.051 }, 00:24:30.051 "claimed": true, 00:24:30.051 "claim_type": "exclusive_write", 00:24:30.051 "zoned": false, 00:24:30.051 "supported_io_types": { 00:24:30.051 "read": true, 00:24:30.051 
"write": true, 00:24:30.051 "unmap": true, 00:24:30.051 "flush": true, 00:24:30.051 "reset": true, 00:24:30.051 "nvme_admin": false, 00:24:30.051 "nvme_io": false, 00:24:30.051 "nvme_io_md": false, 00:24:30.051 "write_zeroes": true, 00:24:30.051 "zcopy": true, 00:24:30.051 "get_zone_info": false, 00:24:30.051 "zone_management": false, 00:24:30.051 "zone_append": false, 00:24:30.051 "compare": false, 00:24:30.051 "compare_and_write": false, 00:24:30.051 "abort": true, 00:24:30.051 "seek_hole": false, 00:24:30.051 "seek_data": false, 00:24:30.051 "copy": true, 00:24:30.051 "nvme_iov_md": false 00:24:30.051 }, 00:24:30.051 "memory_domains": [ 00:24:30.051 { 00:24:30.051 "dma_device_id": "system", 00:24:30.051 "dma_device_type": 1 00:24:30.051 }, 00:24:30.051 { 00:24:30.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.051 "dma_device_type": 2 00:24:30.051 } 00:24:30.051 ], 00:24:30.051 "driver_specific": {} 00:24:30.051 } 00:24:30.051 ] 00:24:30.051 02:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:30.051 02:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:30.051 02:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:30.051 02:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:30.051 02:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:30.051 02:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:30.051 02:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:30.051 02:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:30.051 02:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:30.051 02:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:30.051 02:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:30.051 02:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.051 02:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:30.310 02:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:30.310 "name": "Existed_Raid", 00:24:30.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.310 "strip_size_kb": 0, 00:24:30.310 "state": "configuring", 00:24:30.310 "raid_level": "raid1", 00:24:30.310 "superblock": false, 00:24:30.310 "num_base_bdevs": 4, 00:24:30.310 "num_base_bdevs_discovered": 1, 00:24:30.310 "num_base_bdevs_operational": 4, 00:24:30.310 "base_bdevs_list": [ 00:24:30.310 { 00:24:30.310 "name": "BaseBdev1", 00:24:30.310 "uuid": "c55640e6-8197-48b2-affb-f261a453b7b5", 00:24:30.310 "is_configured": true, 00:24:30.310 "data_offset": 0, 00:24:30.310 "data_size": 65536 00:24:30.310 }, 00:24:30.310 { 00:24:30.310 "name": "BaseBdev2", 00:24:30.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.310 "is_configured": false, 00:24:30.310 "data_offset": 0, 00:24:30.310 "data_size": 0 00:24:30.310 }, 00:24:30.310 { 00:24:30.310 "name": "BaseBdev3", 
00:24:30.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.310 "is_configured": false, 00:24:30.310 "data_offset": 0, 00:24:30.310 "data_size": 0 00:24:30.310 }, 00:24:30.310 { 00:24:30.310 "name": "BaseBdev4", 00:24:30.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.310 "is_configured": false, 00:24:30.310 "data_offset": 0, 00:24:30.310 "data_size": 0 00:24:30.310 } 00:24:30.310 ] 00:24:30.311 }' 00:24:30.311 02:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:30.311 02:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:30.877 02:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:31.135 [2024-07-11 02:30:21.477890] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:31.135 [2024-07-11 02:30:21.477933] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x178e060 name Existed_Raid, state configuring 00:24:31.136 02:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:31.394 [2024-07-11 02:30:21.722563] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:31.394 [2024-07-11 02:30:21.723970] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:31.394 [2024-07-11 02:30:21.724003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:31.394 [2024-07-11 02:30:21.724013] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:31.394 [2024-07-11 02:30:21.724025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:31.394 [2024-07-11 02:30:21.724034] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:31.394 [2024-07-11 02:30:21.724045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:31.394 02:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:24:31.394 02:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:31.394 02:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:31.394 02:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:31.394 02:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:31.394 02:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:31.394 02:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:31.394 02:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:31.394 02:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:31.394 02:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:31.394 02:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:24:31.394 02:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:31.394 02:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.394 02:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:31.653 02:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:31.653 "name": "Existed_Raid", 00:24:31.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.653 "strip_size_kb": 0, 00:24:31.653 "state": "configuring", 00:24:31.653 "raid_level": "raid1", 00:24:31.653 "superblock": false, 00:24:31.653 "num_base_bdevs": 4, 00:24:31.653 "num_base_bdevs_discovered": 1, 00:24:31.653 "num_base_bdevs_operational": 4, 00:24:31.653 "base_bdevs_list": [ 00:24:31.653 { 00:24:31.653 "name": "BaseBdev1", 00:24:31.653 "uuid": "c55640e6-8197-48b2-affb-f261a453b7b5", 00:24:31.653 "is_configured": true, 00:24:31.653 "data_offset": 0, 00:24:31.653 "data_size": 65536 00:24:31.653 }, 00:24:31.653 { 00:24:31.653 "name": "BaseBdev2", 00:24:31.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.653 "is_configured": false, 00:24:31.653 "data_offset": 0, 00:24:31.653 "data_size": 0 00:24:31.653 }, 00:24:31.653 { 00:24:31.653 "name": "BaseBdev3", 00:24:31.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.653 "is_configured": false, 00:24:31.653 "data_offset": 0, 00:24:31.653 "data_size": 0 00:24:31.653 }, 00:24:31.653 { 00:24:31.653 "name": "BaseBdev4", 00:24:31.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.653 "is_configured": false, 00:24:31.653 "data_offset": 0, 00:24:31.653 "data_size": 0 00:24:31.653 } 00:24:31.653 ] 00:24:31.653 }' 00:24:31.653 02:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:31.653 02:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.219 02:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:32.477 [2024-07-11 02:30:22.836924] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:32.477 BaseBdev2 00:24:32.477 02:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:24:32.477 02:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:24:32.477 02:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:32.477 02:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:32.477 02:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:32.477 02:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:32.477 02:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:32.735 02:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:32.993 [ 00:24:32.993 { 
00:24:32.993 "name": "BaseBdev2", 00:24:32.993 "aliases": [ 00:24:32.993 "b6b9ccea-a500-4a80-a50c-781b1a853ee0" 00:24:32.993 ], 00:24:32.993 "product_name": "Malloc disk", 00:24:32.993 "block_size": 512, 00:24:32.993 "num_blocks": 65536, 00:24:32.993 "uuid": "b6b9ccea-a500-4a80-a50c-781b1a853ee0", 00:24:32.993 "assigned_rate_limits": { 00:24:32.993 "rw_ios_per_sec": 0, 00:24:32.993 "rw_mbytes_per_sec": 0, 00:24:32.993 "r_mbytes_per_sec": 0, 00:24:32.993 "w_mbytes_per_sec": 0 00:24:32.993 }, 00:24:32.993 "claimed": true, 00:24:32.993 "claim_type": "exclusive_write", 00:24:32.993 "zoned": false, 00:24:32.993 "supported_io_types": { 00:24:32.993 "read": true, 00:24:32.993 "write": true, 00:24:32.993 "unmap": true, 00:24:32.993 "flush": true, 00:24:32.993 "reset": true, 00:24:32.993 "nvme_admin": false, 00:24:32.994 "nvme_io": false, 00:24:32.994 "nvme_io_md": false, 00:24:32.994 "write_zeroes": true, 00:24:32.994 "zcopy": true, 00:24:32.994 "get_zone_info": false, 00:24:32.994 "zone_management": false, 00:24:32.994 "zone_append": false, 00:24:32.994 "compare": false, 00:24:32.994 "compare_and_write": false, 00:24:32.994 "abort": true, 00:24:32.994 "seek_hole": false, 00:24:32.994 "seek_data": false, 00:24:32.994 "copy": true, 00:24:32.994 "nvme_iov_md": false 00:24:32.994 }, 00:24:32.994 "memory_domains": [ 00:24:32.994 { 00:24:32.994 "dma_device_id": "system", 00:24:32.994 "dma_device_type": 1 00:24:32.994 }, 00:24:32.994 { 00:24:32.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:32.994 "dma_device_type": 2 00:24:32.994 } 00:24:32.994 ], 00:24:32.994 "driver_specific": {} 00:24:32.994 } 00:24:32.994 ] 00:24:32.994 02:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:32.994 02:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:32.994 02:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:32.994 02:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:32.994 02:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:32.994 02:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:32.994 02:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:32.994 02:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:32.994 02:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:32.994 02:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:32.994 02:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:32.994 02:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:32.994 02:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:32.994 02:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.994 02:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:33.252 02:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:33.252 
"name": "Existed_Raid", 00:24:33.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.252 "strip_size_kb": 0, 00:24:33.252 "state": "configuring", 00:24:33.252 "raid_level": "raid1", 00:24:33.252 "superblock": false, 00:24:33.252 "num_base_bdevs": 4, 00:24:33.252 "num_base_bdevs_discovered": 2, 00:24:33.252 "num_base_bdevs_operational": 4, 00:24:33.252 "base_bdevs_list": [ 00:24:33.252 { 00:24:33.252 "name": "BaseBdev1", 00:24:33.252 "uuid": "c55640e6-8197-48b2-affb-f261a453b7b5", 00:24:33.252 "is_configured": true, 00:24:33.252 "data_offset": 0, 00:24:33.252 "data_size": 65536 00:24:33.252 }, 00:24:33.252 { 00:24:33.252 "name": "BaseBdev2", 00:24:33.252 "uuid": "b6b9ccea-a500-4a80-a50c-781b1a853ee0", 00:24:33.252 "is_configured": true, 00:24:33.252 "data_offset": 0, 00:24:33.252 "data_size": 65536 00:24:33.252 }, 00:24:33.252 { 00:24:33.252 "name": "BaseBdev3", 00:24:33.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.252 "is_configured": false, 00:24:33.252 "data_offset": 0, 00:24:33.252 "data_size": 0 00:24:33.252 }, 00:24:33.252 { 00:24:33.252 "name": "BaseBdev4", 00:24:33.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.252 "is_configured": false, 00:24:33.252 "data_offset": 0, 00:24:33.252 "data_size": 0 00:24:33.252 } 00:24:33.252 ] 00:24:33.252 }' 00:24:33.252 02:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:33.252 02:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:33.818 02:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:34.077 [2024-07-11 02:30:24.400549] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:34.077 BaseBdev3 00:24:34.077 02:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:24:34.077 02:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:24:34.077 02:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:34.077 02:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:34.077 02:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:34.077 02:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:34.077 02:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:34.335 02:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:34.593 [ 00:24:34.593 { 00:24:34.593 "name": "BaseBdev3", 00:24:34.593 "aliases": [ 00:24:34.593 "1d0f5866-68b0-4abc-957d-37ca782f2a7d" 00:24:34.593 ], 00:24:34.593 "product_name": "Malloc disk", 00:24:34.593 "block_size": 512, 00:24:34.593 "num_blocks": 65536, 00:24:34.593 "uuid": "1d0f5866-68b0-4abc-957d-37ca782f2a7d", 00:24:34.593 "assigned_rate_limits": { 00:24:34.593 "rw_ios_per_sec": 0, 00:24:34.593 "rw_mbytes_per_sec": 0, 00:24:34.593 "r_mbytes_per_sec": 0, 00:24:34.593 "w_mbytes_per_sec": 0 00:24:34.593 }, 00:24:34.593 "claimed": true, 00:24:34.593 "claim_type": 
"exclusive_write", 00:24:34.593 "zoned": false, 00:24:34.593 "supported_io_types": { 00:24:34.593 "read": true, 00:24:34.593 "write": true, 00:24:34.593 "unmap": true, 00:24:34.593 "flush": true, 00:24:34.593 "reset": true, 00:24:34.593 "nvme_admin": false, 00:24:34.593 "nvme_io": false, 00:24:34.593 "nvme_io_md": false, 00:24:34.593 "write_zeroes": true, 00:24:34.593 "zcopy": true, 00:24:34.593 "get_zone_info": false, 00:24:34.593 "zone_management": false, 00:24:34.593 "zone_append": false, 00:24:34.593 "compare": false, 00:24:34.593 "compare_and_write": false, 00:24:34.593 "abort": true, 00:24:34.593 "seek_hole": false, 00:24:34.593 "seek_data": false, 00:24:34.593 "copy": true, 00:24:34.593 "nvme_iov_md": false 00:24:34.593 }, 00:24:34.593 "memory_domains": [ 00:24:34.593 { 00:24:34.593 "dma_device_id": "system", 00:24:34.593 "dma_device_type": 1 00:24:34.593 }, 00:24:34.593 { 00:24:34.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:34.593 "dma_device_type": 2 00:24:34.593 } 00:24:34.593 ], 00:24:34.593 "driver_specific": {} 00:24:34.593 } 00:24:34.593 ] 00:24:34.593 02:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:34.593 02:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:34.593 02:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:34.593 02:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:34.593 02:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:34.593 02:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:34.593 02:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:34.593 02:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:34.593 02:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:34.593 02:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:34.593 02:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:34.593 02:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:34.593 02:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:34.593 02:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.593 02:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:34.851 02:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:34.851 "name": "Existed_Raid", 00:24:34.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:34.851 "strip_size_kb": 0, 00:24:34.851 "state": "configuring", 00:24:34.851 "raid_level": "raid1", 00:24:34.851 "superblock": false, 00:24:34.851 "num_base_bdevs": 4, 00:24:34.851 "num_base_bdevs_discovered": 3, 00:24:34.851 "num_base_bdevs_operational": 4, 00:24:34.851 "base_bdevs_list": [ 00:24:34.851 { 00:24:34.851 "name": "BaseBdev1", 00:24:34.851 "uuid": "c55640e6-8197-48b2-affb-f261a453b7b5", 00:24:34.851 "is_configured": true, 00:24:34.851 
"data_offset": 0, 00:24:34.851 "data_size": 65536 00:24:34.851 }, 00:24:34.851 { 00:24:34.851 "name": "BaseBdev2", 00:24:34.851 "uuid": "b6b9ccea-a500-4a80-a50c-781b1a853ee0", 00:24:34.851 "is_configured": true, 00:24:34.851 "data_offset": 0, 00:24:34.851 "data_size": 65536 00:24:34.851 }, 00:24:34.851 { 00:24:34.851 "name": "BaseBdev3", 00:24:34.851 "uuid": "1d0f5866-68b0-4abc-957d-37ca782f2a7d", 00:24:34.851 "is_configured": true, 00:24:34.851 "data_offset": 0, 00:24:34.851 "data_size": 65536 00:24:34.851 }, 00:24:34.851 { 00:24:34.851 "name": "BaseBdev4", 00:24:34.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:34.851 "is_configured": false, 00:24:34.851 "data_offset": 0, 00:24:34.851 "data_size": 0 00:24:34.851 } 00:24:34.851 ] 00:24:34.851 }' 00:24:34.851 02:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:34.851 02:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:35.417 02:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:35.675 [2024-07-11 02:30:26.000192] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:35.675 [2024-07-11 02:30:26.000233] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1940f90 00:24:35.675 [2024-07-11 02:30:26.000241] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:24:35.675 [2024-07-11 02:30:26.000491] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17944c0 00:24:35.675 [2024-07-11 02:30:26.000616] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1940f90 00:24:35.675 [2024-07-11 02:30:26.000627] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1940f90 00:24:35.675 [2024-07-11 02:30:26.000799] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:35.675 BaseBdev4 00:24:35.675 02:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:24:35.675 02:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:24:35.675 02:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:35.675 02:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:35.675 02:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:35.675 02:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:35.676 02:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:35.933 02:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:36.191 [ 00:24:36.191 { 00:24:36.191 "name": "BaseBdev4", 00:24:36.191 "aliases": [ 00:24:36.191 "28fdaf4b-e1aa-4626-8e31-d822bf7acab1" 00:24:36.191 ], 00:24:36.191 "product_name": "Malloc disk", 00:24:36.191 "block_size": 512, 00:24:36.191 "num_blocks": 65536, 00:24:36.191 "uuid": "28fdaf4b-e1aa-4626-8e31-d822bf7acab1", 00:24:36.191 "assigned_rate_limits": 
{ 00:24:36.191 "rw_ios_per_sec": 0, 00:24:36.191 "rw_mbytes_per_sec": 0, 00:24:36.191 "r_mbytes_per_sec": 0, 00:24:36.191 "w_mbytes_per_sec": 0 00:24:36.191 }, 00:24:36.191 "claimed": true, 00:24:36.191 "claim_type": "exclusive_write", 00:24:36.191 "zoned": false, 00:24:36.191 "supported_io_types": { 00:24:36.191 "read": true, 00:24:36.191 "write": true, 00:24:36.191 "unmap": true, 00:24:36.191 "flush": true, 00:24:36.191 "reset": true, 00:24:36.191 "nvme_admin": false, 00:24:36.191 "nvme_io": false, 00:24:36.191 "nvme_io_md": false, 00:24:36.191 "write_zeroes": true, 00:24:36.191 "zcopy": true, 00:24:36.191 "get_zone_info": false, 00:24:36.191 "zone_management": false, 00:24:36.191 "zone_append": false, 00:24:36.191 "compare": false, 00:24:36.191 "compare_and_write": false, 00:24:36.191 "abort": true, 00:24:36.191 "seek_hole": false, 00:24:36.191 "seek_data": false, 00:24:36.191 "copy": true, 00:24:36.191 "nvme_iov_md": false 00:24:36.191 }, 00:24:36.191 "memory_domains": [ 00:24:36.191 { 00:24:36.191 "dma_device_id": "system", 00:24:36.191 "dma_device_type": 1 00:24:36.191 }, 00:24:36.191 { 00:24:36.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:36.191 "dma_device_type": 2 00:24:36.191 } 00:24:36.191 ], 00:24:36.191 "driver_specific": {} 00:24:36.191 } 00:24:36.191 ] 00:24:36.191 02:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:36.191 02:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:36.191 02:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:36.191 02:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:24:36.191 02:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:36.191 02:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:36.191 02:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:36.191 02:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:36.191 02:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:36.191 02:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:36.191 02:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:36.191 02:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:36.191 02:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:36.191 02:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.191 02:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:36.449 02:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:36.449 "name": "Existed_Raid", 00:24:36.449 "uuid": "f52e63d5-79ea-43e2-9ab7-703e63cf8e16", 00:24:36.449 "strip_size_kb": 0, 00:24:36.449 "state": "online", 00:24:36.449 "raid_level": "raid1", 00:24:36.449 "superblock": false, 00:24:36.449 "num_base_bdevs": 4, 00:24:36.449 "num_base_bdevs_discovered": 4, 00:24:36.449 "num_base_bdevs_operational": 4, 
00:24:36.449 "base_bdevs_list": [ 00:24:36.449 { 00:24:36.449 "name": "BaseBdev1", 00:24:36.449 "uuid": "c55640e6-8197-48b2-affb-f261a453b7b5", 00:24:36.449 "is_configured": true, 00:24:36.449 "data_offset": 0, 00:24:36.449 "data_size": 65536 00:24:36.449 }, 00:24:36.449 { 00:24:36.449 "name": "BaseBdev2", 00:24:36.449 "uuid": "b6b9ccea-a500-4a80-a50c-781b1a853ee0", 00:24:36.449 "is_configured": true, 00:24:36.449 "data_offset": 0, 00:24:36.449 "data_size": 65536 00:24:36.449 }, 00:24:36.449 { 00:24:36.449 "name": "BaseBdev3", 00:24:36.449 "uuid": "1d0f5866-68b0-4abc-957d-37ca782f2a7d", 00:24:36.449 "is_configured": true, 00:24:36.449 "data_offset": 0, 00:24:36.449 "data_size": 65536 00:24:36.449 }, 00:24:36.449 { 00:24:36.449 "name": "BaseBdev4", 00:24:36.449 "uuid": "28fdaf4b-e1aa-4626-8e31-d822bf7acab1", 00:24:36.449 "is_configured": true, 00:24:36.449 "data_offset": 0, 00:24:36.449 "data_size": 65536 00:24:36.449 } 00:24:36.449 ] 00:24:36.449 }' 00:24:36.449 02:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:36.449 02:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:37.015 02:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:24:37.015 02:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:37.015 02:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:37.015 02:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:37.015 02:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:37.015 02:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:37.015 02:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:37.015 02:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:37.273 [2024-07-11 02:30:27.576693] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:37.273 02:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:37.273 "name": "Existed_Raid", 00:24:37.273 "aliases": [ 00:24:37.273 "f52e63d5-79ea-43e2-9ab7-703e63cf8e16" 00:24:37.273 ], 00:24:37.273 "product_name": "Raid Volume", 00:24:37.273 "block_size": 512, 00:24:37.273 "num_blocks": 65536, 00:24:37.274 "uuid": "f52e63d5-79ea-43e2-9ab7-703e63cf8e16", 00:24:37.274 "assigned_rate_limits": { 00:24:37.274 "rw_ios_per_sec": 0, 00:24:37.274 "rw_mbytes_per_sec": 0, 00:24:37.274 "r_mbytes_per_sec": 0, 00:24:37.274 "w_mbytes_per_sec": 0 00:24:37.274 }, 00:24:37.274 "claimed": false, 00:24:37.274 "zoned": false, 00:24:37.274 "supported_io_types": { 00:24:37.274 "read": true, 00:24:37.274 "write": true, 00:24:37.274 "unmap": false, 00:24:37.274 "flush": false, 00:24:37.274 "reset": true, 00:24:37.274 "nvme_admin": false, 00:24:37.274 "nvme_io": false, 00:24:37.274 "nvme_io_md": false, 00:24:37.274 "write_zeroes": true, 00:24:37.274 "zcopy": false, 00:24:37.274 "get_zone_info": false, 00:24:37.274 "zone_management": false, 00:24:37.274 "zone_append": false, 00:24:37.274 "compare": false, 00:24:37.274 "compare_and_write": false, 00:24:37.274 "abort": false, 00:24:37.274 "seek_hole": false, 00:24:37.274 "seek_data": false, 
00:24:37.274 "copy": false, 00:24:37.274 "nvme_iov_md": false 00:24:37.274 }, 00:24:37.274 "memory_domains": [ 00:24:37.274 { 00:24:37.274 "dma_device_id": "system", 00:24:37.274 "dma_device_type": 1 00:24:37.274 }, 00:24:37.274 { 00:24:37.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:37.274 "dma_device_type": 2 00:24:37.274 }, 00:24:37.274 { 00:24:37.274 "dma_device_id": "system", 00:24:37.274 "dma_device_type": 1 00:24:37.274 }, 00:24:37.274 { 00:24:37.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:37.274 "dma_device_type": 2 00:24:37.274 }, 00:24:37.274 { 00:24:37.274 "dma_device_id": "system", 00:24:37.274 "dma_device_type": 1 00:24:37.274 }, 00:24:37.274 { 00:24:37.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:37.274 "dma_device_type": 2 00:24:37.274 }, 00:24:37.274 { 00:24:37.274 "dma_device_id": "system", 00:24:37.274 "dma_device_type": 1 00:24:37.274 }, 00:24:37.274 { 00:24:37.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:37.274 "dma_device_type": 2 00:24:37.274 } 00:24:37.274 ], 00:24:37.274 "driver_specific": { 00:24:37.274 "raid": { 00:24:37.274 "uuid": "f52e63d5-79ea-43e2-9ab7-703e63cf8e16", 00:24:37.274 "strip_size_kb": 0, 00:24:37.274 "state": "online", 00:24:37.274 "raid_level": "raid1", 00:24:37.274 "superblock": false, 00:24:37.274 "num_base_bdevs": 4, 00:24:37.274 "num_base_bdevs_discovered": 4, 00:24:37.274 "num_base_bdevs_operational": 4, 00:24:37.274 "base_bdevs_list": [ 00:24:37.274 { 00:24:37.274 "name": "BaseBdev1", 00:24:37.274 "uuid": "c55640e6-8197-48b2-affb-f261a453b7b5", 00:24:37.274 "is_configured": true, 00:24:37.274 "data_offset": 0, 00:24:37.274 "data_size": 65536 00:24:37.274 }, 00:24:37.274 { 00:24:37.274 "name": "BaseBdev2", 00:24:37.274 "uuid": "b6b9ccea-a500-4a80-a50c-781b1a853ee0", 00:24:37.274 "is_configured": true, 00:24:37.274 "data_offset": 0, 00:24:37.274 "data_size": 65536 00:24:37.274 }, 00:24:37.274 { 00:24:37.274 "name": "BaseBdev3", 00:24:37.274 "uuid": "1d0f5866-68b0-4abc-957d-37ca782f2a7d", 00:24:37.274 "is_configured": true, 00:24:37.274 "data_offset": 0, 00:24:37.274 "data_size": 65536 00:24:37.274 }, 00:24:37.274 { 00:24:37.274 "name": "BaseBdev4", 00:24:37.274 "uuid": "28fdaf4b-e1aa-4626-8e31-d822bf7acab1", 00:24:37.274 "is_configured": true, 00:24:37.274 "data_offset": 0, 00:24:37.274 "data_size": 65536 00:24:37.274 } 00:24:37.274 ] 00:24:37.274 } 00:24:37.274 } 00:24:37.274 }' 00:24:37.274 02:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:37.274 02:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:24:37.274 BaseBdev2 00:24:37.274 BaseBdev3 00:24:37.274 BaseBdev4' 00:24:37.274 02:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:37.274 02:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:24:37.274 02:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:37.532 02:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:37.532 "name": "BaseBdev1", 00:24:37.532 "aliases": [ 00:24:37.532 "c55640e6-8197-48b2-affb-f261a453b7b5" 00:24:37.532 ], 00:24:37.532 "product_name": "Malloc disk", 00:24:37.532 "block_size": 512, 00:24:37.532 "num_blocks": 65536, 00:24:37.532 "uuid": 
"c55640e6-8197-48b2-affb-f261a453b7b5", 00:24:37.532 "assigned_rate_limits": { 00:24:37.532 "rw_ios_per_sec": 0, 00:24:37.532 "rw_mbytes_per_sec": 0, 00:24:37.532 "r_mbytes_per_sec": 0, 00:24:37.532 "w_mbytes_per_sec": 0 00:24:37.532 }, 00:24:37.532 "claimed": true, 00:24:37.532 "claim_type": "exclusive_write", 00:24:37.532 "zoned": false, 00:24:37.532 "supported_io_types": { 00:24:37.532 "read": true, 00:24:37.532 "write": true, 00:24:37.532 "unmap": true, 00:24:37.532 "flush": true, 00:24:37.532 "reset": true, 00:24:37.532 "nvme_admin": false, 00:24:37.532 "nvme_io": false, 00:24:37.532 "nvme_io_md": false, 00:24:37.532 "write_zeroes": true, 00:24:37.532 "zcopy": true, 00:24:37.532 "get_zone_info": false, 00:24:37.532 "zone_management": false, 00:24:37.532 "zone_append": false, 00:24:37.532 "compare": false, 00:24:37.532 "compare_and_write": false, 00:24:37.532 "abort": true, 00:24:37.532 "seek_hole": false, 00:24:37.532 "seek_data": false, 00:24:37.532 "copy": true, 00:24:37.532 "nvme_iov_md": false 00:24:37.532 }, 00:24:37.532 "memory_domains": [ 00:24:37.532 { 00:24:37.532 "dma_device_id": "system", 00:24:37.532 "dma_device_type": 1 00:24:37.532 }, 00:24:37.532 { 00:24:37.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:37.532 "dma_device_type": 2 00:24:37.532 } 00:24:37.532 ], 00:24:37.532 "driver_specific": {} 00:24:37.532 }' 00:24:37.532 02:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:37.532 02:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:37.790 02:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:37.790 02:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:37.790 02:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:37.790 02:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:37.790 02:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:37.790 02:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:37.790 02:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:37.790 02:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:37.790 02:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:38.048 02:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:38.048 02:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:38.048 02:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:38.048 02:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:38.306 02:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:38.306 "name": "BaseBdev2", 00:24:38.306 "aliases": [ 00:24:38.306 "b6b9ccea-a500-4a80-a50c-781b1a853ee0" 00:24:38.306 ], 00:24:38.306 "product_name": "Malloc disk", 00:24:38.306 "block_size": 512, 00:24:38.306 "num_blocks": 65536, 00:24:38.306 "uuid": "b6b9ccea-a500-4a80-a50c-781b1a853ee0", 00:24:38.306 "assigned_rate_limits": { 00:24:38.306 "rw_ios_per_sec": 0, 00:24:38.306 "rw_mbytes_per_sec": 0, 00:24:38.306 
"r_mbytes_per_sec": 0, 00:24:38.306 "w_mbytes_per_sec": 0 00:24:38.306 }, 00:24:38.306 "claimed": true, 00:24:38.306 "claim_type": "exclusive_write", 00:24:38.306 "zoned": false, 00:24:38.306 "supported_io_types": { 00:24:38.306 "read": true, 00:24:38.306 "write": true, 00:24:38.306 "unmap": true, 00:24:38.306 "flush": true, 00:24:38.306 "reset": true, 00:24:38.306 "nvme_admin": false, 00:24:38.306 "nvme_io": false, 00:24:38.306 "nvme_io_md": false, 00:24:38.306 "write_zeroes": true, 00:24:38.306 "zcopy": true, 00:24:38.306 "get_zone_info": false, 00:24:38.306 "zone_management": false, 00:24:38.306 "zone_append": false, 00:24:38.306 "compare": false, 00:24:38.306 "compare_and_write": false, 00:24:38.306 "abort": true, 00:24:38.306 "seek_hole": false, 00:24:38.306 "seek_data": false, 00:24:38.306 "copy": true, 00:24:38.306 "nvme_iov_md": false 00:24:38.306 }, 00:24:38.306 "memory_domains": [ 00:24:38.306 { 00:24:38.306 "dma_device_id": "system", 00:24:38.306 "dma_device_type": 1 00:24:38.306 }, 00:24:38.306 { 00:24:38.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:38.306 "dma_device_type": 2 00:24:38.306 } 00:24:38.306 ], 00:24:38.306 "driver_specific": {} 00:24:38.306 }' 00:24:38.306 02:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:38.306 02:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:38.306 02:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:38.306 02:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:38.306 02:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:38.306 02:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:38.306 02:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:38.564 02:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:38.564 02:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:38.564 02:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:38.564 02:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:38.564 02:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:38.564 02:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:38.564 02:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:38.564 02:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:38.822 02:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:38.823 "name": "BaseBdev3", 00:24:38.823 "aliases": [ 00:24:38.823 "1d0f5866-68b0-4abc-957d-37ca782f2a7d" 00:24:38.823 ], 00:24:38.823 "product_name": "Malloc disk", 00:24:38.823 "block_size": 512, 00:24:38.823 "num_blocks": 65536, 00:24:38.823 "uuid": "1d0f5866-68b0-4abc-957d-37ca782f2a7d", 00:24:38.823 "assigned_rate_limits": { 00:24:38.823 "rw_ios_per_sec": 0, 00:24:38.823 "rw_mbytes_per_sec": 0, 00:24:38.823 "r_mbytes_per_sec": 0, 00:24:38.823 "w_mbytes_per_sec": 0 00:24:38.823 }, 00:24:38.823 "claimed": true, 00:24:38.823 "claim_type": "exclusive_write", 00:24:38.823 "zoned": false, 
00:24:38.823 "supported_io_types": { 00:24:38.823 "read": true, 00:24:38.823 "write": true, 00:24:38.823 "unmap": true, 00:24:38.823 "flush": true, 00:24:38.823 "reset": true, 00:24:38.823 "nvme_admin": false, 00:24:38.823 "nvme_io": false, 00:24:38.823 "nvme_io_md": false, 00:24:38.823 "write_zeroes": true, 00:24:38.823 "zcopy": true, 00:24:38.823 "get_zone_info": false, 00:24:38.823 "zone_management": false, 00:24:38.823 "zone_append": false, 00:24:38.823 "compare": false, 00:24:38.823 "compare_and_write": false, 00:24:38.823 "abort": true, 00:24:38.823 "seek_hole": false, 00:24:38.823 "seek_data": false, 00:24:38.823 "copy": true, 00:24:38.823 "nvme_iov_md": false 00:24:38.823 }, 00:24:38.823 "memory_domains": [ 00:24:38.823 { 00:24:38.823 "dma_device_id": "system", 00:24:38.823 "dma_device_type": 1 00:24:38.823 }, 00:24:38.823 { 00:24:38.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:38.823 "dma_device_type": 2 00:24:38.823 } 00:24:38.823 ], 00:24:38.823 "driver_specific": {} 00:24:38.823 }' 00:24:38.823 02:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:38.823 02:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:38.823 02:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:38.823 02:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:39.081 02:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:39.081 02:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:39.081 02:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:39.081 02:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:39.081 02:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:39.081 02:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:39.081 02:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:39.081 02:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:39.081 02:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:39.081 02:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:39.081 02:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:39.340 02:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:39.340 "name": "BaseBdev4", 00:24:39.340 "aliases": [ 00:24:39.340 "28fdaf4b-e1aa-4626-8e31-d822bf7acab1" 00:24:39.340 ], 00:24:39.340 "product_name": "Malloc disk", 00:24:39.340 "block_size": 512, 00:24:39.340 "num_blocks": 65536, 00:24:39.340 "uuid": "28fdaf4b-e1aa-4626-8e31-d822bf7acab1", 00:24:39.340 "assigned_rate_limits": { 00:24:39.340 "rw_ios_per_sec": 0, 00:24:39.340 "rw_mbytes_per_sec": 0, 00:24:39.340 "r_mbytes_per_sec": 0, 00:24:39.340 "w_mbytes_per_sec": 0 00:24:39.340 }, 00:24:39.340 "claimed": true, 00:24:39.340 "claim_type": "exclusive_write", 00:24:39.340 "zoned": false, 00:24:39.340 "supported_io_types": { 00:24:39.340 "read": true, 00:24:39.340 "write": true, 00:24:39.340 "unmap": true, 00:24:39.340 "flush": true, 00:24:39.340 "reset": true, 
00:24:39.340 "nvme_admin": false, 00:24:39.340 "nvme_io": false, 00:24:39.340 "nvme_io_md": false, 00:24:39.340 "write_zeroes": true, 00:24:39.340 "zcopy": true, 00:24:39.340 "get_zone_info": false, 00:24:39.340 "zone_management": false, 00:24:39.340 "zone_append": false, 00:24:39.340 "compare": false, 00:24:39.340 "compare_and_write": false, 00:24:39.340 "abort": true, 00:24:39.340 "seek_hole": false, 00:24:39.340 "seek_data": false, 00:24:39.340 "copy": true, 00:24:39.340 "nvme_iov_md": false 00:24:39.340 }, 00:24:39.340 "memory_domains": [ 00:24:39.340 { 00:24:39.340 "dma_device_id": "system", 00:24:39.340 "dma_device_type": 1 00:24:39.340 }, 00:24:39.340 { 00:24:39.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:39.340 "dma_device_type": 2 00:24:39.340 } 00:24:39.340 ], 00:24:39.340 "driver_specific": {} 00:24:39.340 }' 00:24:39.340 02:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:39.340 02:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:39.598 02:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:39.598 02:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:39.598 02:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:39.598 02:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:39.598 02:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:39.598 02:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:39.857 02:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:39.857 02:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:39.857 02:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:39.857 02:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:39.857 02:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:40.115 [2024-07-11 02:30:30.407946] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:40.115 02:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:24:40.115 02:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:24:40.115 02:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:40.115 02:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:24:40.115 02:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:24:40.115 02:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:24:40.115 02:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:40.115 02:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:40.116 02:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:40.116 02:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:40.116 02:30:30 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:40.116 02:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:40.116 02:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:40.116 02:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:40.116 02:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:40.116 02:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.116 02:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:40.374 02:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:40.374 "name": "Existed_Raid", 00:24:40.374 "uuid": "f52e63d5-79ea-43e2-9ab7-703e63cf8e16", 00:24:40.374 "strip_size_kb": 0, 00:24:40.374 "state": "online", 00:24:40.374 "raid_level": "raid1", 00:24:40.374 "superblock": false, 00:24:40.374 "num_base_bdevs": 4, 00:24:40.374 "num_base_bdevs_discovered": 3, 00:24:40.374 "num_base_bdevs_operational": 3, 00:24:40.374 "base_bdevs_list": [ 00:24:40.374 { 00:24:40.374 "name": null, 00:24:40.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.374 "is_configured": false, 00:24:40.374 "data_offset": 0, 00:24:40.374 "data_size": 65536 00:24:40.374 }, 00:24:40.374 { 00:24:40.374 "name": "BaseBdev2", 00:24:40.374 "uuid": "b6b9ccea-a500-4a80-a50c-781b1a853ee0", 00:24:40.374 "is_configured": true, 00:24:40.374 "data_offset": 0, 00:24:40.375 "data_size": 65536 00:24:40.375 }, 00:24:40.375 { 00:24:40.375 "name": "BaseBdev3", 00:24:40.375 "uuid": "1d0f5866-68b0-4abc-957d-37ca782f2a7d", 00:24:40.375 "is_configured": true, 00:24:40.375 "data_offset": 0, 00:24:40.375 "data_size": 65536 00:24:40.375 }, 00:24:40.375 { 00:24:40.375 "name": "BaseBdev4", 00:24:40.375 "uuid": "28fdaf4b-e1aa-4626-8e31-d822bf7acab1", 00:24:40.375 "is_configured": true, 00:24:40.375 "data_offset": 0, 00:24:40.375 "data_size": 65536 00:24:40.375 } 00:24:40.375 ] 00:24:40.375 }' 00:24:40.375 02:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:40.375 02:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.940 02:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:24:40.940 02:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:40.940 02:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.940 02:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:41.198 02:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:41.198 02:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:41.198 02:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:41.764 [2024-07-11 02:30:31.977954] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:41.764 02:30:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:41.764 02:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:41.764 02:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.764 02:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:42.022 02:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:42.022 02:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:42.022 02:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:42.588 [2024-07-11 02:30:32.750384] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:42.588 02:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:42.588 02:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:42.588 02:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.588 02:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:43.155 02:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:43.155 02:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:43.155 02:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:43.155 [2024-07-11 02:30:33.536603] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:43.155 [2024-07-11 02:30:33.536685] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:43.155 [2024-07-11 02:30:33.547270] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:43.155 [2024-07-11 02:30:33.547304] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:43.155 [2024-07-11 02:30:33.547323] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1940f90 name Existed_Raid, state offline 00:24:43.155 02:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:43.155 02:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:43.155 02:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:43.155 02:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:24:43.721 02:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:24:43.721 02:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:24:43.721 02:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:24:43.721 02:30:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:24:43.721 02:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:43.721 02:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:43.979 BaseBdev2 00:24:43.979 02:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:24:43.979 02:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:24:43.979 02:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:43.979 02:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:43.979 02:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:43.979 02:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:43.979 02:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:44.547 02:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:45.115 [ 00:24:45.116 { 00:24:45.116 "name": "BaseBdev2", 00:24:45.116 "aliases": [ 00:24:45.116 "55e5a8d2-281a-4add-be1f-c26ad0c1e0af" 00:24:45.116 ], 00:24:45.116 "product_name": "Malloc disk", 00:24:45.116 "block_size": 512, 00:24:45.116 "num_blocks": 65536, 00:24:45.116 "uuid": "55e5a8d2-281a-4add-be1f-c26ad0c1e0af", 00:24:45.116 "assigned_rate_limits": { 00:24:45.116 "rw_ios_per_sec": 0, 00:24:45.116 "rw_mbytes_per_sec": 0, 00:24:45.116 "r_mbytes_per_sec": 0, 00:24:45.116 "w_mbytes_per_sec": 0 00:24:45.116 }, 00:24:45.116 "claimed": false, 00:24:45.116 "zoned": false, 00:24:45.116 "supported_io_types": { 00:24:45.116 "read": true, 00:24:45.116 "write": true, 00:24:45.116 "unmap": true, 00:24:45.116 "flush": true, 00:24:45.116 "reset": true, 00:24:45.116 "nvme_admin": false, 00:24:45.116 "nvme_io": false, 00:24:45.116 "nvme_io_md": false, 00:24:45.116 "write_zeroes": true, 00:24:45.116 "zcopy": true, 00:24:45.116 "get_zone_info": false, 00:24:45.116 "zone_management": false, 00:24:45.116 "zone_append": false, 00:24:45.116 "compare": false, 00:24:45.116 "compare_and_write": false, 00:24:45.116 "abort": true, 00:24:45.116 "seek_hole": false, 00:24:45.116 "seek_data": false, 00:24:45.116 "copy": true, 00:24:45.116 "nvme_iov_md": false 00:24:45.116 }, 00:24:45.116 "memory_domains": [ 00:24:45.116 { 00:24:45.116 "dma_device_id": "system", 00:24:45.116 "dma_device_type": 1 00:24:45.116 }, 00:24:45.116 { 00:24:45.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:45.116 "dma_device_type": 2 00:24:45.116 } 00:24:45.116 ], 00:24:45.116 "driver_specific": {} 00:24:45.116 } 00:24:45.116 ] 00:24:45.116 02:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:45.116 02:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:45.116 02:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:45.116 02:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:45.375 BaseBdev3 00:24:45.375 02:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:24:45.375 02:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:24:45.375 02:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:45.375 02:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:45.375 02:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:45.375 02:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:45.375 02:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:45.942 02:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:45.942 [ 00:24:45.942 { 00:24:45.942 "name": "BaseBdev3", 00:24:45.942 "aliases": [ 00:24:45.942 "37a838bf-972a-4b4f-833c-298427e2021e" 00:24:45.942 ], 00:24:45.942 "product_name": "Malloc disk", 00:24:45.942 "block_size": 512, 00:24:45.942 "num_blocks": 65536, 00:24:45.942 "uuid": "37a838bf-972a-4b4f-833c-298427e2021e", 00:24:45.942 "assigned_rate_limits": { 00:24:45.942 "rw_ios_per_sec": 0, 00:24:45.942 "rw_mbytes_per_sec": 0, 00:24:45.942 "r_mbytes_per_sec": 0, 00:24:45.942 "w_mbytes_per_sec": 0 00:24:45.942 }, 00:24:45.942 "claimed": false, 00:24:45.942 "zoned": false, 00:24:45.942 "supported_io_types": { 00:24:45.942 "read": true, 00:24:45.942 "write": true, 00:24:45.942 "unmap": true, 00:24:45.942 "flush": true, 00:24:45.942 "reset": true, 00:24:45.942 "nvme_admin": false, 00:24:45.942 "nvme_io": false, 00:24:45.942 "nvme_io_md": false, 00:24:45.942 "write_zeroes": true, 00:24:45.942 "zcopy": true, 00:24:45.942 "get_zone_info": false, 00:24:45.942 "zone_management": false, 00:24:45.942 "zone_append": false, 00:24:45.942 "compare": false, 00:24:45.942 "compare_and_write": false, 00:24:45.942 "abort": true, 00:24:45.942 "seek_hole": false, 00:24:45.942 "seek_data": false, 00:24:45.942 "copy": true, 00:24:45.942 "nvme_iov_md": false 00:24:45.942 }, 00:24:45.942 "memory_domains": [ 00:24:45.942 { 00:24:45.942 "dma_device_id": "system", 00:24:45.942 "dma_device_type": 1 00:24:45.942 }, 00:24:45.942 { 00:24:45.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:45.942 "dma_device_type": 2 00:24:45.942 } 00:24:45.942 ], 00:24:45.942 "driver_specific": {} 00:24:45.942 } 00:24:45.942 ] 00:24:46.201 02:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:46.201 02:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:46.201 02:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:46.201 02:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:46.460 BaseBdev4 00:24:46.718 02:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:24:46.719 02:30:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:24:46.719 02:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:46.719 02:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:46.719 02:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:46.719 02:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:46.719 02:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:47.287 02:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:47.287 [ 00:24:47.287 { 00:24:47.287 "name": "BaseBdev4", 00:24:47.287 "aliases": [ 00:24:47.287 "3a6ee363-d928-4d76-914c-4256b3fe50c0" 00:24:47.287 ], 00:24:47.287 "product_name": "Malloc disk", 00:24:47.287 "block_size": 512, 00:24:47.287 "num_blocks": 65536, 00:24:47.287 "uuid": "3a6ee363-d928-4d76-914c-4256b3fe50c0", 00:24:47.287 "assigned_rate_limits": { 00:24:47.287 "rw_ios_per_sec": 0, 00:24:47.287 "rw_mbytes_per_sec": 0, 00:24:47.287 "r_mbytes_per_sec": 0, 00:24:47.287 "w_mbytes_per_sec": 0 00:24:47.287 }, 00:24:47.287 "claimed": false, 00:24:47.287 "zoned": false, 00:24:47.287 "supported_io_types": { 00:24:47.287 "read": true, 00:24:47.287 "write": true, 00:24:47.287 "unmap": true, 00:24:47.287 "flush": true, 00:24:47.287 "reset": true, 00:24:47.287 "nvme_admin": false, 00:24:47.287 "nvme_io": false, 00:24:47.287 "nvme_io_md": false, 00:24:47.287 "write_zeroes": true, 00:24:47.287 "zcopy": true, 00:24:47.287 "get_zone_info": false, 00:24:47.287 "zone_management": false, 00:24:47.287 "zone_append": false, 00:24:47.287 "compare": false, 00:24:47.287 "compare_and_write": false, 00:24:47.287 "abort": true, 00:24:47.287 "seek_hole": false, 00:24:47.287 "seek_data": false, 00:24:47.287 "copy": true, 00:24:47.287 "nvme_iov_md": false 00:24:47.287 }, 00:24:47.287 "memory_domains": [ 00:24:47.287 { 00:24:47.287 "dma_device_id": "system", 00:24:47.287 "dma_device_type": 1 00:24:47.287 }, 00:24:47.287 { 00:24:47.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:47.287 "dma_device_type": 2 00:24:47.287 } 00:24:47.287 ], 00:24:47.287 "driver_specific": {} 00:24:47.287 } 00:24:47.287 ] 00:24:47.287 02:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:47.287 02:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:47.287 02:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:47.287 02:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:47.545 [2024-07-11 02:30:37.896712] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:47.546 [2024-07-11 02:30:37.896755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:47.546 [2024-07-11 02:30:37.896781] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
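[annotation] The trace above captures the core of the configuring-state check: three 32 MiB malloc bdevs (512-byte blocks) are created and claimed by bdev_raid_create, while the fourth member named on the command line, BaseBdev1, does not exist yet, so Existed_Raid is registered but held in the "configuring" state instead of going "online". A minimal sketch of the same RPC sequence, run by hand against an SPDK target whose RPC socket sits at /var/tmp/spdk-raid.sock as in this run (starting the target itself is assumed, and is not shown):

  # create three of the four malloc base bdevs (32 MiB, 512-byte blocks)
  for b in BaseBdev2 BaseBdev3 BaseBdev4; do
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b "$b"
  done
  # request a raid1 over all four names; BaseBdev1 is still missing, so the
  # raid bdev is created but left in the "configuring" state
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # same jq filter the test uses; this should print "configuring"
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid").state'

Once a bdev matching a missing member name later appears (as BaseBdev1 does below), the raid module claims it on creation and the discovered count rises without any further raid RPCs.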
00:24:47.546 [2024-07-11 02:30:37.898062] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:47.546 [2024-07-11 02:30:37.898104] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:47.546 02:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:47.546 02:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:47.546 02:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:47.546 02:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:47.546 02:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:47.546 02:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:47.546 02:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:47.546 02:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:47.546 02:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:47.546 02:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:47.546 02:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.546 02:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:47.805 02:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:47.805 "name": "Existed_Raid", 00:24:47.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.805 "strip_size_kb": 0, 00:24:47.805 "state": "configuring", 00:24:47.805 "raid_level": "raid1", 00:24:47.805 "superblock": false, 00:24:47.805 "num_base_bdevs": 4, 00:24:47.805 "num_base_bdevs_discovered": 3, 00:24:47.805 "num_base_bdevs_operational": 4, 00:24:47.805 "base_bdevs_list": [ 00:24:47.805 { 00:24:47.805 "name": "BaseBdev1", 00:24:47.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.805 "is_configured": false, 00:24:47.805 "data_offset": 0, 00:24:47.805 "data_size": 0 00:24:47.805 }, 00:24:47.805 { 00:24:47.805 "name": "BaseBdev2", 00:24:47.805 "uuid": "55e5a8d2-281a-4add-be1f-c26ad0c1e0af", 00:24:47.805 "is_configured": true, 00:24:47.805 "data_offset": 0, 00:24:47.805 "data_size": 65536 00:24:47.805 }, 00:24:47.805 { 00:24:47.805 "name": "BaseBdev3", 00:24:47.805 "uuid": "37a838bf-972a-4b4f-833c-298427e2021e", 00:24:47.805 "is_configured": true, 00:24:47.805 "data_offset": 0, 00:24:47.805 "data_size": 65536 00:24:47.805 }, 00:24:47.805 { 00:24:47.805 "name": "BaseBdev4", 00:24:47.805 "uuid": "3a6ee363-d928-4d76-914c-4256b3fe50c0", 00:24:47.805 "is_configured": true, 00:24:47.805 "data_offset": 0, 00:24:47.805 "data_size": 65536 00:24:47.805 } 00:24:47.805 ] 00:24:47.805 }' 00:24:47.805 02:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:47.805 02:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:48.373 02:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_remove_base_bdev BaseBdev2 00:24:48.632 [2024-07-11 02:30:38.879303] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:48.632 02:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:48.632 02:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:48.632 02:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:48.632 02:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:48.632 02:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:48.632 02:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:48.632 02:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:48.632 02:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:48.632 02:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:48.632 02:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:48.632 02:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.632 02:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:48.892 02:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:48.892 "name": "Existed_Raid", 00:24:48.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.892 "strip_size_kb": 0, 00:24:48.892 "state": "configuring", 00:24:48.892 "raid_level": "raid1", 00:24:48.892 "superblock": false, 00:24:48.892 "num_base_bdevs": 4, 00:24:48.892 "num_base_bdevs_discovered": 2, 00:24:48.892 "num_base_bdevs_operational": 4, 00:24:48.892 "base_bdevs_list": [ 00:24:48.892 { 00:24:48.892 "name": "BaseBdev1", 00:24:48.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.892 "is_configured": false, 00:24:48.892 "data_offset": 0, 00:24:48.892 "data_size": 0 00:24:48.892 }, 00:24:48.892 { 00:24:48.892 "name": null, 00:24:48.892 "uuid": "55e5a8d2-281a-4add-be1f-c26ad0c1e0af", 00:24:48.892 "is_configured": false, 00:24:48.892 "data_offset": 0, 00:24:48.892 "data_size": 65536 00:24:48.892 }, 00:24:48.892 { 00:24:48.892 "name": "BaseBdev3", 00:24:48.892 "uuid": "37a838bf-972a-4b4f-833c-298427e2021e", 00:24:48.892 "is_configured": true, 00:24:48.892 "data_offset": 0, 00:24:48.892 "data_size": 65536 00:24:48.892 }, 00:24:48.892 { 00:24:48.892 "name": "BaseBdev4", 00:24:48.892 "uuid": "3a6ee363-d928-4d76-914c-4256b3fe50c0", 00:24:48.892 "is_configured": true, 00:24:48.892 "data_offset": 0, 00:24:48.892 "data_size": 65536 00:24:48.892 } 00:24:48.892 ] 00:24:48.892 }' 00:24:48.892 02:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:48.892 02:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.510 02:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.510 02:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:24:49.839 02:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:24:49.839 02:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:49.839 [2024-07-11 02:30:40.182038] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:49.839 BaseBdev1 00:24:49.839 02:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:24:49.839 02:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:24:49.839 02:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:49.839 02:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:49.839 02:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:49.839 02:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:49.839 02:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:50.097 02:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:50.356 [ 00:24:50.356 { 00:24:50.356 "name": "BaseBdev1", 00:24:50.356 "aliases": [ 00:24:50.356 "fb6762f1-5a61-4fbe-b213-4f08ac12e780" 00:24:50.356 ], 00:24:50.356 "product_name": "Malloc disk", 00:24:50.356 "block_size": 512, 00:24:50.356 "num_blocks": 65536, 00:24:50.356 "uuid": "fb6762f1-5a61-4fbe-b213-4f08ac12e780", 00:24:50.356 "assigned_rate_limits": { 00:24:50.356 "rw_ios_per_sec": 0, 00:24:50.356 "rw_mbytes_per_sec": 0, 00:24:50.356 "r_mbytes_per_sec": 0, 00:24:50.356 "w_mbytes_per_sec": 0 00:24:50.356 }, 00:24:50.356 "claimed": true, 00:24:50.356 "claim_type": "exclusive_write", 00:24:50.356 "zoned": false, 00:24:50.356 "supported_io_types": { 00:24:50.356 "read": true, 00:24:50.356 "write": true, 00:24:50.356 "unmap": true, 00:24:50.356 "flush": true, 00:24:50.356 "reset": true, 00:24:50.356 "nvme_admin": false, 00:24:50.356 "nvme_io": false, 00:24:50.356 "nvme_io_md": false, 00:24:50.357 "write_zeroes": true, 00:24:50.357 "zcopy": true, 00:24:50.357 "get_zone_info": false, 00:24:50.357 "zone_management": false, 00:24:50.357 "zone_append": false, 00:24:50.357 "compare": false, 00:24:50.357 "compare_and_write": false, 00:24:50.357 "abort": true, 00:24:50.357 "seek_hole": false, 00:24:50.357 "seek_data": false, 00:24:50.357 "copy": true, 00:24:50.357 "nvme_iov_md": false 00:24:50.357 }, 00:24:50.357 "memory_domains": [ 00:24:50.357 { 00:24:50.357 "dma_device_id": "system", 00:24:50.357 "dma_device_type": 1 00:24:50.357 }, 00:24:50.357 { 00:24:50.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:50.357 "dma_device_type": 2 00:24:50.357 } 00:24:50.357 ], 00:24:50.357 "driver_specific": {} 00:24:50.357 } 00:24:50.357 ] 00:24:50.357 02:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:50.357 02:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:50.357 02:30:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:50.357 02:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:50.357 02:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:50.357 02:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:50.357 02:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:50.357 02:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:50.357 02:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:50.357 02:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:50.357 02:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:50.357 02:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:50.357 02:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:50.615 02:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:50.615 "name": "Existed_Raid", 00:24:50.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.615 "strip_size_kb": 0, 00:24:50.615 "state": "configuring", 00:24:50.615 "raid_level": "raid1", 00:24:50.615 "superblock": false, 00:24:50.615 "num_base_bdevs": 4, 00:24:50.615 "num_base_bdevs_discovered": 3, 00:24:50.615 "num_base_bdevs_operational": 4, 00:24:50.616 "base_bdevs_list": [ 00:24:50.616 { 00:24:50.616 "name": "BaseBdev1", 00:24:50.616 "uuid": "fb6762f1-5a61-4fbe-b213-4f08ac12e780", 00:24:50.616 "is_configured": true, 00:24:50.616 "data_offset": 0, 00:24:50.616 "data_size": 65536 00:24:50.616 }, 00:24:50.616 { 00:24:50.616 "name": null, 00:24:50.616 "uuid": "55e5a8d2-281a-4add-be1f-c26ad0c1e0af", 00:24:50.616 "is_configured": false, 00:24:50.616 "data_offset": 0, 00:24:50.616 "data_size": 65536 00:24:50.616 }, 00:24:50.616 { 00:24:50.616 "name": "BaseBdev3", 00:24:50.616 "uuid": "37a838bf-972a-4b4f-833c-298427e2021e", 00:24:50.616 "is_configured": true, 00:24:50.616 "data_offset": 0, 00:24:50.616 "data_size": 65536 00:24:50.616 }, 00:24:50.616 { 00:24:50.616 "name": "BaseBdev4", 00:24:50.616 "uuid": "3a6ee363-d928-4d76-914c-4256b3fe50c0", 00:24:50.616 "is_configured": true, 00:24:50.616 "data_offset": 0, 00:24:50.616 "data_size": 65536 00:24:50.616 } 00:24:50.616 ] 00:24:50.616 }' 00:24:50.616 02:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:50.616 02:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:51.183 02:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.183 02:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:51.442 02:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:24:51.442 02:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:24:51.701 [2024-07-11 02:30:42.006925] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:51.701 02:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:51.701 02:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:51.701 02:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:51.701 02:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:51.701 02:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:51.701 02:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:51.701 02:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:51.701 02:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:51.701 02:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:51.701 02:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:51.701 02:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.701 02:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:51.961 02:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:51.961 "name": "Existed_Raid", 00:24:51.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:51.961 "strip_size_kb": 0, 00:24:51.961 "state": "configuring", 00:24:51.961 "raid_level": "raid1", 00:24:51.961 "superblock": false, 00:24:51.961 "num_base_bdevs": 4, 00:24:51.961 "num_base_bdevs_discovered": 2, 00:24:51.961 "num_base_bdevs_operational": 4, 00:24:51.961 "base_bdevs_list": [ 00:24:51.961 { 00:24:51.961 "name": "BaseBdev1", 00:24:51.961 "uuid": "fb6762f1-5a61-4fbe-b213-4f08ac12e780", 00:24:51.961 "is_configured": true, 00:24:51.961 "data_offset": 0, 00:24:51.961 "data_size": 65536 00:24:51.961 }, 00:24:51.961 { 00:24:51.961 "name": null, 00:24:51.961 "uuid": "55e5a8d2-281a-4add-be1f-c26ad0c1e0af", 00:24:51.961 "is_configured": false, 00:24:51.961 "data_offset": 0, 00:24:51.961 "data_size": 65536 00:24:51.961 }, 00:24:51.961 { 00:24:51.961 "name": null, 00:24:51.961 "uuid": "37a838bf-972a-4b4f-833c-298427e2021e", 00:24:51.961 "is_configured": false, 00:24:51.961 "data_offset": 0, 00:24:51.961 "data_size": 65536 00:24:51.961 }, 00:24:51.961 { 00:24:51.961 "name": "BaseBdev4", 00:24:51.961 "uuid": "3a6ee363-d928-4d76-914c-4256b3fe50c0", 00:24:51.961 "is_configured": true, 00:24:51.961 "data_offset": 0, 00:24:51.961 "data_size": 65536 00:24:51.961 } 00:24:51.961 ] 00:24:51.961 }' 00:24:51.961 02:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:51.961 02:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:52.529 02:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:52.529 02:30:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:52.789 02:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:24:52.789 02:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:53.048 [2024-07-11 02:30:43.350524] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:53.048 02:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:53.048 02:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:53.048 02:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:53.048 02:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:53.048 02:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:53.048 02:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:53.048 02:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:53.048 02:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:53.048 02:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:53.048 02:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:53.048 02:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.048 02:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:53.307 02:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:53.307 "name": "Existed_Raid", 00:24:53.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.307 "strip_size_kb": 0, 00:24:53.307 "state": "configuring", 00:24:53.307 "raid_level": "raid1", 00:24:53.307 "superblock": false, 00:24:53.307 "num_base_bdevs": 4, 00:24:53.307 "num_base_bdevs_discovered": 3, 00:24:53.307 "num_base_bdevs_operational": 4, 00:24:53.307 "base_bdevs_list": [ 00:24:53.307 { 00:24:53.307 "name": "BaseBdev1", 00:24:53.307 "uuid": "fb6762f1-5a61-4fbe-b213-4f08ac12e780", 00:24:53.307 "is_configured": true, 00:24:53.307 "data_offset": 0, 00:24:53.307 "data_size": 65536 00:24:53.307 }, 00:24:53.307 { 00:24:53.307 "name": null, 00:24:53.307 "uuid": "55e5a8d2-281a-4add-be1f-c26ad0c1e0af", 00:24:53.307 "is_configured": false, 00:24:53.307 "data_offset": 0, 00:24:53.307 "data_size": 65536 00:24:53.307 }, 00:24:53.307 { 00:24:53.307 "name": "BaseBdev3", 00:24:53.307 "uuid": "37a838bf-972a-4b4f-833c-298427e2021e", 00:24:53.307 "is_configured": true, 00:24:53.307 "data_offset": 0, 00:24:53.307 "data_size": 65536 00:24:53.307 }, 00:24:53.307 { 00:24:53.307 "name": "BaseBdev4", 00:24:53.307 "uuid": "3a6ee363-d928-4d76-914c-4256b3fe50c0", 00:24:53.307 "is_configured": true, 00:24:53.307 "data_offset": 0, 00:24:53.307 "data_size": 65536 00:24:53.307 } 00:24:53.307 ] 00:24:53.307 }' 00:24:53.307 02:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
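[annotation] From here the test exercises membership changes on the still-configuring array: a member is detached with bdev_raid_remove_base_bdev, num_base_bdevs_discovered drops below the four operational members, and the same bdev is re-attached with bdev_raid_add_base_bdev, after which the count recovers. A condensed sketch of that remove/re-add cycle, under the same socket and naming assumptions as the sketch above:

  # detach a member; the array stays "configuring" with one fewer
  # discovered base bdev
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3
  # re-attach it to the named raid; the vacated slot is claimed again
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3
  # count the discovered members, as the test's jq probes do
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq '.[0].num_base_bdevs_discovered'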
00:24:53.307 02:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.874 02:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.874 02:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:54.134 02:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:24:54.134 02:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:54.393 [2024-07-11 02:30:44.577783] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:54.393 02:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:54.393 02:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:54.393 02:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:54.393 02:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:54.393 02:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:54.393 02:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:54.393 02:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:54.393 02:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:54.393 02:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:54.393 02:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:54.393 02:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.393 02:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:54.652 02:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:54.652 "name": "Existed_Raid", 00:24:54.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.652 "strip_size_kb": 0, 00:24:54.652 "state": "configuring", 00:24:54.652 "raid_level": "raid1", 00:24:54.652 "superblock": false, 00:24:54.652 "num_base_bdevs": 4, 00:24:54.652 "num_base_bdevs_discovered": 2, 00:24:54.652 "num_base_bdevs_operational": 4, 00:24:54.652 "base_bdevs_list": [ 00:24:54.652 { 00:24:54.652 "name": null, 00:24:54.652 "uuid": "fb6762f1-5a61-4fbe-b213-4f08ac12e780", 00:24:54.652 "is_configured": false, 00:24:54.652 "data_offset": 0, 00:24:54.652 "data_size": 65536 00:24:54.652 }, 00:24:54.652 { 00:24:54.652 "name": null, 00:24:54.652 "uuid": "55e5a8d2-281a-4add-be1f-c26ad0c1e0af", 00:24:54.652 "is_configured": false, 00:24:54.652 "data_offset": 0, 00:24:54.652 "data_size": 65536 00:24:54.652 }, 00:24:54.652 { 00:24:54.652 "name": "BaseBdev3", 00:24:54.652 "uuid": "37a838bf-972a-4b4f-833c-298427e2021e", 00:24:54.652 "is_configured": true, 00:24:54.652 "data_offset": 0, 00:24:54.652 "data_size": 65536 00:24:54.652 }, 00:24:54.652 { 00:24:54.652 "name": 
"BaseBdev4", 00:24:54.652 "uuid": "3a6ee363-d928-4d76-914c-4256b3fe50c0", 00:24:54.652 "is_configured": true, 00:24:54.652 "data_offset": 0, 00:24:54.652 "data_size": 65536 00:24:54.652 } 00:24:54.652 ] 00:24:54.652 }' 00:24:54.652 02:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:54.652 02:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:55.220 02:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.220 02:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:55.479 02:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:24:55.479 02:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:55.479 [2024-07-11 02:30:45.879609] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:55.739 02:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:55.739 02:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:55.739 02:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:55.739 02:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:55.739 02:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:55.739 02:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:55.740 02:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:55.740 02:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:55.740 02:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:55.740 02:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:55.740 02:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.740 02:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:55.740 02:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:55.740 "name": "Existed_Raid", 00:24:55.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.740 "strip_size_kb": 0, 00:24:55.740 "state": "configuring", 00:24:55.740 "raid_level": "raid1", 00:24:55.740 "superblock": false, 00:24:55.740 "num_base_bdevs": 4, 00:24:55.740 "num_base_bdevs_discovered": 3, 00:24:55.740 "num_base_bdevs_operational": 4, 00:24:55.740 "base_bdevs_list": [ 00:24:55.740 { 00:24:55.740 "name": null, 00:24:55.740 "uuid": "fb6762f1-5a61-4fbe-b213-4f08ac12e780", 00:24:55.740 "is_configured": false, 00:24:55.740 "data_offset": 0, 00:24:55.740 "data_size": 65536 00:24:55.740 }, 00:24:55.740 { 00:24:55.740 "name": "BaseBdev2", 00:24:55.740 "uuid": "55e5a8d2-281a-4add-be1f-c26ad0c1e0af", 00:24:55.740 
"is_configured": true, 00:24:55.740 "data_offset": 0, 00:24:55.740 "data_size": 65536 00:24:55.740 }, 00:24:55.740 { 00:24:55.740 "name": "BaseBdev3", 00:24:55.740 "uuid": "37a838bf-972a-4b4f-833c-298427e2021e", 00:24:55.740 "is_configured": true, 00:24:55.740 "data_offset": 0, 00:24:55.740 "data_size": 65536 00:24:55.740 }, 00:24:55.740 { 00:24:55.740 "name": "BaseBdev4", 00:24:55.740 "uuid": "3a6ee363-d928-4d76-914c-4256b3fe50c0", 00:24:55.740 "is_configured": true, 00:24:55.740 "data_offset": 0, 00:24:55.740 "data_size": 65536 00:24:55.740 } 00:24:55.740 ] 00:24:55.740 }' 00:24:55.740 02:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:55.740 02:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:56.676 02:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.676 02:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:56.676 02:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:24:56.676 02:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.676 02:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:56.935 02:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u fb6762f1-5a61-4fbe-b213-4f08ac12e780 00:24:57.195 [2024-07-11 02:30:47.535232] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:57.195 [2024-07-11 02:30:47.535272] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x17942f0 00:24:57.195 [2024-07-11 02:30:47.535280] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:24:57.195 [2024-07-11 02:30:47.535471] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17a6090 00:24:57.195 [2024-07-11 02:30:47.535593] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x17942f0 00:24:57.195 [2024-07-11 02:30:47.535603] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x17942f0 00:24:57.195 [2024-07-11 02:30:47.535769] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:57.195 NewBaseBdev 00:24:57.195 02:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:24:57.195 02:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:24:57.195 02:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:57.195 02:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:57.195 02:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:57.195 02:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:57.195 02:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:57.454 02:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:57.714 [ 00:24:57.714 { 00:24:57.714 "name": "NewBaseBdev", 00:24:57.714 "aliases": [ 00:24:57.714 "fb6762f1-5a61-4fbe-b213-4f08ac12e780" 00:24:57.714 ], 00:24:57.714 "product_name": "Malloc disk", 00:24:57.714 "block_size": 512, 00:24:57.714 "num_blocks": 65536, 00:24:57.714 "uuid": "fb6762f1-5a61-4fbe-b213-4f08ac12e780", 00:24:57.714 "assigned_rate_limits": { 00:24:57.714 "rw_ios_per_sec": 0, 00:24:57.714 "rw_mbytes_per_sec": 0, 00:24:57.714 "r_mbytes_per_sec": 0, 00:24:57.714 "w_mbytes_per_sec": 0 00:24:57.714 }, 00:24:57.714 "claimed": true, 00:24:57.714 "claim_type": "exclusive_write", 00:24:57.714 "zoned": false, 00:24:57.714 "supported_io_types": { 00:24:57.714 "read": true, 00:24:57.714 "write": true, 00:24:57.714 "unmap": true, 00:24:57.714 "flush": true, 00:24:57.714 "reset": true, 00:24:57.714 "nvme_admin": false, 00:24:57.714 "nvme_io": false, 00:24:57.714 "nvme_io_md": false, 00:24:57.714 "write_zeroes": true, 00:24:57.714 "zcopy": true, 00:24:57.714 "get_zone_info": false, 00:24:57.714 "zone_management": false, 00:24:57.714 "zone_append": false, 00:24:57.714 "compare": false, 00:24:57.714 "compare_and_write": false, 00:24:57.714 "abort": true, 00:24:57.714 "seek_hole": false, 00:24:57.714 "seek_data": false, 00:24:57.714 "copy": true, 00:24:57.714 "nvme_iov_md": false 00:24:57.714 }, 00:24:57.714 "memory_domains": [ 00:24:57.714 { 00:24:57.714 "dma_device_id": "system", 00:24:57.714 "dma_device_type": 1 00:24:57.714 }, 00:24:57.714 { 00:24:57.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:57.714 "dma_device_type": 2 00:24:57.714 } 00:24:57.714 ], 00:24:57.714 "driver_specific": {} 00:24:57.714 } 00:24:57.714 ] 00:24:57.714 02:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:57.714 02:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:24:57.714 02:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:57.714 02:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:57.714 02:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:57.714 02:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:57.714 02:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:57.714 02:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:57.714 02:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:57.714 02:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:57.714 02:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:57.714 02:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.714 02:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:57.974 02:30:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:57.974 "name": "Existed_Raid", 00:24:57.974 "uuid": "f8f1fd3e-f25b-4009-9f5f-7b93720fbe0f", 00:24:57.974 "strip_size_kb": 0, 00:24:57.974 "state": "online", 00:24:57.974 "raid_level": "raid1", 00:24:57.974 "superblock": false, 00:24:57.974 "num_base_bdevs": 4, 00:24:57.974 "num_base_bdevs_discovered": 4, 00:24:57.974 "num_base_bdevs_operational": 4, 00:24:57.974 "base_bdevs_list": [ 00:24:57.974 { 00:24:57.974 "name": "NewBaseBdev", 00:24:57.974 "uuid": "fb6762f1-5a61-4fbe-b213-4f08ac12e780", 00:24:57.974 "is_configured": true, 00:24:57.974 "data_offset": 0, 00:24:57.974 "data_size": 65536 00:24:57.974 }, 00:24:57.974 { 00:24:57.974 "name": "BaseBdev2", 00:24:57.974 "uuid": "55e5a8d2-281a-4add-be1f-c26ad0c1e0af", 00:24:57.974 "is_configured": true, 00:24:57.974 "data_offset": 0, 00:24:57.974 "data_size": 65536 00:24:57.974 }, 00:24:57.974 { 00:24:57.974 "name": "BaseBdev3", 00:24:57.974 "uuid": "37a838bf-972a-4b4f-833c-298427e2021e", 00:24:57.974 "is_configured": true, 00:24:57.974 "data_offset": 0, 00:24:57.974 "data_size": 65536 00:24:57.974 }, 00:24:57.974 { 00:24:57.974 "name": "BaseBdev4", 00:24:57.974 "uuid": "3a6ee363-d928-4d76-914c-4256b3fe50c0", 00:24:57.974 "is_configured": true, 00:24:57.974 "data_offset": 0, 00:24:57.974 "data_size": 65536 00:24:57.974 } 00:24:57.974 ] 00:24:57.974 }' 00:24:57.974 02:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:57.974 02:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.912 02:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:24:58.912 02:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:58.912 02:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:58.912 02:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:58.912 02:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:58.912 02:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:58.912 02:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:58.912 02:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:59.171 [2024-07-11 02:30:49.480721] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:59.171 02:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:59.171 "name": "Existed_Raid", 00:24:59.171 "aliases": [ 00:24:59.171 "f8f1fd3e-f25b-4009-9f5f-7b93720fbe0f" 00:24:59.171 ], 00:24:59.171 "product_name": "Raid Volume", 00:24:59.171 "block_size": 512, 00:24:59.171 "num_blocks": 65536, 00:24:59.171 "uuid": "f8f1fd3e-f25b-4009-9f5f-7b93720fbe0f", 00:24:59.171 "assigned_rate_limits": { 00:24:59.171 "rw_ios_per_sec": 0, 00:24:59.171 "rw_mbytes_per_sec": 0, 00:24:59.171 "r_mbytes_per_sec": 0, 00:24:59.171 "w_mbytes_per_sec": 0 00:24:59.171 }, 00:24:59.171 "claimed": false, 00:24:59.171 "zoned": false, 00:24:59.171 "supported_io_types": { 00:24:59.171 "read": true, 00:24:59.171 "write": true, 00:24:59.171 "unmap": false, 00:24:59.171 "flush": false, 00:24:59.171 "reset": true, 
00:24:59.171 "nvme_admin": false, 00:24:59.171 "nvme_io": false, 00:24:59.171 "nvme_io_md": false, 00:24:59.171 "write_zeroes": true, 00:24:59.171 "zcopy": false, 00:24:59.171 "get_zone_info": false, 00:24:59.171 "zone_management": false, 00:24:59.171 "zone_append": false, 00:24:59.171 "compare": false, 00:24:59.171 "compare_and_write": false, 00:24:59.171 "abort": false, 00:24:59.171 "seek_hole": false, 00:24:59.171 "seek_data": false, 00:24:59.171 "copy": false, 00:24:59.171 "nvme_iov_md": false 00:24:59.171 }, 00:24:59.171 "memory_domains": [ 00:24:59.171 { 00:24:59.171 "dma_device_id": "system", 00:24:59.171 "dma_device_type": 1 00:24:59.171 }, 00:24:59.171 { 00:24:59.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:59.171 "dma_device_type": 2 00:24:59.171 }, 00:24:59.171 { 00:24:59.171 "dma_device_id": "system", 00:24:59.171 "dma_device_type": 1 00:24:59.171 }, 00:24:59.171 { 00:24:59.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:59.171 "dma_device_type": 2 00:24:59.171 }, 00:24:59.171 { 00:24:59.171 "dma_device_id": "system", 00:24:59.171 "dma_device_type": 1 00:24:59.171 }, 00:24:59.171 { 00:24:59.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:59.171 "dma_device_type": 2 00:24:59.171 }, 00:24:59.171 { 00:24:59.171 "dma_device_id": "system", 00:24:59.171 "dma_device_type": 1 00:24:59.171 }, 00:24:59.171 { 00:24:59.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:59.171 "dma_device_type": 2 00:24:59.171 } 00:24:59.171 ], 00:24:59.171 "driver_specific": { 00:24:59.171 "raid": { 00:24:59.171 "uuid": "f8f1fd3e-f25b-4009-9f5f-7b93720fbe0f", 00:24:59.171 "strip_size_kb": 0, 00:24:59.171 "state": "online", 00:24:59.171 "raid_level": "raid1", 00:24:59.171 "superblock": false, 00:24:59.171 "num_base_bdevs": 4, 00:24:59.171 "num_base_bdevs_discovered": 4, 00:24:59.171 "num_base_bdevs_operational": 4, 00:24:59.171 "base_bdevs_list": [ 00:24:59.171 { 00:24:59.171 "name": "NewBaseBdev", 00:24:59.171 "uuid": "fb6762f1-5a61-4fbe-b213-4f08ac12e780", 00:24:59.171 "is_configured": true, 00:24:59.171 "data_offset": 0, 00:24:59.171 "data_size": 65536 00:24:59.171 }, 00:24:59.171 { 00:24:59.171 "name": "BaseBdev2", 00:24:59.171 "uuid": "55e5a8d2-281a-4add-be1f-c26ad0c1e0af", 00:24:59.171 "is_configured": true, 00:24:59.171 "data_offset": 0, 00:24:59.171 "data_size": 65536 00:24:59.171 }, 00:24:59.171 { 00:24:59.171 "name": "BaseBdev3", 00:24:59.171 "uuid": "37a838bf-972a-4b4f-833c-298427e2021e", 00:24:59.171 "is_configured": true, 00:24:59.171 "data_offset": 0, 00:24:59.171 "data_size": 65536 00:24:59.171 }, 00:24:59.171 { 00:24:59.171 "name": "BaseBdev4", 00:24:59.171 "uuid": "3a6ee363-d928-4d76-914c-4256b3fe50c0", 00:24:59.171 "is_configured": true, 00:24:59.171 "data_offset": 0, 00:24:59.171 "data_size": 65536 00:24:59.171 } 00:24:59.171 ] 00:24:59.171 } 00:24:59.171 } 00:24:59.171 }' 00:24:59.171 02:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:59.171 02:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:24:59.171 BaseBdev2 00:24:59.171 BaseBdev3 00:24:59.171 BaseBdev4' 00:24:59.171 02:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:59.171 02:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:24:59.171 02:30:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:59.430 02:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:59.430 "name": "NewBaseBdev", 00:24:59.430 "aliases": [ 00:24:59.430 "fb6762f1-5a61-4fbe-b213-4f08ac12e780" 00:24:59.430 ], 00:24:59.430 "product_name": "Malloc disk", 00:24:59.430 "block_size": 512, 00:24:59.430 "num_blocks": 65536, 00:24:59.431 "uuid": "fb6762f1-5a61-4fbe-b213-4f08ac12e780", 00:24:59.431 "assigned_rate_limits": { 00:24:59.431 "rw_ios_per_sec": 0, 00:24:59.431 "rw_mbytes_per_sec": 0, 00:24:59.431 "r_mbytes_per_sec": 0, 00:24:59.431 "w_mbytes_per_sec": 0 00:24:59.431 }, 00:24:59.431 "claimed": true, 00:24:59.431 "claim_type": "exclusive_write", 00:24:59.431 "zoned": false, 00:24:59.431 "supported_io_types": { 00:24:59.431 "read": true, 00:24:59.431 "write": true, 00:24:59.431 "unmap": true, 00:24:59.431 "flush": true, 00:24:59.431 "reset": true, 00:24:59.431 "nvme_admin": false, 00:24:59.431 "nvme_io": false, 00:24:59.431 "nvme_io_md": false, 00:24:59.431 "write_zeroes": true, 00:24:59.431 "zcopy": true, 00:24:59.431 "get_zone_info": false, 00:24:59.431 "zone_management": false, 00:24:59.431 "zone_append": false, 00:24:59.431 "compare": false, 00:24:59.431 "compare_and_write": false, 00:24:59.431 "abort": true, 00:24:59.431 "seek_hole": false, 00:24:59.431 "seek_data": false, 00:24:59.431 "copy": true, 00:24:59.431 "nvme_iov_md": false 00:24:59.431 }, 00:24:59.431 "memory_domains": [ 00:24:59.431 { 00:24:59.431 "dma_device_id": "system", 00:24:59.431 "dma_device_type": 1 00:24:59.431 }, 00:24:59.431 { 00:24:59.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:59.431 "dma_device_type": 2 00:24:59.431 } 00:24:59.431 ], 00:24:59.431 "driver_specific": {} 00:24:59.431 }' 00:24:59.431 02:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:59.691 02:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:59.691 02:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:59.691 02:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:59.691 02:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:59.691 02:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:59.691 02:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:59.691 02:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:59.691 02:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:59.691 02:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:59.950 02:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:59.950 02:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:59.950 02:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:59.950 02:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:59.950 02:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:00.209 02:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 
00:25:00.209 "name": "BaseBdev2", 00:25:00.209 "aliases": [ 00:25:00.209 "55e5a8d2-281a-4add-be1f-c26ad0c1e0af" 00:25:00.209 ], 00:25:00.209 "product_name": "Malloc disk", 00:25:00.209 "block_size": 512, 00:25:00.209 "num_blocks": 65536, 00:25:00.209 "uuid": "55e5a8d2-281a-4add-be1f-c26ad0c1e0af", 00:25:00.209 "assigned_rate_limits": { 00:25:00.209 "rw_ios_per_sec": 0, 00:25:00.209 "rw_mbytes_per_sec": 0, 00:25:00.209 "r_mbytes_per_sec": 0, 00:25:00.209 "w_mbytes_per_sec": 0 00:25:00.209 }, 00:25:00.209 "claimed": true, 00:25:00.209 "claim_type": "exclusive_write", 00:25:00.209 "zoned": false, 00:25:00.209 "supported_io_types": { 00:25:00.209 "read": true, 00:25:00.209 "write": true, 00:25:00.209 "unmap": true, 00:25:00.209 "flush": true, 00:25:00.209 "reset": true, 00:25:00.209 "nvme_admin": false, 00:25:00.209 "nvme_io": false, 00:25:00.209 "nvme_io_md": false, 00:25:00.209 "write_zeroes": true, 00:25:00.209 "zcopy": true, 00:25:00.209 "get_zone_info": false, 00:25:00.209 "zone_management": false, 00:25:00.209 "zone_append": false, 00:25:00.209 "compare": false, 00:25:00.209 "compare_and_write": false, 00:25:00.209 "abort": true, 00:25:00.209 "seek_hole": false, 00:25:00.209 "seek_data": false, 00:25:00.209 "copy": true, 00:25:00.209 "nvme_iov_md": false 00:25:00.209 }, 00:25:00.209 "memory_domains": [ 00:25:00.209 { 00:25:00.209 "dma_device_id": "system", 00:25:00.209 "dma_device_type": 1 00:25:00.209 }, 00:25:00.209 { 00:25:00.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:00.209 "dma_device_type": 2 00:25:00.210 } 00:25:00.210 ], 00:25:00.210 "driver_specific": {} 00:25:00.210 }' 00:25:00.210 02:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:00.210 02:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:00.210 02:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:00.210 02:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:00.210 02:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:00.210 02:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:00.210 02:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:00.469 02:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:00.469 02:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:00.469 02:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:00.469 02:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:00.469 02:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:00.469 02:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:00.469 02:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:00.469 02:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:00.728 02:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:00.728 "name": "BaseBdev3", 00:25:00.728 "aliases": [ 00:25:00.728 "37a838bf-972a-4b4f-833c-298427e2021e" 00:25:00.728 ], 00:25:00.728 "product_name": "Malloc disk", 
00:25:00.728 "block_size": 512, 00:25:00.728 "num_blocks": 65536, 00:25:00.728 "uuid": "37a838bf-972a-4b4f-833c-298427e2021e", 00:25:00.728 "assigned_rate_limits": { 00:25:00.728 "rw_ios_per_sec": 0, 00:25:00.728 "rw_mbytes_per_sec": 0, 00:25:00.728 "r_mbytes_per_sec": 0, 00:25:00.728 "w_mbytes_per_sec": 0 00:25:00.728 }, 00:25:00.728 "claimed": true, 00:25:00.728 "claim_type": "exclusive_write", 00:25:00.728 "zoned": false, 00:25:00.728 "supported_io_types": { 00:25:00.728 "read": true, 00:25:00.728 "write": true, 00:25:00.728 "unmap": true, 00:25:00.728 "flush": true, 00:25:00.728 "reset": true, 00:25:00.728 "nvme_admin": false, 00:25:00.728 "nvme_io": false, 00:25:00.728 "nvme_io_md": false, 00:25:00.728 "write_zeroes": true, 00:25:00.728 "zcopy": true, 00:25:00.728 "get_zone_info": false, 00:25:00.728 "zone_management": false, 00:25:00.728 "zone_append": false, 00:25:00.728 "compare": false, 00:25:00.728 "compare_and_write": false, 00:25:00.728 "abort": true, 00:25:00.728 "seek_hole": false, 00:25:00.728 "seek_data": false, 00:25:00.728 "copy": true, 00:25:00.728 "nvme_iov_md": false 00:25:00.728 }, 00:25:00.728 "memory_domains": [ 00:25:00.728 { 00:25:00.728 "dma_device_id": "system", 00:25:00.728 "dma_device_type": 1 00:25:00.728 }, 00:25:00.728 { 00:25:00.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:00.728 "dma_device_type": 2 00:25:00.728 } 00:25:00.728 ], 00:25:00.728 "driver_specific": {} 00:25:00.728 }' 00:25:00.728 02:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:00.728 02:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:00.728 02:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:00.728 02:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:00.987 02:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:00.987 02:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:00.987 02:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:00.987 02:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:00.987 02:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:00.987 02:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:00.987 02:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:00.987 02:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:00.987 02:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:00.987 02:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:00.987 02:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:01.247 02:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:01.247 "name": "BaseBdev4", 00:25:01.247 "aliases": [ 00:25:01.247 "3a6ee363-d928-4d76-914c-4256b3fe50c0" 00:25:01.247 ], 00:25:01.247 "product_name": "Malloc disk", 00:25:01.247 "block_size": 512, 00:25:01.247 "num_blocks": 65536, 00:25:01.247 "uuid": "3a6ee363-d928-4d76-914c-4256b3fe50c0", 00:25:01.247 "assigned_rate_limits": { 00:25:01.247 
"rw_ios_per_sec": 0, 00:25:01.247 "rw_mbytes_per_sec": 0, 00:25:01.247 "r_mbytes_per_sec": 0, 00:25:01.247 "w_mbytes_per_sec": 0 00:25:01.247 }, 00:25:01.247 "claimed": true, 00:25:01.247 "claim_type": "exclusive_write", 00:25:01.247 "zoned": false, 00:25:01.247 "supported_io_types": { 00:25:01.247 "read": true, 00:25:01.247 "write": true, 00:25:01.247 "unmap": true, 00:25:01.247 "flush": true, 00:25:01.247 "reset": true, 00:25:01.247 "nvme_admin": false, 00:25:01.247 "nvme_io": false, 00:25:01.247 "nvme_io_md": false, 00:25:01.247 "write_zeroes": true, 00:25:01.247 "zcopy": true, 00:25:01.247 "get_zone_info": false, 00:25:01.247 "zone_management": false, 00:25:01.247 "zone_append": false, 00:25:01.247 "compare": false, 00:25:01.247 "compare_and_write": false, 00:25:01.247 "abort": true, 00:25:01.247 "seek_hole": false, 00:25:01.247 "seek_data": false, 00:25:01.247 "copy": true, 00:25:01.247 "nvme_iov_md": false 00:25:01.247 }, 00:25:01.247 "memory_domains": [ 00:25:01.247 { 00:25:01.247 "dma_device_id": "system", 00:25:01.247 "dma_device_type": 1 00:25:01.247 }, 00:25:01.247 { 00:25:01.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:01.247 "dma_device_type": 2 00:25:01.247 } 00:25:01.247 ], 00:25:01.247 "driver_specific": {} 00:25:01.247 }' 00:25:01.247 02:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:01.508 02:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:01.508 02:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:01.508 02:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:01.508 02:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:01.508 02:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:01.508 02:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:01.508 02:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:01.508 02:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:01.508 02:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:01.767 02:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:01.767 02:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:01.767 02:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:01.767 [2024-07-11 02:30:52.175565] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:01.767 [2024-07-11 02:30:52.175590] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:01.767 [2024-07-11 02:30:52.175642] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:01.767 [2024-07-11 02:30:52.175906] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:01.767 [2024-07-11 02:30:52.175927] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17942f0 name Existed_Raid, state offline 00:25:02.027 02:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1989028 00:25:02.027 02:30:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@948 -- # '[' -z 1989028 ']' 00:25:02.027 02:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 1989028 00:25:02.027 02:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:25:02.027 02:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:02.027 02:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1989028 00:25:02.027 02:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:02.027 02:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:02.027 02:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1989028' 00:25:02.027 killing process with pid 1989028 00:25:02.027 02:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 1989028 00:25:02.027 [2024-07-11 02:30:52.250441] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:02.027 02:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 1989028 00:25:02.027 [2024-07-11 02:30:52.287109] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:02.288 02:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:25:02.288 00:25:02.288 real 0m34.597s 00:25:02.288 user 1m3.844s 00:25:02.288 sys 0m6.240s 00:25:02.288 02:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:02.288 02:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.288 ************************************ 00:25:02.288 END TEST raid_state_function_test 00:25:02.288 ************************************ 00:25:02.288 02:30:52 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:02.288 02:30:52 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:25:02.288 02:30:52 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:02.288 02:30:52 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:02.288 02:30:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:02.288 ************************************ 00:25:02.288 START TEST raid_state_function_test_sb 00:25:02.288 ************************************ 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 true 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:02.289 02:30:52 
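# The kill sequence traced above (autotest_common.sh@948-@972) boils down to
# the helper sketched below: check the pid is non-empty and alive, make sure
# the process is not sudo (ps --no-headers -o comm=), kill it, then reap it.
# A hedged reconstruction from the trace, not the verbatim helper:
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                                     # @948
    kill -0 "$pid" || return 1                                    # @952: still running?
    [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1   # @954/@958
    kill "$pid"                                                   # @967
    wait "$pid" || true                                           # @972: reap before the next test
}
# run_test then re-enters the same test body with superblocks enabled,
# i.e. raid_state_function_test raid1 4 true — the "_sb" variant that follows.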
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1994087 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1994087' 00:25:02.289 Process raid pid: 1994087 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1994087 /var/tmp/spdk-raid.sock 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1994087 ']' 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:02.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
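# bdev_svc is started with a private RPC socket (-r) and instance id (-i 0) so
# this raid test cannot collide with other SPDK apps on the node. A minimal
# launch-and-wait sketch using the paths from this log; spdk_get_version is a
# standard SPDK RPC, used here purely as a liveness probe:
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
SOCK=/var/tmp/spdk-raid.sock
"$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
raid_pid=$!
until "$SPDK/scripts/rpc.py" -s "$SOCK" spdk_get_version >/dev/null 2>&1; do
    sleep 0.1
done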
00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:02.289 02:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:02.289 [2024-07-11 02:30:52.655542] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:25:02.289 [2024-07-11 02:30:52.655613] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.550 [2024-07-11 02:30:52.794238] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.550 [2024-07-11 02:30:52.846680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.550 [2024-07-11 02:30:52.905910] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:02.550 [2024-07-11 02:30:52.905940] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:03.118 02:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:03.118 02:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:25:03.118 02:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:03.377 [2024-07-11 02:30:53.683311] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:03.377 [2024-07-11 02:30:53.683354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:03.377 [2024-07-11 02:30:53.683365] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:03.377 [2024-07-11 02:30:53.683376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:03.377 [2024-07-11 02:30:53.683385] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:03.377 [2024-07-11 02:30:53.683396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:03.377 [2024-07-11 02:30:53.683405] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:03.377 [2024-07-11 02:30:53.683416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:03.377 02:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:03.377 02:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:03.377 02:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:03.377 02:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:03.377 02:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:03.377 02:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:03.377 02:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:03.377 02:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:25:03.377 02:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:03.377 02:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:03.377 02:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.377 02:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:03.635 02:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:03.635 "name": "Existed_Raid", 00:25:03.635 "uuid": "547e56d0-d08f-4ab7-982f-49a8d979f088", 00:25:03.635 "strip_size_kb": 0, 00:25:03.635 "state": "configuring", 00:25:03.635 "raid_level": "raid1", 00:25:03.635 "superblock": true, 00:25:03.635 "num_base_bdevs": 4, 00:25:03.635 "num_base_bdevs_discovered": 0, 00:25:03.635 "num_base_bdevs_operational": 4, 00:25:03.635 "base_bdevs_list": [ 00:25:03.635 { 00:25:03.635 "name": "BaseBdev1", 00:25:03.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.635 "is_configured": false, 00:25:03.635 "data_offset": 0, 00:25:03.635 "data_size": 0 00:25:03.635 }, 00:25:03.635 { 00:25:03.635 "name": "BaseBdev2", 00:25:03.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.635 "is_configured": false, 00:25:03.635 "data_offset": 0, 00:25:03.635 "data_size": 0 00:25:03.635 }, 00:25:03.635 { 00:25:03.635 "name": "BaseBdev3", 00:25:03.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.635 "is_configured": false, 00:25:03.635 "data_offset": 0, 00:25:03.635 "data_size": 0 00:25:03.635 }, 00:25:03.635 { 00:25:03.635 "name": "BaseBdev4", 00:25:03.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.635 "is_configured": false, 00:25:03.635 "data_offset": 0, 00:25:03.635 "data_size": 0 00:25:03.635 } 00:25:03.635 ] 00:25:03.635 }' 00:25:03.635 02:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:03.635 02:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:04.203 02:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:04.462 [2024-07-11 02:30:54.657792] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:04.462 [2024-07-11 02:30:54.657828] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc9a730 name Existed_Raid, state configuring 00:25:04.462 02:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:04.721 [2024-07-11 02:30:54.906448] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:04.721 [2024-07-11 02:30:54.906476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:04.721 [2024-07-11 02:30:54.906485] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:04.721 [2024-07-11 02:30:54.906497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:04.721 [2024-07-11 02:30:54.906505] bdev.c:8157:bdev_open_ext: 
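# verify_raid_bdev_state (bdev_raid.sh@116-@128) asserts on the JSON captured
# above. The essential checks can be replayed stand-alone; the field names are
# exactly those emitted by bdev_raid_get_bdevs, socket and path from this log:
RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
tmp=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
[[ $(jq -r .state <<< "$tmp") == configuring ]]
[[ $(jq -r .raid_level <<< "$tmp") == raid1 ]]
[[ $(jq -r .num_base_bdevs_operational <<< "$tmp") == 4 ]]
[[ $(jq -r .num_base_bdevs_discovered <<< "$tmp") == 0 ]]   # no base bdevs exist yet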
*NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:04.721 [2024-07-11 02:30:54.906516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:04.721 [2024-07-11 02:30:54.906524] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:04.721 [2024-07-11 02:30:54.906539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:04.721 02:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:04.721 [2024-07-11 02:30:55.092631] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:04.721 BaseBdev1 00:25:04.721 02:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:25:04.721 02:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:25:04.721 02:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:04.721 02:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:04.721 02:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:04.721 02:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:04.721 02:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:04.980 02:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:05.240 [ 00:25:05.240 { 00:25:05.240 "name": "BaseBdev1", 00:25:05.240 "aliases": [ 00:25:05.240 "fac34b4b-0964-43f2-b91f-d20e7c767c72" 00:25:05.240 ], 00:25:05.240 "product_name": "Malloc disk", 00:25:05.240 "block_size": 512, 00:25:05.240 "num_blocks": 65536, 00:25:05.240 "uuid": "fac34b4b-0964-43f2-b91f-d20e7c767c72", 00:25:05.240 "assigned_rate_limits": { 00:25:05.240 "rw_ios_per_sec": 0, 00:25:05.240 "rw_mbytes_per_sec": 0, 00:25:05.240 "r_mbytes_per_sec": 0, 00:25:05.240 "w_mbytes_per_sec": 0 00:25:05.240 }, 00:25:05.240 "claimed": true, 00:25:05.240 "claim_type": "exclusive_write", 00:25:05.240 "zoned": false, 00:25:05.240 "supported_io_types": { 00:25:05.240 "read": true, 00:25:05.240 "write": true, 00:25:05.240 "unmap": true, 00:25:05.240 "flush": true, 00:25:05.240 "reset": true, 00:25:05.240 "nvme_admin": false, 00:25:05.240 "nvme_io": false, 00:25:05.240 "nvme_io_md": false, 00:25:05.240 "write_zeroes": true, 00:25:05.240 "zcopy": true, 00:25:05.240 "get_zone_info": false, 00:25:05.240 "zone_management": false, 00:25:05.240 "zone_append": false, 00:25:05.240 "compare": false, 00:25:05.240 "compare_and_write": false, 00:25:05.240 "abort": true, 00:25:05.240 "seek_hole": false, 00:25:05.240 "seek_data": false, 00:25:05.240 "copy": true, 00:25:05.240 "nvme_iov_md": false 00:25:05.240 }, 00:25:05.240 "memory_domains": [ 00:25:05.240 { 00:25:05.240 "dma_device_id": "system", 00:25:05.240 "dma_device_type": 1 00:25:05.240 }, 00:25:05.240 { 00:25:05.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:05.240 "dma_device_type": 2 00:25:05.240 } 00:25:05.240 ], 00:25:05.240 
"driver_specific": {} 00:25:05.240 } 00:25:05.240 ] 00:25:05.240 02:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:05.240 02:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:05.240 02:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:05.240 02:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:05.240 02:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:05.240 02:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:05.240 02:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:05.240 02:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:05.240 02:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:05.240 02:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:05.240 02:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:05.240 02:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:05.240 02:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:05.499 02:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:05.499 "name": "Existed_Raid", 00:25:05.499 "uuid": "e10157a6-957a-440d-868b-6d568715715c", 00:25:05.499 "strip_size_kb": 0, 00:25:05.499 "state": "configuring", 00:25:05.499 "raid_level": "raid1", 00:25:05.499 "superblock": true, 00:25:05.499 "num_base_bdevs": 4, 00:25:05.499 "num_base_bdevs_discovered": 1, 00:25:05.499 "num_base_bdevs_operational": 4, 00:25:05.499 "base_bdevs_list": [ 00:25:05.499 { 00:25:05.499 "name": "BaseBdev1", 00:25:05.499 "uuid": "fac34b4b-0964-43f2-b91f-d20e7c767c72", 00:25:05.499 "is_configured": true, 00:25:05.499 "data_offset": 2048, 00:25:05.499 "data_size": 63488 00:25:05.499 }, 00:25:05.499 { 00:25:05.499 "name": "BaseBdev2", 00:25:05.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.499 "is_configured": false, 00:25:05.499 "data_offset": 0, 00:25:05.499 "data_size": 0 00:25:05.499 }, 00:25:05.499 { 00:25:05.499 "name": "BaseBdev3", 00:25:05.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.499 "is_configured": false, 00:25:05.499 "data_offset": 0, 00:25:05.499 "data_size": 0 00:25:05.499 }, 00:25:05.499 { 00:25:05.499 "name": "BaseBdev4", 00:25:05.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.499 "is_configured": false, 00:25:05.499 "data_offset": 0, 00:25:05.499 "data_size": 0 00:25:05.499 } 00:25:05.499 ] 00:25:05.499 }' 00:25:05.499 02:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:05.499 02:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:06.066 02:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:06.325 
[2024-07-11 02:30:56.608645] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:06.325 [2024-07-11 02:30:56.608686] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc9a060 name Existed_Raid, state configuring 00:25:06.325 02:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:06.584 [2024-07-11 02:30:56.793189] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:06.584 [2024-07-11 02:30:56.794591] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:06.584 [2024-07-11 02:30:56.794624] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:06.584 [2024-07-11 02:30:56.794634] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:06.584 [2024-07-11 02:30:56.794645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:06.584 [2024-07-11 02:30:56.794655] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:06.584 [2024-07-11 02:30:56.794666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:06.584 02:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:25:06.584 02:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:06.584 02:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:06.584 02:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:06.584 02:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:06.584 02:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:06.584 02:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:06.584 02:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:06.584 02:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:06.584 02:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:06.584 02:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:06.584 02:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:06.584 02:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.584 02:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:06.843 02:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:06.843 "name": "Existed_Raid", 00:25:06.843 "uuid": "f86bee4a-fbb0-4c4e-bca2-46856f674a01", 00:25:06.843 "strip_size_kb": 0, 00:25:06.843 "state": "configuring", 00:25:06.843 "raid_level": "raid1", 00:25:06.843 "superblock": true, 00:25:06.843 
"num_base_bdevs": 4, 00:25:06.843 "num_base_bdevs_discovered": 1, 00:25:06.843 "num_base_bdevs_operational": 4, 00:25:06.843 "base_bdevs_list": [ 00:25:06.843 { 00:25:06.843 "name": "BaseBdev1", 00:25:06.843 "uuid": "fac34b4b-0964-43f2-b91f-d20e7c767c72", 00:25:06.843 "is_configured": true, 00:25:06.843 "data_offset": 2048, 00:25:06.843 "data_size": 63488 00:25:06.843 }, 00:25:06.843 { 00:25:06.843 "name": "BaseBdev2", 00:25:06.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.843 "is_configured": false, 00:25:06.843 "data_offset": 0, 00:25:06.843 "data_size": 0 00:25:06.843 }, 00:25:06.843 { 00:25:06.843 "name": "BaseBdev3", 00:25:06.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.843 "is_configured": false, 00:25:06.843 "data_offset": 0, 00:25:06.843 "data_size": 0 00:25:06.843 }, 00:25:06.843 { 00:25:06.843 "name": "BaseBdev4", 00:25:06.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.843 "is_configured": false, 00:25:06.843 "data_offset": 0, 00:25:06.843 "data_size": 0 00:25:06.843 } 00:25:06.843 ] 00:25:06.843 }' 00:25:06.843 02:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:06.843 02:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.410 02:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:07.669 [2024-07-11 02:30:57.839232] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:07.669 BaseBdev2 00:25:07.669 02:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:25:07.669 02:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:25:07.669 02:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:07.669 02:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:07.669 02:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:07.669 02:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:07.669 02:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:07.669 02:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:07.928 [ 00:25:07.928 { 00:25:07.928 "name": "BaseBdev2", 00:25:07.928 "aliases": [ 00:25:07.928 "b7c83b33-909a-44af-88ba-afe0da1ddcb3" 00:25:07.928 ], 00:25:07.928 "product_name": "Malloc disk", 00:25:07.928 "block_size": 512, 00:25:07.928 "num_blocks": 65536, 00:25:07.928 "uuid": "b7c83b33-909a-44af-88ba-afe0da1ddcb3", 00:25:07.928 "assigned_rate_limits": { 00:25:07.928 "rw_ios_per_sec": 0, 00:25:07.928 "rw_mbytes_per_sec": 0, 00:25:07.928 "r_mbytes_per_sec": 0, 00:25:07.928 "w_mbytes_per_sec": 0 00:25:07.928 }, 00:25:07.928 "claimed": true, 00:25:07.928 "claim_type": "exclusive_write", 00:25:07.928 "zoned": false, 00:25:07.928 "supported_io_types": { 00:25:07.928 "read": true, 00:25:07.928 "write": true, 00:25:07.928 "unmap": true, 00:25:07.928 "flush": true, 
00:25:07.928 "reset": true, 00:25:07.928 "nvme_admin": false, 00:25:07.928 "nvme_io": false, 00:25:07.928 "nvme_io_md": false, 00:25:07.928 "write_zeroes": true, 00:25:07.928 "zcopy": true, 00:25:07.928 "get_zone_info": false, 00:25:07.928 "zone_management": false, 00:25:07.928 "zone_append": false, 00:25:07.928 "compare": false, 00:25:07.928 "compare_and_write": false, 00:25:07.928 "abort": true, 00:25:07.928 "seek_hole": false, 00:25:07.928 "seek_data": false, 00:25:07.928 "copy": true, 00:25:07.928 "nvme_iov_md": false 00:25:07.928 }, 00:25:07.928 "memory_domains": [ 00:25:07.928 { 00:25:07.928 "dma_device_id": "system", 00:25:07.928 "dma_device_type": 1 00:25:07.928 }, 00:25:07.928 { 00:25:07.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.928 "dma_device_type": 2 00:25:07.928 } 00:25:07.928 ], 00:25:07.928 "driver_specific": {} 00:25:07.928 } 00:25:07.928 ] 00:25:07.928 02:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:07.928 02:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:07.928 02:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:07.928 02:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:07.928 02:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:07.928 02:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:07.928 02:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:07.928 02:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:07.928 02:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:07.928 02:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:07.928 02:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:07.928 02:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:07.928 02:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:07.928 02:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.928 02:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:08.187 02:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:08.187 "name": "Existed_Raid", 00:25:08.187 "uuid": "f86bee4a-fbb0-4c4e-bca2-46856f674a01", 00:25:08.187 "strip_size_kb": 0, 00:25:08.187 "state": "configuring", 00:25:08.187 "raid_level": "raid1", 00:25:08.187 "superblock": true, 00:25:08.187 "num_base_bdevs": 4, 00:25:08.187 "num_base_bdevs_discovered": 2, 00:25:08.187 "num_base_bdevs_operational": 4, 00:25:08.187 "base_bdevs_list": [ 00:25:08.187 { 00:25:08.187 "name": "BaseBdev1", 00:25:08.187 "uuid": "fac34b4b-0964-43f2-b91f-d20e7c767c72", 00:25:08.187 "is_configured": true, 00:25:08.187 "data_offset": 2048, 00:25:08.187 "data_size": 63488 00:25:08.187 }, 00:25:08.187 { 00:25:08.187 "name": "BaseBdev2", 00:25:08.187 "uuid": 
"b7c83b33-909a-44af-88ba-afe0da1ddcb3", 00:25:08.187 "is_configured": true, 00:25:08.187 "data_offset": 2048, 00:25:08.187 "data_size": 63488 00:25:08.187 }, 00:25:08.187 { 00:25:08.187 "name": "BaseBdev3", 00:25:08.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.187 "is_configured": false, 00:25:08.187 "data_offset": 0, 00:25:08.187 "data_size": 0 00:25:08.187 }, 00:25:08.187 { 00:25:08.187 "name": "BaseBdev4", 00:25:08.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.187 "is_configured": false, 00:25:08.187 "data_offset": 0, 00:25:08.187 "data_size": 0 00:25:08.187 } 00:25:08.187 ] 00:25:08.187 }' 00:25:08.187 02:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:08.187 02:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:08.755 02:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:09.013 [2024-07-11 02:30:59.290389] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:09.013 BaseBdev3 00:25:09.013 02:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:25:09.013 02:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:25:09.013 02:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:09.013 02:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:09.013 02:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:09.013 02:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:09.013 02:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:09.272 02:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:09.531 [ 00:25:09.531 { 00:25:09.531 "name": "BaseBdev3", 00:25:09.531 "aliases": [ 00:25:09.531 "f45d5810-25aa-49d8-8e2c-0f78040764c0" 00:25:09.531 ], 00:25:09.531 "product_name": "Malloc disk", 00:25:09.531 "block_size": 512, 00:25:09.531 "num_blocks": 65536, 00:25:09.531 "uuid": "f45d5810-25aa-49d8-8e2c-0f78040764c0", 00:25:09.531 "assigned_rate_limits": { 00:25:09.531 "rw_ios_per_sec": 0, 00:25:09.531 "rw_mbytes_per_sec": 0, 00:25:09.531 "r_mbytes_per_sec": 0, 00:25:09.531 "w_mbytes_per_sec": 0 00:25:09.531 }, 00:25:09.531 "claimed": true, 00:25:09.531 "claim_type": "exclusive_write", 00:25:09.531 "zoned": false, 00:25:09.531 "supported_io_types": { 00:25:09.531 "read": true, 00:25:09.531 "write": true, 00:25:09.531 "unmap": true, 00:25:09.531 "flush": true, 00:25:09.531 "reset": true, 00:25:09.531 "nvme_admin": false, 00:25:09.531 "nvme_io": false, 00:25:09.531 "nvme_io_md": false, 00:25:09.531 "write_zeroes": true, 00:25:09.531 "zcopy": true, 00:25:09.531 "get_zone_info": false, 00:25:09.531 "zone_management": false, 00:25:09.531 "zone_append": false, 00:25:09.531 "compare": false, 00:25:09.531 "compare_and_write": false, 00:25:09.531 "abort": true, 00:25:09.531 "seek_hole": false, 00:25:09.531 
"seek_data": false, 00:25:09.531 "copy": true, 00:25:09.531 "nvme_iov_md": false 00:25:09.531 }, 00:25:09.531 "memory_domains": [ 00:25:09.531 { 00:25:09.531 "dma_device_id": "system", 00:25:09.531 "dma_device_type": 1 00:25:09.531 }, 00:25:09.531 { 00:25:09.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:09.531 "dma_device_type": 2 00:25:09.531 } 00:25:09.531 ], 00:25:09.531 "driver_specific": {} 00:25:09.532 } 00:25:09.532 ] 00:25:09.532 02:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:09.532 02:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:09.532 02:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:09.532 02:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:09.532 02:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:09.532 02:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:09.532 02:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:09.532 02:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:09.532 02:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:09.532 02:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:09.532 02:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:09.532 02:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:09.532 02:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:09.532 02:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.532 02:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:09.791 02:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:09.791 "name": "Existed_Raid", 00:25:09.791 "uuid": "f86bee4a-fbb0-4c4e-bca2-46856f674a01", 00:25:09.791 "strip_size_kb": 0, 00:25:09.791 "state": "configuring", 00:25:09.791 "raid_level": "raid1", 00:25:09.791 "superblock": true, 00:25:09.791 "num_base_bdevs": 4, 00:25:09.791 "num_base_bdevs_discovered": 3, 00:25:09.791 "num_base_bdevs_operational": 4, 00:25:09.791 "base_bdevs_list": [ 00:25:09.791 { 00:25:09.791 "name": "BaseBdev1", 00:25:09.791 "uuid": "fac34b4b-0964-43f2-b91f-d20e7c767c72", 00:25:09.791 "is_configured": true, 00:25:09.791 "data_offset": 2048, 00:25:09.791 "data_size": 63488 00:25:09.791 }, 00:25:09.791 { 00:25:09.791 "name": "BaseBdev2", 00:25:09.791 "uuid": "b7c83b33-909a-44af-88ba-afe0da1ddcb3", 00:25:09.791 "is_configured": true, 00:25:09.791 "data_offset": 2048, 00:25:09.791 "data_size": 63488 00:25:09.791 }, 00:25:09.791 { 00:25:09.791 "name": "BaseBdev3", 00:25:09.791 "uuid": "f45d5810-25aa-49d8-8e2c-0f78040764c0", 00:25:09.791 "is_configured": true, 00:25:09.791 "data_offset": 2048, 00:25:09.791 "data_size": 63488 00:25:09.791 }, 00:25:09.791 { 00:25:09.791 "name": "BaseBdev4", 00:25:09.791 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:25:09.791 "is_configured": false, 00:25:09.791 "data_offset": 0, 00:25:09.791 "data_size": 0 00:25:09.791 } 00:25:09.791 ] 00:25:09.791 }' 00:25:09.791 02:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:09.791 02:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:10.359 02:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:10.618 [2024-07-11 02:31:00.897999] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:10.618 [2024-07-11 02:31:00.898167] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xe4cf90 00:25:10.618 [2024-07-11 02:31:00.898181] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:10.618 [2024-07-11 02:31:00.898366] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc9f2c0 00:25:10.618 [2024-07-11 02:31:00.898493] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe4cf90 00:25:10.618 [2024-07-11 02:31:00.898503] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xe4cf90 00:25:10.618 [2024-07-11 02:31:00.898598] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:10.618 BaseBdev4 00:25:10.618 02:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:25:10.618 02:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:25:10.618 02:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:10.618 02:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:10.618 02:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:10.618 02:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:10.618 02:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:10.876 02:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:11.135 [ 00:25:11.135 { 00:25:11.135 "name": "BaseBdev4", 00:25:11.135 "aliases": [ 00:25:11.135 "c2f6d915-07f6-4ce7-b5fc-4ee5b261e4ce" 00:25:11.135 ], 00:25:11.135 "product_name": "Malloc disk", 00:25:11.135 "block_size": 512, 00:25:11.135 "num_blocks": 65536, 00:25:11.135 "uuid": "c2f6d915-07f6-4ce7-b5fc-4ee5b261e4ce", 00:25:11.135 "assigned_rate_limits": { 00:25:11.135 "rw_ios_per_sec": 0, 00:25:11.135 "rw_mbytes_per_sec": 0, 00:25:11.135 "r_mbytes_per_sec": 0, 00:25:11.135 "w_mbytes_per_sec": 0 00:25:11.135 }, 00:25:11.135 "claimed": true, 00:25:11.135 "claim_type": "exclusive_write", 00:25:11.135 "zoned": false, 00:25:11.135 "supported_io_types": { 00:25:11.135 "read": true, 00:25:11.135 "write": true, 00:25:11.135 "unmap": true, 00:25:11.135 "flush": true, 00:25:11.135 "reset": true, 00:25:11.135 "nvme_admin": false, 00:25:11.135 "nvme_io": false, 00:25:11.135 "nvme_io_md": false, 00:25:11.135 
"write_zeroes": true, 00:25:11.135 "zcopy": true, 00:25:11.135 "get_zone_info": false, 00:25:11.135 "zone_management": false, 00:25:11.135 "zone_append": false, 00:25:11.135 "compare": false, 00:25:11.135 "compare_and_write": false, 00:25:11.135 "abort": true, 00:25:11.135 "seek_hole": false, 00:25:11.135 "seek_data": false, 00:25:11.135 "copy": true, 00:25:11.135 "nvme_iov_md": false 00:25:11.135 }, 00:25:11.135 "memory_domains": [ 00:25:11.135 { 00:25:11.135 "dma_device_id": "system", 00:25:11.135 "dma_device_type": 1 00:25:11.135 }, 00:25:11.135 { 00:25:11.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.135 "dma_device_type": 2 00:25:11.135 } 00:25:11.135 ], 00:25:11.135 "driver_specific": {} 00:25:11.135 } 00:25:11.135 ] 00:25:11.135 02:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:11.135 02:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:11.135 02:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:11.135 02:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:11.135 02:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:11.135 02:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:11.135 02:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:11.135 02:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:11.135 02:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:11.135 02:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:11.135 02:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:11.135 02:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:11.135 02:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:11.135 02:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:11.135 02:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:11.135 02:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:11.135 "name": "Existed_Raid", 00:25:11.135 "uuid": "f86bee4a-fbb0-4c4e-bca2-46856f674a01", 00:25:11.135 "strip_size_kb": 0, 00:25:11.135 "state": "online", 00:25:11.136 "raid_level": "raid1", 00:25:11.136 "superblock": true, 00:25:11.136 "num_base_bdevs": 4, 00:25:11.136 "num_base_bdevs_discovered": 4, 00:25:11.136 "num_base_bdevs_operational": 4, 00:25:11.136 "base_bdevs_list": [ 00:25:11.136 { 00:25:11.136 "name": "BaseBdev1", 00:25:11.136 "uuid": "fac34b4b-0964-43f2-b91f-d20e7c767c72", 00:25:11.136 "is_configured": true, 00:25:11.136 "data_offset": 2048, 00:25:11.136 "data_size": 63488 00:25:11.136 }, 00:25:11.136 { 00:25:11.136 "name": "BaseBdev2", 00:25:11.136 "uuid": "b7c83b33-909a-44af-88ba-afe0da1ddcb3", 00:25:11.136 "is_configured": true, 00:25:11.136 "data_offset": 2048, 00:25:11.136 "data_size": 63488 00:25:11.136 }, 00:25:11.136 { 
00:25:11.136 "name": "BaseBdev3", 00:25:11.136 "uuid": "f45d5810-25aa-49d8-8e2c-0f78040764c0", 00:25:11.136 "is_configured": true, 00:25:11.136 "data_offset": 2048, 00:25:11.136 "data_size": 63488 00:25:11.136 }, 00:25:11.136 { 00:25:11.136 "name": "BaseBdev4", 00:25:11.136 "uuid": "c2f6d915-07f6-4ce7-b5fc-4ee5b261e4ce", 00:25:11.136 "is_configured": true, 00:25:11.136 "data_offset": 2048, 00:25:11.136 "data_size": 63488 00:25:11.136 } 00:25:11.136 ] 00:25:11.136 }' 00:25:11.136 02:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:11.136 02:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.702 02:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:25:11.702 02:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:11.702 02:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:11.702 02:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:11.702 02:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:11.702 02:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:25:11.961 02:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:11.961 02:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:11.961 [2024-07-11 02:31:02.350195] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:11.961 02:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:11.961 "name": "Existed_Raid", 00:25:11.961 "aliases": [ 00:25:11.961 "f86bee4a-fbb0-4c4e-bca2-46856f674a01" 00:25:11.961 ], 00:25:11.961 "product_name": "Raid Volume", 00:25:11.961 "block_size": 512, 00:25:11.961 "num_blocks": 63488, 00:25:11.961 "uuid": "f86bee4a-fbb0-4c4e-bca2-46856f674a01", 00:25:11.961 "assigned_rate_limits": { 00:25:11.961 "rw_ios_per_sec": 0, 00:25:11.961 "rw_mbytes_per_sec": 0, 00:25:11.961 "r_mbytes_per_sec": 0, 00:25:11.961 "w_mbytes_per_sec": 0 00:25:11.961 }, 00:25:11.961 "claimed": false, 00:25:11.961 "zoned": false, 00:25:11.961 "supported_io_types": { 00:25:11.961 "read": true, 00:25:11.961 "write": true, 00:25:11.961 "unmap": false, 00:25:11.961 "flush": false, 00:25:11.961 "reset": true, 00:25:11.961 "nvme_admin": false, 00:25:11.961 "nvme_io": false, 00:25:11.961 "nvme_io_md": false, 00:25:11.961 "write_zeroes": true, 00:25:11.961 "zcopy": false, 00:25:11.961 "get_zone_info": false, 00:25:11.961 "zone_management": false, 00:25:11.961 "zone_append": false, 00:25:11.961 "compare": false, 00:25:11.961 "compare_and_write": false, 00:25:11.961 "abort": false, 00:25:11.961 "seek_hole": false, 00:25:11.961 "seek_data": false, 00:25:11.961 "copy": false, 00:25:11.961 "nvme_iov_md": false 00:25:11.961 }, 00:25:11.961 "memory_domains": [ 00:25:11.961 { 00:25:11.961 "dma_device_id": "system", 00:25:11.961 "dma_device_type": 1 00:25:11.961 }, 00:25:11.961 { 00:25:11.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.961 "dma_device_type": 2 00:25:11.961 }, 00:25:11.961 { 00:25:11.961 "dma_device_id": "system", 00:25:11.961 "dma_device_type": 1 00:25:11.961 }, 00:25:11.961 { 
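# Worth noting in the Raid Volume dump above: the raid1 bdev advertises
# unmap/flush/zcopy/abort/copy = false even though every malloc base bdev
# supports them, and its num_blocks is 63488 = 65536 - 2048, the base bdev
# size minus the data_offset reserved for the on-disk superblock. Both facts
# can be pulled straight from the RPC output (socket and path from this log):
RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_get_bdevs -b Existed_Raid \
    | jq '.[] | [.num_blocks, .supported_io_types.unmap, .supported_io_types.flush]'
# The per-member loop that follows recovers the base bdev names with:
$RPC bdev_get_bdevs -b Existed_Raid \
    | jq -r '.[] | .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'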
00:25:11.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.961 "dma_device_type": 2 00:25:11.961 }, 00:25:11.961 { 00:25:11.961 "dma_device_id": "system", 00:25:11.961 "dma_device_type": 1 00:25:11.961 }, 00:25:11.961 { 00:25:11.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.961 "dma_device_type": 2 00:25:11.961 }, 00:25:11.961 { 00:25:11.961 "dma_device_id": "system", 00:25:11.961 "dma_device_type": 1 00:25:11.961 }, 00:25:11.961 { 00:25:11.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.961 "dma_device_type": 2 00:25:11.961 } 00:25:11.961 ], 00:25:11.961 "driver_specific": { 00:25:11.961 "raid": { 00:25:11.961 "uuid": "f86bee4a-fbb0-4c4e-bca2-46856f674a01", 00:25:11.961 "strip_size_kb": 0, 00:25:11.961 "state": "online", 00:25:11.961 "raid_level": "raid1", 00:25:11.961 "superblock": true, 00:25:11.961 "num_base_bdevs": 4, 00:25:11.961 "num_base_bdevs_discovered": 4, 00:25:11.961 "num_base_bdevs_operational": 4, 00:25:11.961 "base_bdevs_list": [ 00:25:11.961 { 00:25:11.961 "name": "BaseBdev1", 00:25:11.961 "uuid": "fac34b4b-0964-43f2-b91f-d20e7c767c72", 00:25:11.961 "is_configured": true, 00:25:11.961 "data_offset": 2048, 00:25:11.961 "data_size": 63488 00:25:11.961 }, 00:25:11.961 { 00:25:11.961 "name": "BaseBdev2", 00:25:11.961 "uuid": "b7c83b33-909a-44af-88ba-afe0da1ddcb3", 00:25:11.961 "is_configured": true, 00:25:11.961 "data_offset": 2048, 00:25:11.961 "data_size": 63488 00:25:11.961 }, 00:25:11.961 { 00:25:11.961 "name": "BaseBdev3", 00:25:11.961 "uuid": "f45d5810-25aa-49d8-8e2c-0f78040764c0", 00:25:11.961 "is_configured": true, 00:25:11.961 "data_offset": 2048, 00:25:11.961 "data_size": 63488 00:25:11.961 }, 00:25:11.961 { 00:25:11.961 "name": "BaseBdev4", 00:25:11.961 "uuid": "c2f6d915-07f6-4ce7-b5fc-4ee5b261e4ce", 00:25:11.961 "is_configured": true, 00:25:11.961 "data_offset": 2048, 00:25:11.961 "data_size": 63488 00:25:11.961 } 00:25:11.961 ] 00:25:11.961 } 00:25:11.961 } 00:25:11.961 }' 00:25:11.961 02:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:12.220 02:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:25:12.220 BaseBdev2 00:25:12.220 BaseBdev3 00:25:12.220 BaseBdev4' 00:25:12.220 02:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:12.220 02:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:12.220 02:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:12.479 02:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:12.479 "name": "BaseBdev1", 00:25:12.479 "aliases": [ 00:25:12.479 "fac34b4b-0964-43f2-b91f-d20e7c767c72" 00:25:12.479 ], 00:25:12.479 "product_name": "Malloc disk", 00:25:12.479 "block_size": 512, 00:25:12.479 "num_blocks": 65536, 00:25:12.479 "uuid": "fac34b4b-0964-43f2-b91f-d20e7c767c72", 00:25:12.479 "assigned_rate_limits": { 00:25:12.479 "rw_ios_per_sec": 0, 00:25:12.479 "rw_mbytes_per_sec": 0, 00:25:12.479 "r_mbytes_per_sec": 0, 00:25:12.479 "w_mbytes_per_sec": 0 00:25:12.479 }, 00:25:12.479 "claimed": true, 00:25:12.479 "claim_type": "exclusive_write", 00:25:12.479 "zoned": false, 00:25:12.479 "supported_io_types": { 00:25:12.479 "read": true, 00:25:12.479 "write": true, 
00:25:12.479 "unmap": true, 00:25:12.479 "flush": true, 00:25:12.479 "reset": true, 00:25:12.479 "nvme_admin": false, 00:25:12.479 "nvme_io": false, 00:25:12.479 "nvme_io_md": false, 00:25:12.479 "write_zeroes": true, 00:25:12.479 "zcopy": true, 00:25:12.479 "get_zone_info": false, 00:25:12.479 "zone_management": false, 00:25:12.479 "zone_append": false, 00:25:12.479 "compare": false, 00:25:12.479 "compare_and_write": false, 00:25:12.479 "abort": true, 00:25:12.479 "seek_hole": false, 00:25:12.479 "seek_data": false, 00:25:12.479 "copy": true, 00:25:12.479 "nvme_iov_md": false 00:25:12.479 }, 00:25:12.479 "memory_domains": [ 00:25:12.479 { 00:25:12.479 "dma_device_id": "system", 00:25:12.479 "dma_device_type": 1 00:25:12.479 }, 00:25:12.479 { 00:25:12.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.479 "dma_device_type": 2 00:25:12.479 } 00:25:12.479 ], 00:25:12.479 "driver_specific": {} 00:25:12.479 }' 00:25:12.479 02:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:12.479 02:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:12.479 02:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:12.479 02:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:12.479 02:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:12.479 02:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:12.479 02:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:12.479 02:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:12.738 02:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:12.738 02:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:12.738 02:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:12.738 02:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:12.738 02:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:12.738 02:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:12.738 02:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:12.998 02:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:12.998 "name": "BaseBdev2", 00:25:12.998 "aliases": [ 00:25:12.998 "b7c83b33-909a-44af-88ba-afe0da1ddcb3" 00:25:12.998 ], 00:25:12.998 "product_name": "Malloc disk", 00:25:12.998 "block_size": 512, 00:25:12.998 "num_blocks": 65536, 00:25:12.998 "uuid": "b7c83b33-909a-44af-88ba-afe0da1ddcb3", 00:25:12.998 "assigned_rate_limits": { 00:25:12.998 "rw_ios_per_sec": 0, 00:25:12.998 "rw_mbytes_per_sec": 0, 00:25:12.998 "r_mbytes_per_sec": 0, 00:25:12.998 "w_mbytes_per_sec": 0 00:25:12.998 }, 00:25:12.998 "claimed": true, 00:25:12.998 "claim_type": "exclusive_write", 00:25:12.998 "zoned": false, 00:25:12.998 "supported_io_types": { 00:25:12.998 "read": true, 00:25:12.998 "write": true, 00:25:12.998 "unmap": true, 00:25:12.998 "flush": true, 00:25:12.998 "reset": true, 00:25:12.998 "nvme_admin": false, 00:25:12.998 
"nvme_io": false, 00:25:12.998 "nvme_io_md": false, 00:25:12.998 "write_zeroes": true, 00:25:12.998 "zcopy": true, 00:25:12.998 "get_zone_info": false, 00:25:12.998 "zone_management": false, 00:25:12.998 "zone_append": false, 00:25:12.998 "compare": false, 00:25:12.998 "compare_and_write": false, 00:25:12.998 "abort": true, 00:25:12.998 "seek_hole": false, 00:25:12.998 "seek_data": false, 00:25:12.998 "copy": true, 00:25:12.998 "nvme_iov_md": false 00:25:12.998 }, 00:25:12.998 "memory_domains": [ 00:25:12.998 { 00:25:12.998 "dma_device_id": "system", 00:25:12.998 "dma_device_type": 1 00:25:12.998 }, 00:25:12.998 { 00:25:12.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.998 "dma_device_type": 2 00:25:12.998 } 00:25:12.998 ], 00:25:12.998 "driver_specific": {} 00:25:12.998 }' 00:25:12.998 02:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:12.998 02:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:12.998 02:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:12.998 02:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:12.998 02:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:13.258 02:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:13.258 02:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:13.258 02:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:13.258 02:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:13.258 02:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:13.258 02:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:13.258 02:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:13.258 02:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:13.258 02:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:13.258 02:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:13.572 02:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:13.572 "name": "BaseBdev3", 00:25:13.572 "aliases": [ 00:25:13.572 "f45d5810-25aa-49d8-8e2c-0f78040764c0" 00:25:13.572 ], 00:25:13.572 "product_name": "Malloc disk", 00:25:13.572 "block_size": 512, 00:25:13.572 "num_blocks": 65536, 00:25:13.572 "uuid": "f45d5810-25aa-49d8-8e2c-0f78040764c0", 00:25:13.572 "assigned_rate_limits": { 00:25:13.572 "rw_ios_per_sec": 0, 00:25:13.572 "rw_mbytes_per_sec": 0, 00:25:13.572 "r_mbytes_per_sec": 0, 00:25:13.572 "w_mbytes_per_sec": 0 00:25:13.572 }, 00:25:13.572 "claimed": true, 00:25:13.572 "claim_type": "exclusive_write", 00:25:13.572 "zoned": false, 00:25:13.572 "supported_io_types": { 00:25:13.572 "read": true, 00:25:13.572 "write": true, 00:25:13.572 "unmap": true, 00:25:13.572 "flush": true, 00:25:13.572 "reset": true, 00:25:13.572 "nvme_admin": false, 00:25:13.572 "nvme_io": false, 00:25:13.572 "nvme_io_md": false, 00:25:13.572 "write_zeroes": true, 00:25:13.572 "zcopy": true, 00:25:13.572 
"get_zone_info": false, 00:25:13.572 "zone_management": false, 00:25:13.572 "zone_append": false, 00:25:13.572 "compare": false, 00:25:13.572 "compare_and_write": false, 00:25:13.572 "abort": true, 00:25:13.572 "seek_hole": false, 00:25:13.572 "seek_data": false, 00:25:13.572 "copy": true, 00:25:13.572 "nvme_iov_md": false 00:25:13.572 }, 00:25:13.572 "memory_domains": [ 00:25:13.572 { 00:25:13.572 "dma_device_id": "system", 00:25:13.572 "dma_device_type": 1 00:25:13.572 }, 00:25:13.572 { 00:25:13.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.572 "dma_device_type": 2 00:25:13.572 } 00:25:13.572 ], 00:25:13.572 "driver_specific": {} 00:25:13.572 }' 00:25:13.572 02:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:13.572 02:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:13.572 02:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:13.572 02:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:13.842 02:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:13.842 02:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:13.842 02:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:13.842 02:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:13.842 02:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:13.842 02:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:13.842 02:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:13.842 02:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:13.842 02:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:13.842 02:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:13.842 02:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:14.100 02:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:14.100 "name": "BaseBdev4", 00:25:14.100 "aliases": [ 00:25:14.100 "c2f6d915-07f6-4ce7-b5fc-4ee5b261e4ce" 00:25:14.100 ], 00:25:14.100 "product_name": "Malloc disk", 00:25:14.100 "block_size": 512, 00:25:14.100 "num_blocks": 65536, 00:25:14.100 "uuid": "c2f6d915-07f6-4ce7-b5fc-4ee5b261e4ce", 00:25:14.100 "assigned_rate_limits": { 00:25:14.100 "rw_ios_per_sec": 0, 00:25:14.100 "rw_mbytes_per_sec": 0, 00:25:14.100 "r_mbytes_per_sec": 0, 00:25:14.100 "w_mbytes_per_sec": 0 00:25:14.100 }, 00:25:14.100 "claimed": true, 00:25:14.100 "claim_type": "exclusive_write", 00:25:14.100 "zoned": false, 00:25:14.100 "supported_io_types": { 00:25:14.100 "read": true, 00:25:14.100 "write": true, 00:25:14.100 "unmap": true, 00:25:14.100 "flush": true, 00:25:14.100 "reset": true, 00:25:14.100 "nvme_admin": false, 00:25:14.100 "nvme_io": false, 00:25:14.100 "nvme_io_md": false, 00:25:14.100 "write_zeroes": true, 00:25:14.100 "zcopy": true, 00:25:14.100 "get_zone_info": false, 00:25:14.100 "zone_management": false, 00:25:14.100 "zone_append": false, 00:25:14.100 "compare": false, 
00:25:14.100 "compare_and_write": false, 00:25:14.100 "abort": true, 00:25:14.100 "seek_hole": false, 00:25:14.100 "seek_data": false, 00:25:14.100 "copy": true, 00:25:14.100 "nvme_iov_md": false 00:25:14.100 }, 00:25:14.100 "memory_domains": [ 00:25:14.100 { 00:25:14.100 "dma_device_id": "system", 00:25:14.100 "dma_device_type": 1 00:25:14.100 }, 00:25:14.100 { 00:25:14.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:14.100 "dma_device_type": 2 00:25:14.100 } 00:25:14.100 ], 00:25:14.100 "driver_specific": {} 00:25:14.100 }' 00:25:14.100 02:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:14.100 02:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:14.100 02:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:14.100 02:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:14.358 02:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:14.358 02:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:14.358 02:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:14.358 02:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:14.358 02:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:14.358 02:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:14.358 02:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:14.358 02:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:14.358 02:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:14.925 [2024-07-11 02:31:05.249611] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:14.925 02:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:25:14.925 02:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:25:14.925 02:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:14.925 02:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:25:14.925 02:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:25:14.925 02:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:25:14.925 02:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:14.925 02:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:14.925 02:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:14.925 02:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:14.925 02:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:14.925 02:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:14.925 02:31:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:14.925 02:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:14.925 02:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:14.925 02:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.925 02:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:15.184 02:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:15.184 "name": "Existed_Raid", 00:25:15.184 "uuid": "f86bee4a-fbb0-4c4e-bca2-46856f674a01", 00:25:15.184 "strip_size_kb": 0, 00:25:15.184 "state": "online", 00:25:15.184 "raid_level": "raid1", 00:25:15.184 "superblock": true, 00:25:15.184 "num_base_bdevs": 4, 00:25:15.184 "num_base_bdevs_discovered": 3, 00:25:15.184 "num_base_bdevs_operational": 3, 00:25:15.184 "base_bdevs_list": [ 00:25:15.184 { 00:25:15.184 "name": null, 00:25:15.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.184 "is_configured": false, 00:25:15.184 "data_offset": 2048, 00:25:15.184 "data_size": 63488 00:25:15.184 }, 00:25:15.184 { 00:25:15.184 "name": "BaseBdev2", 00:25:15.184 "uuid": "b7c83b33-909a-44af-88ba-afe0da1ddcb3", 00:25:15.184 "is_configured": true, 00:25:15.184 "data_offset": 2048, 00:25:15.184 "data_size": 63488 00:25:15.184 }, 00:25:15.184 { 00:25:15.184 "name": "BaseBdev3", 00:25:15.184 "uuid": "f45d5810-25aa-49d8-8e2c-0f78040764c0", 00:25:15.184 "is_configured": true, 00:25:15.184 "data_offset": 2048, 00:25:15.184 "data_size": 63488 00:25:15.184 }, 00:25:15.184 { 00:25:15.184 "name": "BaseBdev4", 00:25:15.184 "uuid": "c2f6d915-07f6-4ce7-b5fc-4ee5b261e4ce", 00:25:15.184 "is_configured": true, 00:25:15.184 "data_offset": 2048, 00:25:15.184 "data_size": 63488 00:25:15.184 } 00:25:15.184 ] 00:25:15.184 }' 00:25:15.184 02:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:15.184 02:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:15.749 02:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:25:15.749 02:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:15.750 02:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.750 02:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:16.009 02:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:16.009 02:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:16.009 02:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:16.576 [2024-07-11 02:31:06.887911] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:16.576 02:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:16.576 02:31:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:16.576 02:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.576 02:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:16.835 02:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:16.835 02:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:16.835 02:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:17.402 [2024-07-11 02:31:07.658197] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:17.402 02:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:17.402 02:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:17.402 02:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:17.402 02:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.661 02:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:17.661 02:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:17.661 02:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:25:18.229 [2024-07-11 02:31:08.426638] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:18.229 [2024-07-11 02:31:08.426733] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:18.229 [2024-07-11 02:31:08.437741] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:18.229 [2024-07-11 02:31:08.437793] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:18.229 [2024-07-11 02:31:08.437805] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe4cf90 name Existed_Raid, state offline 00:25:18.229 02:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:18.229 02:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:18.229 02:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.229 02:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:25:18.488 02:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:25:18.488 02:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:25:18.488 02:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:25:18.488 02:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:25:18.488 02:31:08 
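# The @285 loop above tears the array down one member at a time. Because raid1 is redundant,
# the raid keeps reporting itself until the final removal (BaseBdev4, just below) drives it
# offline and destructs it. A condensed sketch of that loop, under the same rpc= shorthand:
rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for name in BaseBdev2 BaseBdev3 BaseBdev4; do
  $rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"]'   # "Existed_Raid": still assembled before each pull
  $rpc bdev_malloc_delete "$name"                       # pull the member's malloc backing
done
# after the last pull the list is empty; @293 pipes through 'select(.)' so that maps to ''
$rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)'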
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:18.488 02:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:18.747 BaseBdev2 00:25:18.747 02:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:25:18.747 02:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:25:18.747 02:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:18.747 02:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:18.747 02:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:18.747 02:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:18.747 02:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:19.005 02:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:19.005 [ 00:25:19.005 { 00:25:19.005 "name": "BaseBdev2", 00:25:19.005 "aliases": [ 00:25:19.005 "9a4c88f7-f86b-46a8-aad8-5a497cb63b61" 00:25:19.005 ], 00:25:19.005 "product_name": "Malloc disk", 00:25:19.005 "block_size": 512, 00:25:19.005 "num_blocks": 65536, 00:25:19.005 "uuid": "9a4c88f7-f86b-46a8-aad8-5a497cb63b61", 00:25:19.005 "assigned_rate_limits": { 00:25:19.005 "rw_ios_per_sec": 0, 00:25:19.005 "rw_mbytes_per_sec": 0, 00:25:19.005 "r_mbytes_per_sec": 0, 00:25:19.005 "w_mbytes_per_sec": 0 00:25:19.005 }, 00:25:19.005 "claimed": false, 00:25:19.005 "zoned": false, 00:25:19.005 "supported_io_types": { 00:25:19.005 "read": true, 00:25:19.005 "write": true, 00:25:19.005 "unmap": true, 00:25:19.005 "flush": true, 00:25:19.005 "reset": true, 00:25:19.005 "nvme_admin": false, 00:25:19.005 "nvme_io": false, 00:25:19.005 "nvme_io_md": false, 00:25:19.005 "write_zeroes": true, 00:25:19.005 "zcopy": true, 00:25:19.005 "get_zone_info": false, 00:25:19.005 "zone_management": false, 00:25:19.005 "zone_append": false, 00:25:19.005 "compare": false, 00:25:19.005 "compare_and_write": false, 00:25:19.005 "abort": true, 00:25:19.005 "seek_hole": false, 00:25:19.005 "seek_data": false, 00:25:19.005 "copy": true, 00:25:19.005 "nvme_iov_md": false 00:25:19.005 }, 00:25:19.005 "memory_domains": [ 00:25:19.005 { 00:25:19.006 "dma_device_id": "system", 00:25:19.006 "dma_device_type": 1 00:25:19.006 }, 00:25:19.006 { 00:25:19.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:19.006 "dma_device_type": 2 00:25:19.006 } 00:25:19.006 ], 00:25:19.006 "driver_specific": {} 00:25:19.006 } 00:25:19.006 ] 00:25:19.006 02:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:19.006 02:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:19.006 02:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:19.264 02:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:19.264 BaseBdev3 00:25:19.264 02:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:25:19.264 02:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:25:19.264 02:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:19.264 02:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:19.264 02:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:19.264 02:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:19.264 02:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:19.521 02:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:19.779 [ 00:25:19.779 { 00:25:19.779 "name": "BaseBdev3", 00:25:19.779 "aliases": [ 00:25:19.779 "3da58857-7f30-4cef-b44e-4d4bf3c098cd" 00:25:19.779 ], 00:25:19.779 "product_name": "Malloc disk", 00:25:19.779 "block_size": 512, 00:25:19.779 "num_blocks": 65536, 00:25:19.779 "uuid": "3da58857-7f30-4cef-b44e-4d4bf3c098cd", 00:25:19.779 "assigned_rate_limits": { 00:25:19.779 "rw_ios_per_sec": 0, 00:25:19.779 "rw_mbytes_per_sec": 0, 00:25:19.779 "r_mbytes_per_sec": 0, 00:25:19.779 "w_mbytes_per_sec": 0 00:25:19.779 }, 00:25:19.779 "claimed": false, 00:25:19.779 "zoned": false, 00:25:19.779 "supported_io_types": { 00:25:19.779 "read": true, 00:25:19.779 "write": true, 00:25:19.779 "unmap": true, 00:25:19.779 "flush": true, 00:25:19.779 "reset": true, 00:25:19.779 "nvme_admin": false, 00:25:19.779 "nvme_io": false, 00:25:19.779 "nvme_io_md": false, 00:25:19.779 "write_zeroes": true, 00:25:19.779 "zcopy": true, 00:25:19.779 "get_zone_info": false, 00:25:19.779 "zone_management": false, 00:25:19.779 "zone_append": false, 00:25:19.779 "compare": false, 00:25:19.779 "compare_and_write": false, 00:25:19.779 "abort": true, 00:25:19.779 "seek_hole": false, 00:25:19.779 "seek_data": false, 00:25:19.779 "copy": true, 00:25:19.779 "nvme_iov_md": false 00:25:19.779 }, 00:25:19.779 "memory_domains": [ 00:25:19.779 { 00:25:19.779 "dma_device_id": "system", 00:25:19.779 "dma_device_type": 1 00:25:19.779 }, 00:25:19.779 { 00:25:19.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:19.779 "dma_device_type": 2 00:25:19.779 } 00:25:19.779 ], 00:25:19.779 "driver_specific": {} 00:25:19.779 } 00:25:19.779 ] 00:25:19.779 02:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:19.779 02:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:19.779 02:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:19.779 02:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:20.036 BaseBdev4 00:25:20.036 02:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:25:20.036 02:31:10 
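# @301-@303 above recreate each member and block on it: waitforbdev issues
# bdev_wait_for_examine and then polls bdev_get_bdevs with its default 2000 ms timeout.
# A sketch for a single member, assuming the same rpc= shorthand:
rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_malloc_create 32 512 -b BaseBdev3   # 32 MiB at 512 B/block, i.e. the 65536 blocks shown above
$rpc bdev_wait_for_examine                    # let bdev examine callbacks settle first
$rpc bdev_get_bdevs -b BaseBdev3 -t 2000      # waits up to 2 s for the bdev to show up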
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:25:20.036 02:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:20.036 02:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:20.036 02:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:20.036 02:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:20.036 02:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:20.293 02:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:20.551 [ 00:25:20.551 { 00:25:20.551 "name": "BaseBdev4", 00:25:20.551 "aliases": [ 00:25:20.552 "6171810c-d423-47e1-8598-068222ed933e" 00:25:20.552 ], 00:25:20.552 "product_name": "Malloc disk", 00:25:20.552 "block_size": 512, 00:25:20.552 "num_blocks": 65536, 00:25:20.552 "uuid": "6171810c-d423-47e1-8598-068222ed933e", 00:25:20.552 "assigned_rate_limits": { 00:25:20.552 "rw_ios_per_sec": 0, 00:25:20.552 "rw_mbytes_per_sec": 0, 00:25:20.552 "r_mbytes_per_sec": 0, 00:25:20.552 "w_mbytes_per_sec": 0 00:25:20.552 }, 00:25:20.552 "claimed": false, 00:25:20.552 "zoned": false, 00:25:20.552 "supported_io_types": { 00:25:20.552 "read": true, 00:25:20.552 "write": true, 00:25:20.552 "unmap": true, 00:25:20.552 "flush": true, 00:25:20.552 "reset": true, 00:25:20.552 "nvme_admin": false, 00:25:20.552 "nvme_io": false, 00:25:20.552 "nvme_io_md": false, 00:25:20.552 "write_zeroes": true, 00:25:20.552 "zcopy": true, 00:25:20.552 "get_zone_info": false, 00:25:20.552 "zone_management": false, 00:25:20.552 "zone_append": false, 00:25:20.552 "compare": false, 00:25:20.552 "compare_and_write": false, 00:25:20.552 "abort": true, 00:25:20.552 "seek_hole": false, 00:25:20.552 "seek_data": false, 00:25:20.552 "copy": true, 00:25:20.552 "nvme_iov_md": false 00:25:20.552 }, 00:25:20.552 "memory_domains": [ 00:25:20.552 { 00:25:20.552 "dma_device_id": "system", 00:25:20.552 "dma_device_type": 1 00:25:20.552 }, 00:25:20.552 { 00:25:20.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:20.552 "dma_device_type": 2 00:25:20.552 } 00:25:20.552 ], 00:25:20.552 "driver_specific": {} 00:25:20.552 } 00:25:20.552 ] 00:25:20.552 02:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:20.552 02:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:20.552 02:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:20.552 02:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:20.809 [2024-07-11 02:31:11.121829] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:20.809 [2024-07-11 02:31:11.121874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:20.809 [2024-07-11 02:31:11.121894] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev2 is claimed 00:25:20.809 [2024-07-11 02:31:11.123250] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:20.809 [2024-07-11 02:31:11.123293] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:20.809 02:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:20.809 02:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:20.809 02:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:20.809 02:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:20.809 02:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:20.809 02:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:20.809 02:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:20.809 02:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:20.809 02:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:20.809 02:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:20.809 02:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.809 02:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:21.067 02:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:21.067 "name": "Existed_Raid", 00:25:21.067 "uuid": "c10974eb-b9a4-4bb2-92f4-570a7ded57f3", 00:25:21.067 "strip_size_kb": 0, 00:25:21.067 "state": "configuring", 00:25:21.067 "raid_level": "raid1", 00:25:21.067 "superblock": true, 00:25:21.067 "num_base_bdevs": 4, 00:25:21.067 "num_base_bdevs_discovered": 3, 00:25:21.067 "num_base_bdevs_operational": 4, 00:25:21.067 "base_bdevs_list": [ 00:25:21.067 { 00:25:21.067 "name": "BaseBdev1", 00:25:21.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:21.067 "is_configured": false, 00:25:21.067 "data_offset": 0, 00:25:21.067 "data_size": 0 00:25:21.067 }, 00:25:21.067 { 00:25:21.067 "name": "BaseBdev2", 00:25:21.067 "uuid": "9a4c88f7-f86b-46a8-aad8-5a497cb63b61", 00:25:21.067 "is_configured": true, 00:25:21.067 "data_offset": 2048, 00:25:21.067 "data_size": 63488 00:25:21.067 }, 00:25:21.067 { 00:25:21.067 "name": "BaseBdev3", 00:25:21.067 "uuid": "3da58857-7f30-4cef-b44e-4d4bf3c098cd", 00:25:21.067 "is_configured": true, 00:25:21.067 "data_offset": 2048, 00:25:21.067 "data_size": 63488 00:25:21.067 }, 00:25:21.067 { 00:25:21.067 "name": "BaseBdev4", 00:25:21.067 "uuid": "6171810c-d423-47e1-8598-068222ed933e", 00:25:21.067 "is_configured": true, 00:25:21.067 "data_offset": 2048, 00:25:21.067 "data_size": 63488 00:25:21.067 } 00:25:21.067 ] 00:25:21.067 }' 00:25:21.067 02:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:21.067 02:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:21.633 02:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # 
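# @305 above assembles the array with a superblock: -s enables the superblock (hence the
# test's _sb suffix and "superblock": true in the dumps), -r raid1 picks the level, -n names
# it. With BaseBdev1 still missing, only three members are claimed and the raid parks in
# "configuring". A sketch of the create-and-check pair:
rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
# -> "state": "configuring", "num_base_bdevs_discovered": 3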
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:21.891 [2024-07-11 02:31:12.212691] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:21.891 02:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:21.892 02:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:21.892 02:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:21.892 02:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:21.892 02:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:21.892 02:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:21.892 02:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:21.892 02:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:21.892 02:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:21.892 02:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:21.892 02:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.892 02:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:22.151 02:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:22.151 "name": "Existed_Raid", 00:25:22.151 "uuid": "c10974eb-b9a4-4bb2-92f4-570a7ded57f3", 00:25:22.151 "strip_size_kb": 0, 00:25:22.151 "state": "configuring", 00:25:22.151 "raid_level": "raid1", 00:25:22.151 "superblock": true, 00:25:22.151 "num_base_bdevs": 4, 00:25:22.151 "num_base_bdevs_discovered": 2, 00:25:22.151 "num_base_bdevs_operational": 4, 00:25:22.151 "base_bdevs_list": [ 00:25:22.151 { 00:25:22.151 "name": "BaseBdev1", 00:25:22.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.151 "is_configured": false, 00:25:22.151 "data_offset": 0, 00:25:22.151 "data_size": 0 00:25:22.151 }, 00:25:22.151 { 00:25:22.151 "name": null, 00:25:22.151 "uuid": "9a4c88f7-f86b-46a8-aad8-5a497cb63b61", 00:25:22.151 "is_configured": false, 00:25:22.151 "data_offset": 2048, 00:25:22.151 "data_size": 63488 00:25:22.151 }, 00:25:22.151 { 00:25:22.151 "name": "BaseBdev3", 00:25:22.151 "uuid": "3da58857-7f30-4cef-b44e-4d4bf3c098cd", 00:25:22.151 "is_configured": true, 00:25:22.151 "data_offset": 2048, 00:25:22.151 "data_size": 63488 00:25:22.151 }, 00:25:22.151 { 00:25:22.151 "name": "BaseBdev4", 00:25:22.151 "uuid": "6171810c-d423-47e1-8598-068222ed933e", 00:25:22.151 "is_configured": true, 00:25:22.151 "data_offset": 2048, 00:25:22.151 "data_size": 63488 00:25:22.151 } 00:25:22.151 ] 00:25:22.151 }' 00:25:22.151 02:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:22.151 02:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:22.719 02:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.719 02:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:22.978 02:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:25:22.978 02:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:23.546 [2024-07-11 02:31:13.820287] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:23.546 BaseBdev1 00:25:23.546 02:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:25:23.546 02:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:25:23.546 02:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:23.546 02:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:23.546 02:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:23.546 02:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:23.546 02:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:23.805 02:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:24.374 [ 00:25:24.374 { 00:25:24.374 "name": "BaseBdev1", 00:25:24.374 "aliases": [ 00:25:24.374 "de84aafa-a81c-43bc-905d-ea8266acc9fb" 00:25:24.374 ], 00:25:24.374 "product_name": "Malloc disk", 00:25:24.374 "block_size": 512, 00:25:24.374 "num_blocks": 65536, 00:25:24.374 "uuid": "de84aafa-a81c-43bc-905d-ea8266acc9fb", 00:25:24.374 "assigned_rate_limits": { 00:25:24.374 "rw_ios_per_sec": 0, 00:25:24.374 "rw_mbytes_per_sec": 0, 00:25:24.374 "r_mbytes_per_sec": 0, 00:25:24.374 "w_mbytes_per_sec": 0 00:25:24.374 }, 00:25:24.374 "claimed": true, 00:25:24.374 "claim_type": "exclusive_write", 00:25:24.374 "zoned": false, 00:25:24.374 "supported_io_types": { 00:25:24.374 "read": true, 00:25:24.374 "write": true, 00:25:24.374 "unmap": true, 00:25:24.374 "flush": true, 00:25:24.374 "reset": true, 00:25:24.374 "nvme_admin": false, 00:25:24.374 "nvme_io": false, 00:25:24.374 "nvme_io_md": false, 00:25:24.374 "write_zeroes": true, 00:25:24.374 "zcopy": true, 00:25:24.374 "get_zone_info": false, 00:25:24.374 "zone_management": false, 00:25:24.374 "zone_append": false, 00:25:24.374 "compare": false, 00:25:24.374 "compare_and_write": false, 00:25:24.374 "abort": true, 00:25:24.374 "seek_hole": false, 00:25:24.374 "seek_data": false, 00:25:24.374 "copy": true, 00:25:24.374 "nvme_iov_md": false 00:25:24.374 }, 00:25:24.374 "memory_domains": [ 00:25:24.374 { 00:25:24.374 "dma_device_id": "system", 00:25:24.374 "dma_device_type": 1 00:25:24.374 }, 00:25:24.374 { 00:25:24.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:24.374 "dma_device_type": 2 00:25:24.374 } 00:25:24.374 ], 00:25:24.374 "driver_specific": {} 00:25:24.374 } 00:25:24.374 ] 00:25:24.374 02:31:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:24.374 02:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:24.374 02:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:24.374 02:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:24.374 02:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:24.374 02:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:24.374 02:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:24.374 02:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:24.374 02:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:24.374 02:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:24.374 02:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:24.374 02:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:24.374 02:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:24.633 02:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:24.633 "name": "Existed_Raid", 00:25:24.633 "uuid": "c10974eb-b9a4-4bb2-92f4-570a7ded57f3", 00:25:24.633 "strip_size_kb": 0, 00:25:24.633 "state": "configuring", 00:25:24.633 "raid_level": "raid1", 00:25:24.633 "superblock": true, 00:25:24.633 "num_base_bdevs": 4, 00:25:24.633 "num_base_bdevs_discovered": 3, 00:25:24.634 "num_base_bdevs_operational": 4, 00:25:24.634 "base_bdevs_list": [ 00:25:24.634 { 00:25:24.634 "name": "BaseBdev1", 00:25:24.634 "uuid": "de84aafa-a81c-43bc-905d-ea8266acc9fb", 00:25:24.634 "is_configured": true, 00:25:24.634 "data_offset": 2048, 00:25:24.634 "data_size": 63488 00:25:24.634 }, 00:25:24.634 { 00:25:24.634 "name": null, 00:25:24.634 "uuid": "9a4c88f7-f86b-46a8-aad8-5a497cb63b61", 00:25:24.634 "is_configured": false, 00:25:24.634 "data_offset": 2048, 00:25:24.634 "data_size": 63488 00:25:24.634 }, 00:25:24.634 { 00:25:24.634 "name": "BaseBdev3", 00:25:24.634 "uuid": "3da58857-7f30-4cef-b44e-4d4bf3c098cd", 00:25:24.634 "is_configured": true, 00:25:24.634 "data_offset": 2048, 00:25:24.634 "data_size": 63488 00:25:24.634 }, 00:25:24.634 { 00:25:24.634 "name": "BaseBdev4", 00:25:24.634 "uuid": "6171810c-d423-47e1-8598-068222ed933e", 00:25:24.634 "is_configured": true, 00:25:24.634 "data_offset": 2048, 00:25:24.634 "data_size": 63488 00:25:24.634 } 00:25:24.634 ] 00:25:24.634 }' 00:25:24.634 02:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:24.634 02:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:25.202 02:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:25.202 02:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.461 02:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:25:25.461 02:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:25:25.720 [2024-07-11 02:31:15.937952] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:25.720 02:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:25.720 02:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:25.720 02:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:25.720 02:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:25.720 02:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:25.720 02:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:25.720 02:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:25.721 02:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:25.721 02:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:25.721 02:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:25.721 02:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.721 02:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:25.980 02:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:25.980 "name": "Existed_Raid", 00:25:25.980 "uuid": "c10974eb-b9a4-4bb2-92f4-570a7ded57f3", 00:25:25.980 "strip_size_kb": 0, 00:25:25.980 "state": "configuring", 00:25:25.980 "raid_level": "raid1", 00:25:25.980 "superblock": true, 00:25:25.980 "num_base_bdevs": 4, 00:25:25.980 "num_base_bdevs_discovered": 2, 00:25:25.980 "num_base_bdevs_operational": 4, 00:25:25.980 "base_bdevs_list": [ 00:25:25.980 { 00:25:25.980 "name": "BaseBdev1", 00:25:25.980 "uuid": "de84aafa-a81c-43bc-905d-ea8266acc9fb", 00:25:25.980 "is_configured": true, 00:25:25.980 "data_offset": 2048, 00:25:25.980 "data_size": 63488 00:25:25.980 }, 00:25:25.980 { 00:25:25.980 "name": null, 00:25:25.980 "uuid": "9a4c88f7-f86b-46a8-aad8-5a497cb63b61", 00:25:25.980 "is_configured": false, 00:25:25.980 "data_offset": 2048, 00:25:25.980 "data_size": 63488 00:25:25.980 }, 00:25:25.980 { 00:25:25.980 "name": null, 00:25:25.980 "uuid": "3da58857-7f30-4cef-b44e-4d4bf3c098cd", 00:25:25.980 "is_configured": false, 00:25:25.980 "data_offset": 2048, 00:25:25.980 "data_size": 63488 00:25:25.980 }, 00:25:25.980 { 00:25:25.980 "name": "BaseBdev4", 00:25:25.980 "uuid": "6171810c-d423-47e1-8598-068222ed933e", 00:25:25.980 "is_configured": true, 00:25:25.980 "data_offset": 2048, 00:25:25.980 "data_size": 63488 00:25:25.980 } 00:25:25.980 ] 00:25:25.980 }' 00:25:25.980 02:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
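# @315-@317 flip a slot the other way: confirm slot 0 is configured (BaseBdev1 was just
# created), then hot-remove BaseBdev3 with the raid-specific RPC rather than deleting its
# malloc backing; @321 below re-adds it. Sketch of the remove/re-add pair:
rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_raid_remove_base_bdev BaseBdev3                                   # detach, keep the bdev itself
$rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # -> false
$rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev3                         # reclaim into slot 2
$rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # -> true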
00:25:25.980 02:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:26.549 02:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.549 02:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:26.809 02:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:25:26.809 02:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:27.068 [2024-07-11 02:31:17.269509] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:27.068 02:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:27.068 02:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:27.068 02:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:27.068 02:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:27.068 02:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:27.068 02:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:27.068 02:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:27.068 02:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:27.068 02:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:27.068 02:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:27.068 02:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.068 02:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:27.327 02:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:27.327 "name": "Existed_Raid", 00:25:27.327 "uuid": "c10974eb-b9a4-4bb2-92f4-570a7ded57f3", 00:25:27.327 "strip_size_kb": 0, 00:25:27.327 "state": "configuring", 00:25:27.327 "raid_level": "raid1", 00:25:27.327 "superblock": true, 00:25:27.327 "num_base_bdevs": 4, 00:25:27.327 "num_base_bdevs_discovered": 3, 00:25:27.327 "num_base_bdevs_operational": 4, 00:25:27.327 "base_bdevs_list": [ 00:25:27.327 { 00:25:27.327 "name": "BaseBdev1", 00:25:27.327 "uuid": "de84aafa-a81c-43bc-905d-ea8266acc9fb", 00:25:27.327 "is_configured": true, 00:25:27.327 "data_offset": 2048, 00:25:27.327 "data_size": 63488 00:25:27.327 }, 00:25:27.327 { 00:25:27.327 "name": null, 00:25:27.327 "uuid": "9a4c88f7-f86b-46a8-aad8-5a497cb63b61", 00:25:27.327 "is_configured": false, 00:25:27.327 "data_offset": 2048, 00:25:27.327 "data_size": 63488 00:25:27.327 }, 00:25:27.327 { 00:25:27.327 "name": "BaseBdev3", 00:25:27.327 "uuid": "3da58857-7f30-4cef-b44e-4d4bf3c098cd", 00:25:27.327 "is_configured": true, 00:25:27.327 
"data_offset": 2048, 00:25:27.327 "data_size": 63488 00:25:27.327 }, 00:25:27.327 { 00:25:27.327 "name": "BaseBdev4", 00:25:27.327 "uuid": "6171810c-d423-47e1-8598-068222ed933e", 00:25:27.327 "is_configured": true, 00:25:27.327 "data_offset": 2048, 00:25:27.327 "data_size": 63488 00:25:27.327 } 00:25:27.327 ] 00:25:27.327 }' 00:25:27.327 02:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:27.327 02:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:27.895 02:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.895 02:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:28.153 02:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:25:28.153 02:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:28.412 [2024-07-11 02:31:18.617302] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:28.412 02:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:28.412 02:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:28.412 02:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:28.412 02:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:28.412 02:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:28.412 02:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:28.412 02:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:28.412 02:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:28.412 02:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:28.412 02:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:28.412 02:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.412 02:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:28.673 02:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:28.673 "name": "Existed_Raid", 00:25:28.673 "uuid": "c10974eb-b9a4-4bb2-92f4-570a7ded57f3", 00:25:28.673 "strip_size_kb": 0, 00:25:28.673 "state": "configuring", 00:25:28.673 "raid_level": "raid1", 00:25:28.673 "superblock": true, 00:25:28.673 "num_base_bdevs": 4, 00:25:28.673 "num_base_bdevs_discovered": 2, 00:25:28.673 "num_base_bdevs_operational": 4, 00:25:28.673 "base_bdevs_list": [ 00:25:28.673 { 00:25:28.673 "name": null, 00:25:28.673 "uuid": "de84aafa-a81c-43bc-905d-ea8266acc9fb", 00:25:28.673 "is_configured": false, 00:25:28.673 "data_offset": 2048, 00:25:28.673 "data_size": 63488 00:25:28.673 }, 
00:25:28.673 { 00:25:28.673 "name": null, 00:25:28.673 "uuid": "9a4c88f7-f86b-46a8-aad8-5a497cb63b61", 00:25:28.673 "is_configured": false, 00:25:28.673 "data_offset": 2048, 00:25:28.673 "data_size": 63488 00:25:28.673 }, 00:25:28.673 { 00:25:28.673 "name": "BaseBdev3", 00:25:28.673 "uuid": "3da58857-7f30-4cef-b44e-4d4bf3c098cd", 00:25:28.673 "is_configured": true, 00:25:28.673 "data_offset": 2048, 00:25:28.674 "data_size": 63488 00:25:28.674 }, 00:25:28.674 { 00:25:28.674 "name": "BaseBdev4", 00:25:28.674 "uuid": "6171810c-d423-47e1-8598-068222ed933e", 00:25:28.674 "is_configured": true, 00:25:28.674 "data_offset": 2048, 00:25:28.674 "data_size": 63488 00:25:28.674 } 00:25:28.674 ] 00:25:28.674 }' 00:25:28.674 02:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:28.674 02:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.241 02:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.241 02:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:29.500 02:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:25:29.500 02:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:29.759 [2024-07-11 02:31:19.964164] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:29.759 02:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:29.759 02:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:29.759 02:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:29.759 02:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:29.759 02:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:29.759 02:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:29.759 02:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:29.759 02:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:29.759 02:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:29.759 02:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:29.759 02:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.759 02:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:30.018 02:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:30.018 "name": "Existed_Raid", 00:25:30.018 "uuid": "c10974eb-b9a4-4bb2-92f4-570a7ded57f3", 00:25:30.018 "strip_size_kb": 0, 00:25:30.018 "state": "configuring", 00:25:30.018 "raid_level": 
"raid1", 00:25:30.018 "superblock": true, 00:25:30.018 "num_base_bdevs": 4, 00:25:30.018 "num_base_bdevs_discovered": 3, 00:25:30.018 "num_base_bdevs_operational": 4, 00:25:30.018 "base_bdevs_list": [ 00:25:30.018 { 00:25:30.018 "name": null, 00:25:30.018 "uuid": "de84aafa-a81c-43bc-905d-ea8266acc9fb", 00:25:30.018 "is_configured": false, 00:25:30.018 "data_offset": 2048, 00:25:30.018 "data_size": 63488 00:25:30.018 }, 00:25:30.018 { 00:25:30.018 "name": "BaseBdev2", 00:25:30.018 "uuid": "9a4c88f7-f86b-46a8-aad8-5a497cb63b61", 00:25:30.018 "is_configured": true, 00:25:30.018 "data_offset": 2048, 00:25:30.018 "data_size": 63488 00:25:30.018 }, 00:25:30.018 { 00:25:30.018 "name": "BaseBdev3", 00:25:30.018 "uuid": "3da58857-7f30-4cef-b44e-4d4bf3c098cd", 00:25:30.018 "is_configured": true, 00:25:30.018 "data_offset": 2048, 00:25:30.018 "data_size": 63488 00:25:30.018 }, 00:25:30.018 { 00:25:30.018 "name": "BaseBdev4", 00:25:30.018 "uuid": "6171810c-d423-47e1-8598-068222ed933e", 00:25:30.018 "is_configured": true, 00:25:30.018 "data_offset": 2048, 00:25:30.018 "data_size": 63488 00:25:30.018 } 00:25:30.018 ] 00:25:30.018 }' 00:25:30.018 02:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:30.018 02:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.585 02:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.585 02:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:30.844 02:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:25:30.844 02:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.844 02:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:31.103 02:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u de84aafa-a81c-43bc-905d-ea8266acc9fb 00:25:31.362 [2024-07-11 02:31:21.587717] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:31.362 [2024-07-11 02:31:21.587885] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xc9d160 00:25:31.362 [2024-07-11 02:31:21.587899] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:31.362 [2024-07-11 02:31:21.588074] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc86b50 00:25:31.362 [2024-07-11 02:31:21.588194] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc9d160 00:25:31.362 [2024-07-11 02:31:21.588204] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xc9d160 00:25:31.362 [2024-07-11 02:31:21.588294] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:31.362 NewBaseBdev 00:25:31.362 02:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:25:31.362 02:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:25:31.362 
02:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:31.362 02:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:31.362 02:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:31.362 02:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:31.362 02:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:31.622 02:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:31.881 [ 00:25:31.881 { 00:25:31.881 "name": "NewBaseBdev", 00:25:31.881 "aliases": [ 00:25:31.881 "de84aafa-a81c-43bc-905d-ea8266acc9fb" 00:25:31.881 ], 00:25:31.881 "product_name": "Malloc disk", 00:25:31.881 "block_size": 512, 00:25:31.881 "num_blocks": 65536, 00:25:31.881 "uuid": "de84aafa-a81c-43bc-905d-ea8266acc9fb", 00:25:31.881 "assigned_rate_limits": { 00:25:31.881 "rw_ios_per_sec": 0, 00:25:31.881 "rw_mbytes_per_sec": 0, 00:25:31.881 "r_mbytes_per_sec": 0, 00:25:31.881 "w_mbytes_per_sec": 0 00:25:31.881 }, 00:25:31.881 "claimed": true, 00:25:31.881 "claim_type": "exclusive_write", 00:25:31.881 "zoned": false, 00:25:31.881 "supported_io_types": { 00:25:31.881 "read": true, 00:25:31.881 "write": true, 00:25:31.881 "unmap": true, 00:25:31.881 "flush": true, 00:25:31.881 "reset": true, 00:25:31.881 "nvme_admin": false, 00:25:31.881 "nvme_io": false, 00:25:31.881 "nvme_io_md": false, 00:25:31.881 "write_zeroes": true, 00:25:31.881 "zcopy": true, 00:25:31.881 "get_zone_info": false, 00:25:31.881 "zone_management": false, 00:25:31.881 "zone_append": false, 00:25:31.881 "compare": false, 00:25:31.881 "compare_and_write": false, 00:25:31.881 "abort": true, 00:25:31.881 "seek_hole": false, 00:25:31.881 "seek_data": false, 00:25:31.881 "copy": true, 00:25:31.881 "nvme_iov_md": false 00:25:31.881 }, 00:25:31.881 "memory_domains": [ 00:25:31.881 { 00:25:31.881 "dma_device_id": "system", 00:25:31.881 "dma_device_type": 1 00:25:31.881 }, 00:25:31.881 { 00:25:31.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:31.881 "dma_device_type": 2 00:25:31.881 } 00:25:31.881 ], 00:25:31.881 "driver_specific": {} 00:25:31.881 } 00:25:31.881 ] 00:25:31.882 02:31:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:31.882 02:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:31.882 02:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:31.882 02:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:31.882 02:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:31.882 02:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:31.882 02:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:31.882 02:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:31.882 02:31:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:31.882 02:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:31.882 02:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:31.882 02:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:31.882 02:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:32.142 02:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:32.142 "name": "Existed_Raid", 00:25:32.142 "uuid": "c10974eb-b9a4-4bb2-92f4-570a7ded57f3", 00:25:32.142 "strip_size_kb": 0, 00:25:32.142 "state": "online", 00:25:32.142 "raid_level": "raid1", 00:25:32.142 "superblock": true, 00:25:32.142 "num_base_bdevs": 4, 00:25:32.142 "num_base_bdevs_discovered": 4, 00:25:32.142 "num_base_bdevs_operational": 4, 00:25:32.142 "base_bdevs_list": [ 00:25:32.142 { 00:25:32.142 "name": "NewBaseBdev", 00:25:32.142 "uuid": "de84aafa-a81c-43bc-905d-ea8266acc9fb", 00:25:32.142 "is_configured": true, 00:25:32.142 "data_offset": 2048, 00:25:32.142 "data_size": 63488 00:25:32.142 }, 00:25:32.142 { 00:25:32.142 "name": "BaseBdev2", 00:25:32.142 "uuid": "9a4c88f7-f86b-46a8-aad8-5a497cb63b61", 00:25:32.142 "is_configured": true, 00:25:32.142 "data_offset": 2048, 00:25:32.142 "data_size": 63488 00:25:32.142 }, 00:25:32.142 { 00:25:32.142 "name": "BaseBdev3", 00:25:32.142 "uuid": "3da58857-7f30-4cef-b44e-4d4bf3c098cd", 00:25:32.142 "is_configured": true, 00:25:32.142 "data_offset": 2048, 00:25:32.142 "data_size": 63488 00:25:32.142 }, 00:25:32.142 { 00:25:32.142 "name": "BaseBdev4", 00:25:32.142 "uuid": "6171810c-d423-47e1-8598-068222ed933e", 00:25:32.142 "is_configured": true, 00:25:32.142 "data_offset": 2048, 00:25:32.142 "data_size": 63488 00:25:32.142 } 00:25:32.142 ] 00:25:32.142 }' 00:25:32.142 02:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:32.142 02:31:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:32.711 02:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:25:32.711 02:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:32.711 02:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:32.711 02:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:32.711 02:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:32.711 02:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:25:32.711 02:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:32.711 02:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:32.971 [2024-07-11 02:31:23.244472] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:32.971 02:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:32.971 "name": "Existed_Raid", 00:25:32.971 "aliases": [ 
00:25:32.971 "c10974eb-b9a4-4bb2-92f4-570a7ded57f3" 00:25:32.971 ], 00:25:32.971 "product_name": "Raid Volume", 00:25:32.971 "block_size": 512, 00:25:32.971 "num_blocks": 63488, 00:25:32.971 "uuid": "c10974eb-b9a4-4bb2-92f4-570a7ded57f3", 00:25:32.971 "assigned_rate_limits": { 00:25:32.971 "rw_ios_per_sec": 0, 00:25:32.971 "rw_mbytes_per_sec": 0, 00:25:32.971 "r_mbytes_per_sec": 0, 00:25:32.971 "w_mbytes_per_sec": 0 00:25:32.971 }, 00:25:32.971 "claimed": false, 00:25:32.971 "zoned": false, 00:25:32.971 "supported_io_types": { 00:25:32.971 "read": true, 00:25:32.971 "write": true, 00:25:32.971 "unmap": false, 00:25:32.971 "flush": false, 00:25:32.971 "reset": true, 00:25:32.971 "nvme_admin": false, 00:25:32.971 "nvme_io": false, 00:25:32.971 "nvme_io_md": false, 00:25:32.971 "write_zeroes": true, 00:25:32.971 "zcopy": false, 00:25:32.971 "get_zone_info": false, 00:25:32.971 "zone_management": false, 00:25:32.971 "zone_append": false, 00:25:32.971 "compare": false, 00:25:32.971 "compare_and_write": false, 00:25:32.971 "abort": false, 00:25:32.971 "seek_hole": false, 00:25:32.971 "seek_data": false, 00:25:32.971 "copy": false, 00:25:32.971 "nvme_iov_md": false 00:25:32.971 }, 00:25:32.971 "memory_domains": [ 00:25:32.971 { 00:25:32.971 "dma_device_id": "system", 00:25:32.971 "dma_device_type": 1 00:25:32.971 }, 00:25:32.971 { 00:25:32.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:32.971 "dma_device_type": 2 00:25:32.971 }, 00:25:32.971 { 00:25:32.971 "dma_device_id": "system", 00:25:32.971 "dma_device_type": 1 00:25:32.971 }, 00:25:32.971 { 00:25:32.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:32.971 "dma_device_type": 2 00:25:32.971 }, 00:25:32.971 { 00:25:32.971 "dma_device_id": "system", 00:25:32.971 "dma_device_type": 1 00:25:32.971 }, 00:25:32.971 { 00:25:32.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:32.971 "dma_device_type": 2 00:25:32.971 }, 00:25:32.971 { 00:25:32.971 "dma_device_id": "system", 00:25:32.971 "dma_device_type": 1 00:25:32.971 }, 00:25:32.971 { 00:25:32.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:32.971 "dma_device_type": 2 00:25:32.971 } 00:25:32.971 ], 00:25:32.971 "driver_specific": { 00:25:32.971 "raid": { 00:25:32.971 "uuid": "c10974eb-b9a4-4bb2-92f4-570a7ded57f3", 00:25:32.971 "strip_size_kb": 0, 00:25:32.971 "state": "online", 00:25:32.971 "raid_level": "raid1", 00:25:32.971 "superblock": true, 00:25:32.971 "num_base_bdevs": 4, 00:25:32.971 "num_base_bdevs_discovered": 4, 00:25:32.971 "num_base_bdevs_operational": 4, 00:25:32.971 "base_bdevs_list": [ 00:25:32.971 { 00:25:32.971 "name": "NewBaseBdev", 00:25:32.971 "uuid": "de84aafa-a81c-43bc-905d-ea8266acc9fb", 00:25:32.971 "is_configured": true, 00:25:32.971 "data_offset": 2048, 00:25:32.971 "data_size": 63488 00:25:32.971 }, 00:25:32.971 { 00:25:32.971 "name": "BaseBdev2", 00:25:32.971 "uuid": "9a4c88f7-f86b-46a8-aad8-5a497cb63b61", 00:25:32.971 "is_configured": true, 00:25:32.971 "data_offset": 2048, 00:25:32.971 "data_size": 63488 00:25:32.971 }, 00:25:32.971 { 00:25:32.971 "name": "BaseBdev3", 00:25:32.971 "uuid": "3da58857-7f30-4cef-b44e-4d4bf3c098cd", 00:25:32.971 "is_configured": true, 00:25:32.971 "data_offset": 2048, 00:25:32.971 "data_size": 63488 00:25:32.971 }, 00:25:32.971 { 00:25:32.971 "name": "BaseBdev4", 00:25:32.971 "uuid": "6171810c-d423-47e1-8598-068222ed933e", 00:25:32.971 "is_configured": true, 00:25:32.971 "data_offset": 2048, 00:25:32.971 "data_size": 63488 00:25:32.971 } 00:25:32.971 ] 00:25:32.971 } 00:25:32.971 } 00:25:32.971 }' 00:25:32.971 02:31:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:32.971 02:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:25:32.971 BaseBdev2 00:25:32.971 BaseBdev3 00:25:32.971 BaseBdev4' 00:25:32.971 02:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:32.971 02:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:32.971 02:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:25:33.231 02:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:33.231 "name": "NewBaseBdev", 00:25:33.231 "aliases": [ 00:25:33.231 "de84aafa-a81c-43bc-905d-ea8266acc9fb" 00:25:33.231 ], 00:25:33.231 "product_name": "Malloc disk", 00:25:33.231 "block_size": 512, 00:25:33.231 "num_blocks": 65536, 00:25:33.231 "uuid": "de84aafa-a81c-43bc-905d-ea8266acc9fb", 00:25:33.231 "assigned_rate_limits": { 00:25:33.231 "rw_ios_per_sec": 0, 00:25:33.231 "rw_mbytes_per_sec": 0, 00:25:33.231 "r_mbytes_per_sec": 0, 00:25:33.231 "w_mbytes_per_sec": 0 00:25:33.231 }, 00:25:33.231 "claimed": true, 00:25:33.231 "claim_type": "exclusive_write", 00:25:33.231 "zoned": false, 00:25:33.231 "supported_io_types": { 00:25:33.231 "read": true, 00:25:33.231 "write": true, 00:25:33.231 "unmap": true, 00:25:33.231 "flush": true, 00:25:33.231 "reset": true, 00:25:33.231 "nvme_admin": false, 00:25:33.231 "nvme_io": false, 00:25:33.231 "nvme_io_md": false, 00:25:33.231 "write_zeroes": true, 00:25:33.231 "zcopy": true, 00:25:33.231 "get_zone_info": false, 00:25:33.231 "zone_management": false, 00:25:33.231 "zone_append": false, 00:25:33.231 "compare": false, 00:25:33.231 "compare_and_write": false, 00:25:33.231 "abort": true, 00:25:33.231 "seek_hole": false, 00:25:33.231 "seek_data": false, 00:25:33.231 "copy": true, 00:25:33.231 "nvme_iov_md": false 00:25:33.231 }, 00:25:33.231 "memory_domains": [ 00:25:33.231 { 00:25:33.231 "dma_device_id": "system", 00:25:33.231 "dma_device_type": 1 00:25:33.231 }, 00:25:33.231 { 00:25:33.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:33.231 "dma_device_type": 2 00:25:33.231 } 00:25:33.231 ], 00:25:33.231 "driver_specific": {} 00:25:33.231 }' 00:25:33.231 02:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:33.231 02:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:33.489 02:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:33.489 02:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:33.489 02:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:33.489 02:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:33.489 02:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:33.489 02:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:33.489 02:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:33.489 02:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:25:33.489 02:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:33.748 02:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:33.748 02:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:33.748 02:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:33.748 02:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:34.008 02:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:34.008 "name": "BaseBdev2", 00:25:34.008 "aliases": [ 00:25:34.008 "9a4c88f7-f86b-46a8-aad8-5a497cb63b61" 00:25:34.008 ], 00:25:34.008 "product_name": "Malloc disk", 00:25:34.008 "block_size": 512, 00:25:34.008 "num_blocks": 65536, 00:25:34.008 "uuid": "9a4c88f7-f86b-46a8-aad8-5a497cb63b61", 00:25:34.008 "assigned_rate_limits": { 00:25:34.008 "rw_ios_per_sec": 0, 00:25:34.008 "rw_mbytes_per_sec": 0, 00:25:34.008 "r_mbytes_per_sec": 0, 00:25:34.008 "w_mbytes_per_sec": 0 00:25:34.008 }, 00:25:34.008 "claimed": true, 00:25:34.008 "claim_type": "exclusive_write", 00:25:34.008 "zoned": false, 00:25:34.008 "supported_io_types": { 00:25:34.008 "read": true, 00:25:34.008 "write": true, 00:25:34.008 "unmap": true, 00:25:34.008 "flush": true, 00:25:34.008 "reset": true, 00:25:34.008 "nvme_admin": false, 00:25:34.008 "nvme_io": false, 00:25:34.008 "nvme_io_md": false, 00:25:34.008 "write_zeroes": true, 00:25:34.008 "zcopy": true, 00:25:34.008 "get_zone_info": false, 00:25:34.008 "zone_management": false, 00:25:34.008 "zone_append": false, 00:25:34.008 "compare": false, 00:25:34.008 "compare_and_write": false, 00:25:34.008 "abort": true, 00:25:34.008 "seek_hole": false, 00:25:34.008 "seek_data": false, 00:25:34.008 "copy": true, 00:25:34.008 "nvme_iov_md": false 00:25:34.008 }, 00:25:34.008 "memory_domains": [ 00:25:34.008 { 00:25:34.008 "dma_device_id": "system", 00:25:34.008 "dma_device_type": 1 00:25:34.008 }, 00:25:34.008 { 00:25:34.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:34.008 "dma_device_type": 2 00:25:34.008 } 00:25:34.008 ], 00:25:34.008 "driver_specific": {} 00:25:34.008 }' 00:25:34.008 02:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:34.008 02:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:34.008 02:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:34.008 02:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:34.008 02:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:34.008 02:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:34.008 02:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:34.008 02:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:34.267 02:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:34.267 02:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:34.267 02:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:34.267 
02:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:34.267 02:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:34.267 02:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:34.267 02:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:34.526 02:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:34.526 "name": "BaseBdev3", 00:25:34.526 "aliases": [ 00:25:34.526 "3da58857-7f30-4cef-b44e-4d4bf3c098cd" 00:25:34.526 ], 00:25:34.526 "product_name": "Malloc disk", 00:25:34.526 "block_size": 512, 00:25:34.526 "num_blocks": 65536, 00:25:34.526 "uuid": "3da58857-7f30-4cef-b44e-4d4bf3c098cd", 00:25:34.526 "assigned_rate_limits": { 00:25:34.526 "rw_ios_per_sec": 0, 00:25:34.526 "rw_mbytes_per_sec": 0, 00:25:34.526 "r_mbytes_per_sec": 0, 00:25:34.526 "w_mbytes_per_sec": 0 00:25:34.526 }, 00:25:34.526 "claimed": true, 00:25:34.526 "claim_type": "exclusive_write", 00:25:34.526 "zoned": false, 00:25:34.526 "supported_io_types": { 00:25:34.526 "read": true, 00:25:34.526 "write": true, 00:25:34.526 "unmap": true, 00:25:34.526 "flush": true, 00:25:34.526 "reset": true, 00:25:34.526 "nvme_admin": false, 00:25:34.526 "nvme_io": false, 00:25:34.526 "nvme_io_md": false, 00:25:34.526 "write_zeroes": true, 00:25:34.526 "zcopy": true, 00:25:34.526 "get_zone_info": false, 00:25:34.526 "zone_management": false, 00:25:34.526 "zone_append": false, 00:25:34.526 "compare": false, 00:25:34.526 "compare_and_write": false, 00:25:34.526 "abort": true, 00:25:34.526 "seek_hole": false, 00:25:34.526 "seek_data": false, 00:25:34.526 "copy": true, 00:25:34.526 "nvme_iov_md": false 00:25:34.526 }, 00:25:34.526 "memory_domains": [ 00:25:34.526 { 00:25:34.526 "dma_device_id": "system", 00:25:34.526 "dma_device_type": 1 00:25:34.526 }, 00:25:34.526 { 00:25:34.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:34.526 "dma_device_type": 2 00:25:34.526 } 00:25:34.526 ], 00:25:34.526 "driver_specific": {} 00:25:34.526 }' 00:25:34.526 02:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:34.526 02:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:34.526 02:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:34.526 02:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:34.526 02:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:34.784 02:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:34.785 02:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:34.785 02:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:34.785 02:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:34.785 02:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:34.785 02:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:34.785 02:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:34.785 02:31:25 
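The repeated jq checks above are the per-bdev half of verify_raid_bdev_properties: for every configured base bdev it asserts the same four invariants (512-byte blocks, no metadata region, no metadata interleave, no DIF). A hedged sketch of that loop, reusing $rpc and $base_bdev_names from the sketch earlier; the loop body mirrors the bdev_raid.sh@203-208 entries traced in this log:

    for name in $base_bdev_names; do
        base_bdev_info=$($rpc -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b "$name" | jq '.[]')
        [[ $(jq .block_size <<< "$base_bdev_info") == 512 ]]
        [[ $(jq .md_size <<< "$base_bdev_info") == null ]]
        [[ $(jq .md_interleave <<< "$base_bdev_info") == null ]]
        [[ $(jq .dif_type <<< "$base_bdev_info") == null ]]
    done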
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:34.785 02:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:34.785 02:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:35.044 02:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:35.044 "name": "BaseBdev4", 00:25:35.044 "aliases": [ 00:25:35.044 "6171810c-d423-47e1-8598-068222ed933e" 00:25:35.044 ], 00:25:35.044 "product_name": "Malloc disk", 00:25:35.044 "block_size": 512, 00:25:35.044 "num_blocks": 65536, 00:25:35.044 "uuid": "6171810c-d423-47e1-8598-068222ed933e", 00:25:35.044 "assigned_rate_limits": { 00:25:35.044 "rw_ios_per_sec": 0, 00:25:35.044 "rw_mbytes_per_sec": 0, 00:25:35.044 "r_mbytes_per_sec": 0, 00:25:35.044 "w_mbytes_per_sec": 0 00:25:35.044 }, 00:25:35.044 "claimed": true, 00:25:35.044 "claim_type": "exclusive_write", 00:25:35.044 "zoned": false, 00:25:35.044 "supported_io_types": { 00:25:35.044 "read": true, 00:25:35.044 "write": true, 00:25:35.044 "unmap": true, 00:25:35.044 "flush": true, 00:25:35.044 "reset": true, 00:25:35.044 "nvme_admin": false, 00:25:35.044 "nvme_io": false, 00:25:35.044 "nvme_io_md": false, 00:25:35.044 "write_zeroes": true, 00:25:35.044 "zcopy": true, 00:25:35.044 "get_zone_info": false, 00:25:35.044 "zone_management": false, 00:25:35.044 "zone_append": false, 00:25:35.044 "compare": false, 00:25:35.044 "compare_and_write": false, 00:25:35.044 "abort": true, 00:25:35.044 "seek_hole": false, 00:25:35.044 "seek_data": false, 00:25:35.044 "copy": true, 00:25:35.044 "nvme_iov_md": false 00:25:35.044 }, 00:25:35.044 "memory_domains": [ 00:25:35.044 { 00:25:35.044 "dma_device_id": "system", 00:25:35.044 "dma_device_type": 1 00:25:35.044 }, 00:25:35.044 { 00:25:35.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:35.044 "dma_device_type": 2 00:25:35.044 } 00:25:35.044 ], 00:25:35.044 "driver_specific": {} 00:25:35.044 }' 00:25:35.044 02:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:35.303 02:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:35.303 02:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:35.303 02:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:35.303 02:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:35.303 02:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:35.303 02:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:35.303 02:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:35.303 02:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:35.303 02:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:35.562 02:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:35.562 02:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:35.562 02:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:35.821 [2024-07-11 02:31:26.007487] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:35.821 [2024-07-11 02:31:26.007516] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:35.821 [2024-07-11 02:31:26.007572] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:35.821 [2024-07-11 02:31:26.007842] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:35.821 [2024-07-11 02:31:26.007862] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc9d160 name Existed_Raid, state offline 00:25:35.821 02:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1994087 00:25:35.821 02:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1994087 ']' 00:25:35.821 02:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 1994087 00:25:35.821 02:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:25:35.821 02:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:35.821 02:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1994087 00:25:35.822 02:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:35.822 02:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:35.822 02:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1994087' 00:25:35.822 killing process with pid 1994087 00:25:35.822 02:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 1994087 00:25:35.822 [2024-07-11 02:31:26.095497] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:35.822 02:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 1994087 00:25:35.822 [2024-07-11 02:31:26.134450] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:36.081 02:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:25:36.081 00:25:36.081 real 0m33.758s 00:25:36.081 user 1m1.935s 00:25:36.081 sys 0m6.112s 00:25:36.081 02:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:36.081 02:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:36.081 ************************************ 00:25:36.081 END TEST raid_state_function_test_sb 00:25:36.081 ************************************ 00:25:36.081 02:31:26 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:36.081 02:31:26 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:25:36.081 02:31:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:25:36.081 02:31:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:36.081 02:31:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:36.081 ************************************ 00:25:36.081 START TEST raid_superblock_test 00:25:36.081 ************************************ 00:25:36.081 02:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test 
raid1 4 00:25:36.081 02:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:25:36.081 02:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:25:36.082 02:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:25:36.082 02:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:25:36.082 02:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:25:36.082 02:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:25:36.082 02:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:25:36.082 02:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:25:36.082 02:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:25:36.082 02:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:25:36.082 02:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:25:36.082 02:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:25:36.082 02:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:25:36.082 02:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:25:36.082 02:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:25:36.082 02:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1999343 00:25:36.082 02:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:25:36.082 02:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1999343 /var/tmp/spdk-raid.sock 00:25:36.082 02:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 1999343 ']' 00:25:36.082 02:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:36.082 02:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:36.082 02:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:36.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:36.082 02:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:36.082 02:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.082 [2024-07-11 02:31:26.486400] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
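The "Starting SPDK ... initialization" line above marks the new test's harness coming up: raid_superblock_test launches the stub bdev_svc application that owns the RAID RPC socket, and the "Waiting for process to start up and listen..." message is waitforlisten polling that socket until the app answers. A minimal sketch of that startup, with both paths taken verbatim from this log (the backgrounding and PID capture follow SPDK's common test helper conventions):

    /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    waitforlisten $raid_pid /var/tmp/spdk-raid.sock

Every subsequent rpc.py call in this test targets that same /var/tmp/spdk-raid.sock socket.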
00:25:36.082 [2024-07-11 02:31:26.486462] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1999343 ] 00:25:36.341 [2024-07-11 02:31:26.618274] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.341 [2024-07-11 02:31:26.669401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.341 [2024-07-11 02:31:26.744843] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:36.341 [2024-07-11 02:31:26.744877] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:36.600 02:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:36.600 02:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:25:36.600 02:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:25:36.600 02:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:36.600 02:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:25:36.600 02:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:25:36.600 02:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:36.600 02:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:36.601 02:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:36.601 02:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:36.601 02:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:25:36.860 malloc1 00:25:36.860 02:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:36.860 [2024-07-11 02:31:27.274366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:36.860 [2024-07-11 02:31:27.274412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:36.860 [2024-07-11 02:31:27.274433] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x254ade0 00:25:36.860 [2024-07-11 02:31:27.274446] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:36.860 [2024-07-11 02:31:27.276116] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:36.860 [2024-07-11 02:31:27.276146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:36.860 pt1 00:25:37.134 02:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:37.134 02:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:37.134 02:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:25:37.134 02:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:25:37.134 02:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # 
local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:37.134 02:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:37.134 02:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:37.134 02:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:37.134 02:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:25:37.134 malloc2 00:25:37.134 02:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:37.468 [2024-07-11 02:31:27.772398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:37.468 [2024-07-11 02:31:27.772440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:37.468 [2024-07-11 02:31:27.772457] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2542380 00:25:37.468 [2024-07-11 02:31:27.772470] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:37.468 [2024-07-11 02:31:27.773846] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:37.468 [2024-07-11 02:31:27.773874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:37.468 pt2 00:25:37.468 02:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:37.468 02:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:37.468 02:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:25:37.468 02:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:25:37.468 02:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:25:37.468 02:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:37.468 02:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:37.468 02:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:37.468 02:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:25:37.727 malloc3 00:25:37.727 02:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:37.987 [2024-07-11 02:31:28.283414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:37.987 [2024-07-11 02:31:28.283459] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:37.987 [2024-07-11 02:31:28.283476] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2544fb0 00:25:37.987 [2024-07-11 02:31:28.283489] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:37.987 [2024-07-11 02:31:28.285023] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:37.987 [2024-07-11 02:31:28.285052] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:37.987 pt3 00:25:37.987 02:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:37.987 02:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:37.987 02:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:25:37.987 02:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:25:37.987 02:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:25:37.987 02:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:37.987 02:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:37.987 02:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:37.987 02:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:25:38.246 malloc4 00:25:38.246 02:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:38.505 [2024-07-11 02:31:28.782445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:38.505 [2024-07-11 02:31:28.782491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:38.505 [2024-07-11 02:31:28.782512] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2546760 00:25:38.505 [2024-07-11 02:31:28.782525] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:38.505 [2024-07-11 02:31:28.784070] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:38.505 [2024-07-11 02:31:28.784115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:38.505 pt4 00:25:38.505 02:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:38.505 02:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:38.505 02:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:25:38.764 [2024-07-11 02:31:29.031129] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:38.764 [2024-07-11 02:31:29.032457] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:38.764 [2024-07-11 02:31:29.032510] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:38.764 [2024-07-11 02:31:29.032553] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:38.764 [2024-07-11 02:31:29.032724] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2545cc0 00:25:38.764 [2024-07-11 02:31:29.032735] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:38.764 [2024-07-11 02:31:29.032950] bdev_raid.c: 251:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x23afe10 00:25:38.764 [2024-07-11 02:31:29.033102] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2545cc0 00:25:38.764 [2024-07-11 02:31:29.033113] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2545cc0 00:25:38.764 [2024-07-11 02:31:29.033210] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:38.764 02:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:25:38.764 02:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:38.764 02:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:38.764 02:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:38.764 02:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:38.764 02:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:38.764 02:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:38.764 02:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:38.764 02:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:38.764 02:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:38.764 02:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.764 02:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:39.024 02:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:39.024 "name": "raid_bdev1", 00:25:39.024 "uuid": "013552e4-d503-4176-9387-5317e0e79d48", 00:25:39.024 "strip_size_kb": 0, 00:25:39.024 "state": "online", 00:25:39.024 "raid_level": "raid1", 00:25:39.024 "superblock": true, 00:25:39.024 "num_base_bdevs": 4, 00:25:39.024 "num_base_bdevs_discovered": 4, 00:25:39.024 "num_base_bdevs_operational": 4, 00:25:39.024 "base_bdevs_list": [ 00:25:39.024 { 00:25:39.024 "name": "pt1", 00:25:39.024 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:39.024 "is_configured": true, 00:25:39.024 "data_offset": 2048, 00:25:39.024 "data_size": 63488 00:25:39.024 }, 00:25:39.024 { 00:25:39.024 "name": "pt2", 00:25:39.024 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:39.024 "is_configured": true, 00:25:39.024 "data_offset": 2048, 00:25:39.024 "data_size": 63488 00:25:39.024 }, 00:25:39.024 { 00:25:39.024 "name": "pt3", 00:25:39.024 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:39.024 "is_configured": true, 00:25:39.024 "data_offset": 2048, 00:25:39.024 "data_size": 63488 00:25:39.024 }, 00:25:39.024 { 00:25:39.024 "name": "pt4", 00:25:39.024 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:39.024 "is_configured": true, 00:25:39.024 "data_offset": 2048, 00:25:39.024 "data_size": 63488 00:25:39.024 } 00:25:39.024 ] 00:25:39.024 }' 00:25:39.024 02:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:39.024 02:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:39.592 02:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # 
verify_raid_bdev_properties raid_bdev1 00:25:39.592 02:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:25:39.592 02:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:39.592 02:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:39.592 02:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:39.592 02:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:39.592 02:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:39.592 02:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:39.850 [2024-07-11 02:31:30.114282] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:39.850 02:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:39.850 "name": "raid_bdev1", 00:25:39.850 "aliases": [ 00:25:39.850 "013552e4-d503-4176-9387-5317e0e79d48" 00:25:39.850 ], 00:25:39.850 "product_name": "Raid Volume", 00:25:39.850 "block_size": 512, 00:25:39.850 "num_blocks": 63488, 00:25:39.850 "uuid": "013552e4-d503-4176-9387-5317e0e79d48", 00:25:39.850 "assigned_rate_limits": { 00:25:39.850 "rw_ios_per_sec": 0, 00:25:39.850 "rw_mbytes_per_sec": 0, 00:25:39.850 "r_mbytes_per_sec": 0, 00:25:39.850 "w_mbytes_per_sec": 0 00:25:39.850 }, 00:25:39.850 "claimed": false, 00:25:39.850 "zoned": false, 00:25:39.850 "supported_io_types": { 00:25:39.850 "read": true, 00:25:39.850 "write": true, 00:25:39.850 "unmap": false, 00:25:39.850 "flush": false, 00:25:39.850 "reset": true, 00:25:39.850 "nvme_admin": false, 00:25:39.850 "nvme_io": false, 00:25:39.850 "nvme_io_md": false, 00:25:39.850 "write_zeroes": true, 00:25:39.851 "zcopy": false, 00:25:39.851 "get_zone_info": false, 00:25:39.851 "zone_management": false, 00:25:39.851 "zone_append": false, 00:25:39.851 "compare": false, 00:25:39.851 "compare_and_write": false, 00:25:39.851 "abort": false, 00:25:39.851 "seek_hole": false, 00:25:39.851 "seek_data": false, 00:25:39.851 "copy": false, 00:25:39.851 "nvme_iov_md": false 00:25:39.851 }, 00:25:39.851 "memory_domains": [ 00:25:39.851 { 00:25:39.851 "dma_device_id": "system", 00:25:39.851 "dma_device_type": 1 00:25:39.851 }, 00:25:39.851 { 00:25:39.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:39.851 "dma_device_type": 2 00:25:39.851 }, 00:25:39.851 { 00:25:39.851 "dma_device_id": "system", 00:25:39.851 "dma_device_type": 1 00:25:39.851 }, 00:25:39.851 { 00:25:39.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:39.851 "dma_device_type": 2 00:25:39.851 }, 00:25:39.851 { 00:25:39.851 "dma_device_id": "system", 00:25:39.851 "dma_device_type": 1 00:25:39.851 }, 00:25:39.851 { 00:25:39.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:39.851 "dma_device_type": 2 00:25:39.851 }, 00:25:39.851 { 00:25:39.851 "dma_device_id": "system", 00:25:39.851 "dma_device_type": 1 00:25:39.851 }, 00:25:39.851 { 00:25:39.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:39.851 "dma_device_type": 2 00:25:39.851 } 00:25:39.851 ], 00:25:39.851 "driver_specific": { 00:25:39.851 "raid": { 00:25:39.851 "uuid": "013552e4-d503-4176-9387-5317e0e79d48", 00:25:39.851 "strip_size_kb": 0, 00:25:39.851 "state": "online", 00:25:39.851 "raid_level": "raid1", 00:25:39.851 "superblock": true, 00:25:39.851 
"num_base_bdevs": 4, 00:25:39.851 "num_base_bdevs_discovered": 4, 00:25:39.851 "num_base_bdevs_operational": 4, 00:25:39.851 "base_bdevs_list": [ 00:25:39.851 { 00:25:39.851 "name": "pt1", 00:25:39.851 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:39.851 "is_configured": true, 00:25:39.851 "data_offset": 2048, 00:25:39.851 "data_size": 63488 00:25:39.851 }, 00:25:39.851 { 00:25:39.851 "name": "pt2", 00:25:39.851 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:39.851 "is_configured": true, 00:25:39.851 "data_offset": 2048, 00:25:39.851 "data_size": 63488 00:25:39.851 }, 00:25:39.851 { 00:25:39.851 "name": "pt3", 00:25:39.851 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:39.851 "is_configured": true, 00:25:39.851 "data_offset": 2048, 00:25:39.851 "data_size": 63488 00:25:39.851 }, 00:25:39.851 { 00:25:39.851 "name": "pt4", 00:25:39.851 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:39.851 "is_configured": true, 00:25:39.851 "data_offset": 2048, 00:25:39.851 "data_size": 63488 00:25:39.851 } 00:25:39.851 ] 00:25:39.851 } 00:25:39.851 } 00:25:39.851 }' 00:25:39.851 02:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:39.851 02:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:25:39.851 pt2 00:25:39.851 pt3 00:25:39.851 pt4' 00:25:39.851 02:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:39.851 02:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:39.851 02:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:40.109 02:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:40.109 "name": "pt1", 00:25:40.109 "aliases": [ 00:25:40.109 "00000000-0000-0000-0000-000000000001" 00:25:40.109 ], 00:25:40.109 "product_name": "passthru", 00:25:40.109 "block_size": 512, 00:25:40.109 "num_blocks": 65536, 00:25:40.109 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:40.109 "assigned_rate_limits": { 00:25:40.109 "rw_ios_per_sec": 0, 00:25:40.109 "rw_mbytes_per_sec": 0, 00:25:40.109 "r_mbytes_per_sec": 0, 00:25:40.109 "w_mbytes_per_sec": 0 00:25:40.109 }, 00:25:40.109 "claimed": true, 00:25:40.109 "claim_type": "exclusive_write", 00:25:40.109 "zoned": false, 00:25:40.109 "supported_io_types": { 00:25:40.109 "read": true, 00:25:40.109 "write": true, 00:25:40.109 "unmap": true, 00:25:40.109 "flush": true, 00:25:40.109 "reset": true, 00:25:40.109 "nvme_admin": false, 00:25:40.109 "nvme_io": false, 00:25:40.109 "nvme_io_md": false, 00:25:40.109 "write_zeroes": true, 00:25:40.109 "zcopy": true, 00:25:40.109 "get_zone_info": false, 00:25:40.109 "zone_management": false, 00:25:40.109 "zone_append": false, 00:25:40.109 "compare": false, 00:25:40.109 "compare_and_write": false, 00:25:40.109 "abort": true, 00:25:40.109 "seek_hole": false, 00:25:40.109 "seek_data": false, 00:25:40.109 "copy": true, 00:25:40.109 "nvme_iov_md": false 00:25:40.109 }, 00:25:40.109 "memory_domains": [ 00:25:40.109 { 00:25:40.109 "dma_device_id": "system", 00:25:40.109 "dma_device_type": 1 00:25:40.109 }, 00:25:40.109 { 00:25:40.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:40.109 "dma_device_type": 2 00:25:40.109 } 00:25:40.109 ], 00:25:40.109 "driver_specific": { 00:25:40.109 "passthru": { 00:25:40.109 
"name": "pt1", 00:25:40.109 "base_bdev_name": "malloc1" 00:25:40.109 } 00:25:40.109 } 00:25:40.109 }' 00:25:40.109 02:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:40.109 02:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:40.109 02:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:40.109 02:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:40.368 02:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:40.368 02:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:40.368 02:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:40.368 02:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:40.368 02:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:40.368 02:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:40.368 02:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:40.368 02:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:40.368 02:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:40.368 02:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:40.368 02:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:40.626 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:40.626 "name": "pt2", 00:25:40.626 "aliases": [ 00:25:40.626 "00000000-0000-0000-0000-000000000002" 00:25:40.626 ], 00:25:40.626 "product_name": "passthru", 00:25:40.626 "block_size": 512, 00:25:40.626 "num_blocks": 65536, 00:25:40.626 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:40.626 "assigned_rate_limits": { 00:25:40.626 "rw_ios_per_sec": 0, 00:25:40.626 "rw_mbytes_per_sec": 0, 00:25:40.626 "r_mbytes_per_sec": 0, 00:25:40.626 "w_mbytes_per_sec": 0 00:25:40.626 }, 00:25:40.626 "claimed": true, 00:25:40.626 "claim_type": "exclusive_write", 00:25:40.626 "zoned": false, 00:25:40.626 "supported_io_types": { 00:25:40.626 "read": true, 00:25:40.626 "write": true, 00:25:40.626 "unmap": true, 00:25:40.626 "flush": true, 00:25:40.626 "reset": true, 00:25:40.626 "nvme_admin": false, 00:25:40.626 "nvme_io": false, 00:25:40.626 "nvme_io_md": false, 00:25:40.626 "write_zeroes": true, 00:25:40.627 "zcopy": true, 00:25:40.627 "get_zone_info": false, 00:25:40.627 "zone_management": false, 00:25:40.627 "zone_append": false, 00:25:40.627 "compare": false, 00:25:40.627 "compare_and_write": false, 00:25:40.627 "abort": true, 00:25:40.627 "seek_hole": false, 00:25:40.627 "seek_data": false, 00:25:40.627 "copy": true, 00:25:40.627 "nvme_iov_md": false 00:25:40.627 }, 00:25:40.627 "memory_domains": [ 00:25:40.627 { 00:25:40.627 "dma_device_id": "system", 00:25:40.627 "dma_device_type": 1 00:25:40.627 }, 00:25:40.627 { 00:25:40.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:40.627 "dma_device_type": 2 00:25:40.627 } 00:25:40.627 ], 00:25:40.627 "driver_specific": { 00:25:40.627 "passthru": { 00:25:40.627 "name": "pt2", 00:25:40.627 "base_bdev_name": "malloc2" 00:25:40.627 } 00:25:40.627 } 00:25:40.627 }' 00:25:40.627 02:31:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:40.627 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:40.885 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:40.885 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:40.885 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:40.885 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:40.885 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:40.885 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:40.885 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:40.885 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:41.144 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:41.144 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:41.144 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:41.144 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:25:41.144 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:41.403 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:41.403 "name": "pt3", 00:25:41.403 "aliases": [ 00:25:41.403 "00000000-0000-0000-0000-000000000003" 00:25:41.403 ], 00:25:41.403 "product_name": "passthru", 00:25:41.403 "block_size": 512, 00:25:41.403 "num_blocks": 65536, 00:25:41.403 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:41.403 "assigned_rate_limits": { 00:25:41.403 "rw_ios_per_sec": 0, 00:25:41.403 "rw_mbytes_per_sec": 0, 00:25:41.403 "r_mbytes_per_sec": 0, 00:25:41.403 "w_mbytes_per_sec": 0 00:25:41.403 }, 00:25:41.403 "claimed": true, 00:25:41.403 "claim_type": "exclusive_write", 00:25:41.403 "zoned": false, 00:25:41.403 "supported_io_types": { 00:25:41.403 "read": true, 00:25:41.403 "write": true, 00:25:41.403 "unmap": true, 00:25:41.403 "flush": true, 00:25:41.403 "reset": true, 00:25:41.403 "nvme_admin": false, 00:25:41.403 "nvme_io": false, 00:25:41.403 "nvme_io_md": false, 00:25:41.403 "write_zeroes": true, 00:25:41.403 "zcopy": true, 00:25:41.403 "get_zone_info": false, 00:25:41.403 "zone_management": false, 00:25:41.403 "zone_append": false, 00:25:41.403 "compare": false, 00:25:41.403 "compare_and_write": false, 00:25:41.403 "abort": true, 00:25:41.403 "seek_hole": false, 00:25:41.403 "seek_data": false, 00:25:41.403 "copy": true, 00:25:41.403 "nvme_iov_md": false 00:25:41.403 }, 00:25:41.403 "memory_domains": [ 00:25:41.403 { 00:25:41.403 "dma_device_id": "system", 00:25:41.403 "dma_device_type": 1 00:25:41.403 }, 00:25:41.403 { 00:25:41.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:41.403 "dma_device_type": 2 00:25:41.403 } 00:25:41.403 ], 00:25:41.403 "driver_specific": { 00:25:41.403 "passthru": { 00:25:41.403 "name": "pt3", 00:25:41.403 "base_bdev_name": "malloc3" 00:25:41.403 } 00:25:41.403 } 00:25:41.403 }' 00:25:41.403 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:41.403 02:31:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:41.403 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:41.403 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:41.403 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:41.403 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:41.403 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:41.662 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:41.662 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:41.662 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:41.662 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:41.662 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:41.662 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:41.662 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:25:41.662 02:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:41.920 02:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:41.920 "name": "pt4", 00:25:41.920 "aliases": [ 00:25:41.920 "00000000-0000-0000-0000-000000000004" 00:25:41.920 ], 00:25:41.920 "product_name": "passthru", 00:25:41.920 "block_size": 512, 00:25:41.921 "num_blocks": 65536, 00:25:41.921 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:41.921 "assigned_rate_limits": { 00:25:41.921 "rw_ios_per_sec": 0, 00:25:41.921 "rw_mbytes_per_sec": 0, 00:25:41.921 "r_mbytes_per_sec": 0, 00:25:41.921 "w_mbytes_per_sec": 0 00:25:41.921 }, 00:25:41.921 "claimed": true, 00:25:41.921 "claim_type": "exclusive_write", 00:25:41.921 "zoned": false, 00:25:41.921 "supported_io_types": { 00:25:41.921 "read": true, 00:25:41.921 "write": true, 00:25:41.921 "unmap": true, 00:25:41.921 "flush": true, 00:25:41.921 "reset": true, 00:25:41.921 "nvme_admin": false, 00:25:41.921 "nvme_io": false, 00:25:41.921 "nvme_io_md": false, 00:25:41.921 "write_zeroes": true, 00:25:41.921 "zcopy": true, 00:25:41.921 "get_zone_info": false, 00:25:41.921 "zone_management": false, 00:25:41.921 "zone_append": false, 00:25:41.921 "compare": false, 00:25:41.921 "compare_and_write": false, 00:25:41.921 "abort": true, 00:25:41.921 "seek_hole": false, 00:25:41.921 "seek_data": false, 00:25:41.921 "copy": true, 00:25:41.921 "nvme_iov_md": false 00:25:41.921 }, 00:25:41.921 "memory_domains": [ 00:25:41.921 { 00:25:41.921 "dma_device_id": "system", 00:25:41.921 "dma_device_type": 1 00:25:41.921 }, 00:25:41.921 { 00:25:41.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:41.921 "dma_device_type": 2 00:25:41.921 } 00:25:41.921 ], 00:25:41.921 "driver_specific": { 00:25:41.921 "passthru": { 00:25:41.921 "name": "pt4", 00:25:41.921 "base_bdev_name": "malloc4" 00:25:41.921 } 00:25:41.921 } 00:25:41.921 }' 00:25:41.921 02:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:41.921 02:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:42.180 02:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 
512 ]] 00:25:42.180 02:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:42.180 02:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:42.180 02:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:42.180 02:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:42.180 02:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:42.180 02:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:42.180 02:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:42.180 02:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:42.438 02:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:42.438 02:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:42.438 02:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:25:42.697 [2024-07-11 02:31:32.921731] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:42.697 02:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=013552e4-d503-4176-9387-5317e0e79d48 00:25:42.697 02:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 013552e4-d503-4176-9387-5317e0e79d48 ']' 00:25:42.697 02:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:43.263 [2024-07-11 02:31:33.430773] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:43.263 [2024-07-11 02:31:33.430795] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:43.263 [2024-07-11 02:31:33.430845] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:43.263 [2024-07-11 02:31:33.430921] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:43.263 [2024-07-11 02:31:33.430933] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2545cc0 name raid_bdev1, state offline 00:25:43.263 02:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.263 02:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:25:43.522 02:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:25:43.522 02:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:25:43.522 02:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:43.522 02:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:43.780 02:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:43.780 02:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
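[editor's note] The per-base-bdev loop above (bdev_raid.sh@203-208) repeats the same four assertions for pt1 through pt4 before the array is torn down. A minimal standalone sketch of that check — assuming the array is still online, and reusing the RPC socket and jq filters seen in this log:

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py"; sock=/var/tmp/spdk-raid.sock

  # Names of all configured base bdevs, as extracted at bdev_raid.sh@201.
  names=$($rpc -s $sock bdev_get_bdevs -b raid_bdev1 \
          | jq -r '.[].driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')

  for name in $names; do
      info=$($rpc -s $sock bdev_get_bdevs -b "$name" | jq '.[]')
      # Each passthru base reports a 512-byte block and no metadata/DIF,
      # matching the dumps above.
      [[ $(jq .block_size    <<< "$info") == 512  ]]
      [[ $(jq .md_size       <<< "$info") == null ]]
      [[ $(jq .md_interleave <<< "$info") == null ]]
      [[ $(jq .dif_type      <<< "$info") == null ]]
  done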
bdev_passthru_delete pt2 00:25:43.780 02:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:43.780 02:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:44.039 02:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:44.039 02:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:44.297 02:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:25:44.297 02:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:44.556 02:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:25:44.556 02:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:44.556 02:31:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:25:44.557 02:31:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:44.557 02:31:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:44.557 02:31:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:44.557 02:31:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:44.557 02:31:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:44.557 02:31:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:44.557 02:31:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:44.557 02:31:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:44.557 02:31:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:25:44.557 02:31:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:44.816 [2024-07-11 02:31:35.171284] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:44.816 [2024-07-11 02:31:35.172584] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:44.816 [2024-07-11 02:31:35.172627] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:25:44.816 [2024-07-11 02:31:35.172660] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:25:44.816 [2024-07-11 02:31:35.172702] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:44.816 [2024-07-11 02:31:35.172739] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:44.816 [2024-07-11 02:31:35.172770] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:25:44.816 [2024-07-11 02:31:35.172793] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:25:44.816 [2024-07-11 02:31:35.172811] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:44.816 [2024-07-11 02:31:35.172822] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x239a550 name raid_bdev1, state configuring 00:25:44.816 request: 00:25:44.816 { 00:25:44.816 "name": "raid_bdev1", 00:25:44.816 "raid_level": "raid1", 00:25:44.816 "base_bdevs": [ 00:25:44.816 "malloc1", 00:25:44.816 "malloc2", 00:25:44.816 "malloc3", 00:25:44.816 "malloc4" 00:25:44.816 ], 00:25:44.816 "superblock": false, 00:25:44.816 "method": "bdev_raid_create", 00:25:44.816 "req_id": 1 00:25:44.816 } 00:25:44.816 Got JSON-RPC error response 00:25:44.816 response: 00:25:44.816 { 00:25:44.816 "code": -17, 00:25:44.816 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:44.816 } 00:25:44.816 02:31:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:25:44.816 02:31:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:44.816 02:31:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:44.816 02:31:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:44.816 02:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.816 02:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:25:45.075 02:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:25:45.075 02:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:25:45.075 02:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:45.334 [2024-07-11 02:31:35.664534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:45.334 [2024-07-11 02:31:35.664584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:45.334 [2024-07-11 02:31:35.664603] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25490d0 00:25:45.334 [2024-07-11 02:31:35.664616] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:45.334 [2024-07-11 02:31:35.666191] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:45.334 [2024-07-11 02:31:35.666226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:45.334 [2024-07-11 02:31:35.666289] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid 
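[editor's note] The rejected bdev_raid_create above is the point of this pass: after raid_bdev1 was deleted, its superblock was left behind on malloc1..malloc4, so building a new raid directly on those bdevs fails with -17 (File exists) — the NOT wrapper from common/autotest_common.sh inverts that status so the test passes. A sketch of the same negative check, assuming the socket path used throughout:

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py"; sock=/var/tmp/spdk-raid.sock

  # Expected to fail with -17: stale raid superblocks remain on malloc1..malloc4.
  if $rpc -s $sock bdev_raid_create -r raid1 \
         -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
      echo 'unexpected: bdev_raid_create should have failed' >&2
  fi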
superblock found on bdev pt1 00:25:45.334 [2024-07-11 02:31:35.666315] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:45.334 pt1 00:25:45.334 02:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:25:45.334 02:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:45.334 02:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:45.334 02:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:45.334 02:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:45.334 02:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:45.334 02:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:45.334 02:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:45.334 02:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:45.334 02:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:45.334 02:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.334 02:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.593 02:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:45.593 "name": "raid_bdev1", 00:25:45.593 "uuid": "013552e4-d503-4176-9387-5317e0e79d48", 00:25:45.593 "strip_size_kb": 0, 00:25:45.593 "state": "configuring", 00:25:45.593 "raid_level": "raid1", 00:25:45.593 "superblock": true, 00:25:45.593 "num_base_bdevs": 4, 00:25:45.593 "num_base_bdevs_discovered": 1, 00:25:45.593 "num_base_bdevs_operational": 4, 00:25:45.593 "base_bdevs_list": [ 00:25:45.593 { 00:25:45.593 "name": "pt1", 00:25:45.593 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:45.593 "is_configured": true, 00:25:45.593 "data_offset": 2048, 00:25:45.593 "data_size": 63488 00:25:45.593 }, 00:25:45.593 { 00:25:45.593 "name": null, 00:25:45.593 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:45.593 "is_configured": false, 00:25:45.593 "data_offset": 2048, 00:25:45.593 "data_size": 63488 00:25:45.593 }, 00:25:45.593 { 00:25:45.593 "name": null, 00:25:45.593 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:45.593 "is_configured": false, 00:25:45.593 "data_offset": 2048, 00:25:45.593 "data_size": 63488 00:25:45.593 }, 00:25:45.593 { 00:25:45.593 "name": null, 00:25:45.593 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:45.593 "is_configured": false, 00:25:45.593 "data_offset": 2048, 00:25:45.593 "data_size": 63488 00:25:45.593 } 00:25:45.593 ] 00:25:45.593 }' 00:25:45.593 02:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:45.593 02:31:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.160 02:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:25:46.160 02:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:25:46.419 [2024-07-11 02:31:36.699278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:46.419 [2024-07-11 02:31:36.699327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:46.419 [2024-07-11 02:31:36.699347] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x254b120 00:25:46.419 [2024-07-11 02:31:36.699359] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:46.419 [2024-07-11 02:31:36.699683] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:46.419 [2024-07-11 02:31:36.699702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:46.419 [2024-07-11 02:31:36.699773] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:46.419 [2024-07-11 02:31:36.699793] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:46.419 pt2 00:25:46.419 02:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:46.678 [2024-07-11 02:31:36.887794] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:25:46.678 02:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:25:46.678 02:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:46.678 02:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:46.678 02:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:46.678 02:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:46.678 02:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:46.678 02:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:46.678 02:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:46.678 02:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:46.678 02:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:46.678 02:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.678 02:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.937 02:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:46.937 "name": "raid_bdev1", 00:25:46.937 "uuid": "013552e4-d503-4176-9387-5317e0e79d48", 00:25:46.937 "strip_size_kb": 0, 00:25:46.937 "state": "configuring", 00:25:46.937 "raid_level": "raid1", 00:25:46.937 "superblock": true, 00:25:46.937 "num_base_bdevs": 4, 00:25:46.937 "num_base_bdevs_discovered": 1, 00:25:46.937 "num_base_bdevs_operational": 4, 00:25:46.937 "base_bdevs_list": [ 00:25:46.937 { 00:25:46.937 "name": "pt1", 00:25:46.937 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:46.938 "is_configured": true, 00:25:46.938 "data_offset": 2048, 00:25:46.938 "data_size": 63488 00:25:46.938 }, 00:25:46.938 { 00:25:46.938 "name": null, 00:25:46.938 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:25:46.938 "is_configured": false, 00:25:46.938 "data_offset": 2048, 00:25:46.938 "data_size": 63488 00:25:46.938 }, 00:25:46.938 { 00:25:46.938 "name": null, 00:25:46.938 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:46.938 "is_configured": false, 00:25:46.938 "data_offset": 2048, 00:25:46.938 "data_size": 63488 00:25:46.938 }, 00:25:46.938 { 00:25:46.938 "name": null, 00:25:46.938 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:46.938 "is_configured": false, 00:25:46.938 "data_offset": 2048, 00:25:46.938 "data_size": 63488 00:25:46.938 } 00:25:46.938 ] 00:25:46.938 }' 00:25:46.938 02:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:46.938 02:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:47.511 02:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:25:47.511 02:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:47.512 02:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:47.771 [2024-07-11 02:31:38.030834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:47.771 [2024-07-11 02:31:38.030881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:47.771 [2024-07-11 02:31:38.030900] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x254bf10 00:25:47.771 [2024-07-11 02:31:38.030912] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:47.771 [2024-07-11 02:31:38.031244] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:47.771 [2024-07-11 02:31:38.031263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:47.771 [2024-07-11 02:31:38.031327] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:47.771 [2024-07-11 02:31:38.031347] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:47.771 pt2 00:25:47.771 02:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:47.771 02:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:47.771 02:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:48.030 [2024-07-11 02:31:38.275476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:48.030 [2024-07-11 02:31:38.275514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:48.030 [2024-07-11 02:31:38.275530] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x254a680 00:25:48.030 [2024-07-11 02:31:38.275542] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:48.030 [2024-07-11 02:31:38.275852] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:48.030 [2024-07-11 02:31:38.275871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:48.030 [2024-07-11 02:31:38.275927] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on 
bdev pt3 00:25:48.030 [2024-07-11 02:31:38.275945] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:48.030 pt3 00:25:48.030 02:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:48.030 02:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:48.030 02:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:48.289 [2024-07-11 02:31:38.520124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:48.289 [2024-07-11 02:31:38.520157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:48.289 [2024-07-11 02:31:38.520172] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2549a40 00:25:48.289 [2024-07-11 02:31:38.520184] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:48.289 [2024-07-11 02:31:38.520478] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:48.289 [2024-07-11 02:31:38.520496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:48.290 [2024-07-11 02:31:38.520547] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:25:48.290 [2024-07-11 02:31:38.520565] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:48.290 [2024-07-11 02:31:38.520682] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2549fa0 00:25:48.290 [2024-07-11 02:31:38.520692] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:48.290 [2024-07-11 02:31:38.520869] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23afee0 00:25:48.290 [2024-07-11 02:31:38.521003] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2549fa0 00:25:48.290 [2024-07-11 02:31:38.521013] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2549fa0 00:25:48.290 [2024-07-11 02:31:38.521110] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:48.290 pt4 00:25:48.290 02:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:48.290 02:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:48.290 02:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:25:48.290 02:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:48.290 02:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:48.290 02:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:48.290 02:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:48.290 02:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:48.290 02:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:48.290 02:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:48.290 02:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
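[editor's note] Here the array finishes reassembling: each recreated passthru bdev is claimed via the raid superblock found on it, and the fourth (pt4) triggers configuration, the io-device registration at 0x2549fa0, and the transition to online. The loop at bdev_raid.sh@477-478 reduces to this sketch, with the UUIDs as logged above:

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py"; sock=/var/tmp/spdk-raid.sock

  # Recreate the remaining base bdevs; examine finds the raid superblock on
  # each one, and raid_bdev1 goes online once all four slots are claimed.
  $rpc -s $sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  $rpc -s $sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
  $rpc -s $sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004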
00:25:48.290 02:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:48.290 02:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.290 02:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.549 02:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:48.549 "name": "raid_bdev1", 00:25:48.549 "uuid": "013552e4-d503-4176-9387-5317e0e79d48", 00:25:48.549 "strip_size_kb": 0, 00:25:48.549 "state": "online", 00:25:48.549 "raid_level": "raid1", 00:25:48.549 "superblock": true, 00:25:48.549 "num_base_bdevs": 4, 00:25:48.549 "num_base_bdevs_discovered": 4, 00:25:48.549 "num_base_bdevs_operational": 4, 00:25:48.549 "base_bdevs_list": [ 00:25:48.549 { 00:25:48.549 "name": "pt1", 00:25:48.549 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:48.549 "is_configured": true, 00:25:48.549 "data_offset": 2048, 00:25:48.549 "data_size": 63488 00:25:48.549 }, 00:25:48.549 { 00:25:48.549 "name": "pt2", 00:25:48.549 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:48.549 "is_configured": true, 00:25:48.549 "data_offset": 2048, 00:25:48.549 "data_size": 63488 00:25:48.549 }, 00:25:48.549 { 00:25:48.549 "name": "pt3", 00:25:48.549 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:48.549 "is_configured": true, 00:25:48.549 "data_offset": 2048, 00:25:48.549 "data_size": 63488 00:25:48.549 }, 00:25:48.549 { 00:25:48.549 "name": "pt4", 00:25:48.549 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:48.549 "is_configured": true, 00:25:48.549 "data_offset": 2048, 00:25:48.549 "data_size": 63488 00:25:48.549 } 00:25:48.549 ] 00:25:48.549 }' 00:25:48.549 02:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:48.549 02:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.488 02:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:25:49.488 02:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:25:49.488 02:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:49.488 02:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:49.488 02:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:49.488 02:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:49.488 02:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:49.488 02:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:49.747 [2024-07-11 02:31:40.044493] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:49.747 02:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:49.747 "name": "raid_bdev1", 00:25:49.747 "aliases": [ 00:25:49.747 "013552e4-d503-4176-9387-5317e0e79d48" 00:25:49.747 ], 00:25:49.747 "product_name": "Raid Volume", 00:25:49.747 "block_size": 512, 00:25:49.747 "num_blocks": 63488, 00:25:49.747 "uuid": "013552e4-d503-4176-9387-5317e0e79d48", 00:25:49.747 "assigned_rate_limits": { 00:25:49.747 "rw_ios_per_sec": 0, 
00:25:49.747 "rw_mbytes_per_sec": 0, 00:25:49.747 "r_mbytes_per_sec": 0, 00:25:49.747 "w_mbytes_per_sec": 0 00:25:49.747 }, 00:25:49.747 "claimed": false, 00:25:49.747 "zoned": false, 00:25:49.747 "supported_io_types": { 00:25:49.747 "read": true, 00:25:49.747 "write": true, 00:25:49.747 "unmap": false, 00:25:49.747 "flush": false, 00:25:49.747 "reset": true, 00:25:49.747 "nvme_admin": false, 00:25:49.747 "nvme_io": false, 00:25:49.747 "nvme_io_md": false, 00:25:49.747 "write_zeroes": true, 00:25:49.747 "zcopy": false, 00:25:49.747 "get_zone_info": false, 00:25:49.747 "zone_management": false, 00:25:49.747 "zone_append": false, 00:25:49.747 "compare": false, 00:25:49.747 "compare_and_write": false, 00:25:49.747 "abort": false, 00:25:49.747 "seek_hole": false, 00:25:49.747 "seek_data": false, 00:25:49.747 "copy": false, 00:25:49.747 "nvme_iov_md": false 00:25:49.747 }, 00:25:49.747 "memory_domains": [ 00:25:49.747 { 00:25:49.747 "dma_device_id": "system", 00:25:49.747 "dma_device_type": 1 00:25:49.747 }, 00:25:49.747 { 00:25:49.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.747 "dma_device_type": 2 00:25:49.747 }, 00:25:49.747 { 00:25:49.747 "dma_device_id": "system", 00:25:49.747 "dma_device_type": 1 00:25:49.747 }, 00:25:49.747 { 00:25:49.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.747 "dma_device_type": 2 00:25:49.747 }, 00:25:49.747 { 00:25:49.747 "dma_device_id": "system", 00:25:49.747 "dma_device_type": 1 00:25:49.747 }, 00:25:49.747 { 00:25:49.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.747 "dma_device_type": 2 00:25:49.747 }, 00:25:49.747 { 00:25:49.747 "dma_device_id": "system", 00:25:49.747 "dma_device_type": 1 00:25:49.747 }, 00:25:49.747 { 00:25:49.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.747 "dma_device_type": 2 00:25:49.747 } 00:25:49.747 ], 00:25:49.747 "driver_specific": { 00:25:49.747 "raid": { 00:25:49.747 "uuid": "013552e4-d503-4176-9387-5317e0e79d48", 00:25:49.747 "strip_size_kb": 0, 00:25:49.747 "state": "online", 00:25:49.747 "raid_level": "raid1", 00:25:49.747 "superblock": true, 00:25:49.747 "num_base_bdevs": 4, 00:25:49.747 "num_base_bdevs_discovered": 4, 00:25:49.747 "num_base_bdevs_operational": 4, 00:25:49.747 "base_bdevs_list": [ 00:25:49.747 { 00:25:49.747 "name": "pt1", 00:25:49.747 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:49.747 "is_configured": true, 00:25:49.747 "data_offset": 2048, 00:25:49.747 "data_size": 63488 00:25:49.747 }, 00:25:49.747 { 00:25:49.747 "name": "pt2", 00:25:49.747 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:49.747 "is_configured": true, 00:25:49.747 "data_offset": 2048, 00:25:49.747 "data_size": 63488 00:25:49.747 }, 00:25:49.747 { 00:25:49.747 "name": "pt3", 00:25:49.747 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:49.747 "is_configured": true, 00:25:49.747 "data_offset": 2048, 00:25:49.747 "data_size": 63488 00:25:49.747 }, 00:25:49.747 { 00:25:49.747 "name": "pt4", 00:25:49.747 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:49.747 "is_configured": true, 00:25:49.747 "data_offset": 2048, 00:25:49.748 "data_size": 63488 00:25:49.748 } 00:25:49.748 ] 00:25:49.748 } 00:25:49.748 } 00:25:49.748 }' 00:25:49.748 02:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:49.748 02:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:25:49.748 pt2 00:25:49.748 pt3 00:25:49.748 pt4' 00:25:49.748 02:31:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:49.748 02:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:49.748 02:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:50.007 02:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:50.007 "name": "pt1", 00:25:50.007 "aliases": [ 00:25:50.007 "00000000-0000-0000-0000-000000000001" 00:25:50.007 ], 00:25:50.007 "product_name": "passthru", 00:25:50.007 "block_size": 512, 00:25:50.007 "num_blocks": 65536, 00:25:50.007 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:50.007 "assigned_rate_limits": { 00:25:50.007 "rw_ios_per_sec": 0, 00:25:50.007 "rw_mbytes_per_sec": 0, 00:25:50.007 "r_mbytes_per_sec": 0, 00:25:50.007 "w_mbytes_per_sec": 0 00:25:50.007 }, 00:25:50.007 "claimed": true, 00:25:50.007 "claim_type": "exclusive_write", 00:25:50.007 "zoned": false, 00:25:50.007 "supported_io_types": { 00:25:50.007 "read": true, 00:25:50.007 "write": true, 00:25:50.007 "unmap": true, 00:25:50.007 "flush": true, 00:25:50.007 "reset": true, 00:25:50.007 "nvme_admin": false, 00:25:50.007 "nvme_io": false, 00:25:50.007 "nvme_io_md": false, 00:25:50.007 "write_zeroes": true, 00:25:50.007 "zcopy": true, 00:25:50.007 "get_zone_info": false, 00:25:50.007 "zone_management": false, 00:25:50.007 "zone_append": false, 00:25:50.007 "compare": false, 00:25:50.007 "compare_and_write": false, 00:25:50.007 "abort": true, 00:25:50.007 "seek_hole": false, 00:25:50.007 "seek_data": false, 00:25:50.007 "copy": true, 00:25:50.007 "nvme_iov_md": false 00:25:50.007 }, 00:25:50.007 "memory_domains": [ 00:25:50.007 { 00:25:50.007 "dma_device_id": "system", 00:25:50.007 "dma_device_type": 1 00:25:50.007 }, 00:25:50.007 { 00:25:50.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:50.007 "dma_device_type": 2 00:25:50.007 } 00:25:50.007 ], 00:25:50.007 "driver_specific": { 00:25:50.007 "passthru": { 00:25:50.007 "name": "pt1", 00:25:50.007 "base_bdev_name": "malloc1" 00:25:50.007 } 00:25:50.007 } 00:25:50.007 }' 00:25:50.007 02:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:50.007 02:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:50.268 02:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:50.268 02:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:50.268 02:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:50.268 02:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:50.268 02:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:50.268 02:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:50.527 02:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:50.527 02:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:50.527 02:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:50.527 02:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:50.527 02:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:50.527 02:31:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:50.527 02:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:50.786 02:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:50.786 "name": "pt2", 00:25:50.786 "aliases": [ 00:25:50.786 "00000000-0000-0000-0000-000000000002" 00:25:50.786 ], 00:25:50.786 "product_name": "passthru", 00:25:50.786 "block_size": 512, 00:25:50.786 "num_blocks": 65536, 00:25:50.786 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:50.786 "assigned_rate_limits": { 00:25:50.786 "rw_ios_per_sec": 0, 00:25:50.787 "rw_mbytes_per_sec": 0, 00:25:50.787 "r_mbytes_per_sec": 0, 00:25:50.787 "w_mbytes_per_sec": 0 00:25:50.787 }, 00:25:50.787 "claimed": true, 00:25:50.787 "claim_type": "exclusive_write", 00:25:50.787 "zoned": false, 00:25:50.787 "supported_io_types": { 00:25:50.787 "read": true, 00:25:50.787 "write": true, 00:25:50.787 "unmap": true, 00:25:50.787 "flush": true, 00:25:50.787 "reset": true, 00:25:50.787 "nvme_admin": false, 00:25:50.787 "nvme_io": false, 00:25:50.787 "nvme_io_md": false, 00:25:50.787 "write_zeroes": true, 00:25:50.787 "zcopy": true, 00:25:50.787 "get_zone_info": false, 00:25:50.787 "zone_management": false, 00:25:50.787 "zone_append": false, 00:25:50.787 "compare": false, 00:25:50.787 "compare_and_write": false, 00:25:50.787 "abort": true, 00:25:50.787 "seek_hole": false, 00:25:50.787 "seek_data": false, 00:25:50.787 "copy": true, 00:25:50.787 "nvme_iov_md": false 00:25:50.787 }, 00:25:50.787 "memory_domains": [ 00:25:50.787 { 00:25:50.787 "dma_device_id": "system", 00:25:50.787 "dma_device_type": 1 00:25:50.787 }, 00:25:50.787 { 00:25:50.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:50.787 "dma_device_type": 2 00:25:50.787 } 00:25:50.787 ], 00:25:50.787 "driver_specific": { 00:25:50.787 "passthru": { 00:25:50.787 "name": "pt2", 00:25:50.787 "base_bdev_name": "malloc2" 00:25:50.787 } 00:25:50.787 } 00:25:50.787 }' 00:25:50.787 02:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:50.787 02:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:50.787 02:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:50.787 02:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:51.046 02:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:51.046 02:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:51.046 02:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:51.046 02:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:51.046 02:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:51.046 02:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:51.046 02:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:51.305 02:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:51.305 02:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:51.305 02:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:25:51.305 02:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:51.565 02:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:51.565 "name": "pt3", 00:25:51.565 "aliases": [ 00:25:51.565 "00000000-0000-0000-0000-000000000003" 00:25:51.565 ], 00:25:51.565 "product_name": "passthru", 00:25:51.565 "block_size": 512, 00:25:51.565 "num_blocks": 65536, 00:25:51.565 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:51.565 "assigned_rate_limits": { 00:25:51.565 "rw_ios_per_sec": 0, 00:25:51.565 "rw_mbytes_per_sec": 0, 00:25:51.565 "r_mbytes_per_sec": 0, 00:25:51.565 "w_mbytes_per_sec": 0 00:25:51.565 }, 00:25:51.565 "claimed": true, 00:25:51.565 "claim_type": "exclusive_write", 00:25:51.565 "zoned": false, 00:25:51.565 "supported_io_types": { 00:25:51.565 "read": true, 00:25:51.565 "write": true, 00:25:51.565 "unmap": true, 00:25:51.565 "flush": true, 00:25:51.565 "reset": true, 00:25:51.565 "nvme_admin": false, 00:25:51.565 "nvme_io": false, 00:25:51.565 "nvme_io_md": false, 00:25:51.565 "write_zeroes": true, 00:25:51.565 "zcopy": true, 00:25:51.565 "get_zone_info": false, 00:25:51.565 "zone_management": false, 00:25:51.565 "zone_append": false, 00:25:51.565 "compare": false, 00:25:51.565 "compare_and_write": false, 00:25:51.565 "abort": true, 00:25:51.565 "seek_hole": false, 00:25:51.565 "seek_data": false, 00:25:51.565 "copy": true, 00:25:51.565 "nvme_iov_md": false 00:25:51.565 }, 00:25:51.565 "memory_domains": [ 00:25:51.565 { 00:25:51.565 "dma_device_id": "system", 00:25:51.565 "dma_device_type": 1 00:25:51.565 }, 00:25:51.565 { 00:25:51.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:51.565 "dma_device_type": 2 00:25:51.565 } 00:25:51.565 ], 00:25:51.565 "driver_specific": { 00:25:51.565 "passthru": { 00:25:51.565 "name": "pt3", 00:25:51.565 "base_bdev_name": "malloc3" 00:25:51.565 } 00:25:51.565 } 00:25:51.565 }' 00:25:51.565 02:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:51.565 02:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:51.565 02:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:51.565 02:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:51.565 02:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:51.565 02:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:51.565 02:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:51.565 02:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:51.824 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:51.824 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:51.824 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:51.824 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:51.824 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:51.824 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:25:51.824 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq 
'.[]' 00:25:52.084 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:52.084 "name": "pt4", 00:25:52.084 "aliases": [ 00:25:52.084 "00000000-0000-0000-0000-000000000004" 00:25:52.084 ], 00:25:52.084 "product_name": "passthru", 00:25:52.084 "block_size": 512, 00:25:52.084 "num_blocks": 65536, 00:25:52.084 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:52.084 "assigned_rate_limits": { 00:25:52.084 "rw_ios_per_sec": 0, 00:25:52.084 "rw_mbytes_per_sec": 0, 00:25:52.084 "r_mbytes_per_sec": 0, 00:25:52.084 "w_mbytes_per_sec": 0 00:25:52.084 }, 00:25:52.084 "claimed": true, 00:25:52.084 "claim_type": "exclusive_write", 00:25:52.084 "zoned": false, 00:25:52.084 "supported_io_types": { 00:25:52.084 "read": true, 00:25:52.084 "write": true, 00:25:52.084 "unmap": true, 00:25:52.084 "flush": true, 00:25:52.084 "reset": true, 00:25:52.084 "nvme_admin": false, 00:25:52.084 "nvme_io": false, 00:25:52.084 "nvme_io_md": false, 00:25:52.084 "write_zeroes": true, 00:25:52.084 "zcopy": true, 00:25:52.084 "get_zone_info": false, 00:25:52.084 "zone_management": false, 00:25:52.084 "zone_append": false, 00:25:52.084 "compare": false, 00:25:52.084 "compare_and_write": false, 00:25:52.084 "abort": true, 00:25:52.084 "seek_hole": false, 00:25:52.084 "seek_data": false, 00:25:52.084 "copy": true, 00:25:52.084 "nvme_iov_md": false 00:25:52.084 }, 00:25:52.084 "memory_domains": [ 00:25:52.084 { 00:25:52.084 "dma_device_id": "system", 00:25:52.084 "dma_device_type": 1 00:25:52.084 }, 00:25:52.084 { 00:25:52.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:52.084 "dma_device_type": 2 00:25:52.084 } 00:25:52.084 ], 00:25:52.084 "driver_specific": { 00:25:52.084 "passthru": { 00:25:52.084 "name": "pt4", 00:25:52.084 "base_bdev_name": "malloc4" 00:25:52.084 } 00:25:52.084 } 00:25:52.084 }' 00:25:52.084 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:52.085 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:52.085 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:52.085 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:52.344 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:52.344 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:52.344 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:52.344 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:52.344 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:52.344 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:52.603 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:52.603 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:52.603 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:52.603 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:25:52.863 [2024-07-11 02:31:43.048470] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:52.863 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 
013552e4-d503-4176-9387-5317e0e79d48 '!=' 013552e4-d503-4176-9387-5317e0e79d48 ']' 00:25:52.863 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:25:52.863 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:52.863 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:25:52.863 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:53.123 [2024-07-11 02:31:43.300865] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:53.123 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:53.123 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:53.123 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:53.123 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:53.123 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:53.123 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:53.123 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:53.123 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:53.123 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:53.123 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:53.123 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.123 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.691 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:53.691 "name": "raid_bdev1", 00:25:53.691 "uuid": "013552e4-d503-4176-9387-5317e0e79d48", 00:25:53.691 "strip_size_kb": 0, 00:25:53.691 "state": "online", 00:25:53.691 "raid_level": "raid1", 00:25:53.691 "superblock": true, 00:25:53.691 "num_base_bdevs": 4, 00:25:53.691 "num_base_bdevs_discovered": 3, 00:25:53.691 "num_base_bdevs_operational": 3, 00:25:53.691 "base_bdevs_list": [ 00:25:53.691 { 00:25:53.691 "name": null, 00:25:53.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.691 "is_configured": false, 00:25:53.691 "data_offset": 2048, 00:25:53.691 "data_size": 63488 00:25:53.691 }, 00:25:53.691 { 00:25:53.691 "name": "pt2", 00:25:53.691 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:53.691 "is_configured": true, 00:25:53.691 "data_offset": 2048, 00:25:53.691 "data_size": 63488 00:25:53.691 }, 00:25:53.691 { 00:25:53.691 "name": "pt3", 00:25:53.691 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:53.691 "is_configured": true, 00:25:53.691 "data_offset": 2048, 00:25:53.691 "data_size": 63488 00:25:53.691 }, 00:25:53.691 { 00:25:53.691 "name": "pt4", 00:25:53.691 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:53.691 "is_configured": true, 00:25:53.691 "data_offset": 2048, 00:25:53.691 "data_size": 63488 00:25:53.691 } 00:25:53.691 ] 00:25:53.691 }' 00:25:53.691 02:31:43 
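[editor's note] Because has_redundancy returned 0 for raid1 (bdev_raid.sh@490), deleting pt1 only degrades the array: the dump above still shows state online, with num_base_bdevs_discovered and num_base_bdevs_operational both down to 3 and the first slot reduced to a null name with is_configured false. The same check in isolation, reusing the commands seen in this log:

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py"; sock=/var/tmp/spdk-raid.sock

  # Drop one mirror leg; a raid1 array should stay online, degraded.
  $rpc -s $sock bdev_passthru_delete pt1
  $rpc -s $sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
  # -> online 3/4, as in the dump above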
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:53.691 02:31:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.258 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:54.258 [2024-07-11 02:31:44.600296] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:54.258 [2024-07-11 02:31:44.600328] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:54.258 [2024-07-11 02:31:44.600379] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:54.258 [2024-07-11 02:31:44.600452] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:54.258 [2024-07-11 02:31:44.600464] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2549fa0 name raid_bdev1, state offline 00:25:54.258 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.258 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:25:54.516 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:25:54.517 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:25:54.517 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:25:54.517 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:25:54.517 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:54.777 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:25:54.777 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:25:54.777 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:54.777 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:25:54.777 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:25:54.777 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:55.036 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:25:55.036 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:25:55.036 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:25:55.036 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:25:55.036 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:55.295 [2024-07-11 02:31:45.582840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:55.295 [2024-07-11 02:31:45.582883] 
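[editor's note] The final pass (bdev_raid.sh@498-511) deletes raid_bdev1 and the surviving passthru bdevs, then recreates only part of the set; the dump that follows shows the array reassembled in configuring with a single discovered base and, presumably because pt1's earlier removal was recorded in the superblock, only 3 operational slots. A condensed sketch of this teardown-and-partial-reassembly step:

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py"; sock=/var/tmp/spdk-raid.sock

  # Full teardown, then bring back a single base: the array reassembles
  # from its superblock but must sit in 'configuring', not 'online'.
  $rpc -s $sock bdev_raid_delete raid_bdev1
  for pt in pt2 pt3 pt4; do $rpc -s $sock bdev_passthru_delete $pt; done
  $rpc -s $sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'
  # -> configuring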
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:55.295 [2024-07-11 02:31:45.582904] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25425b0 00:25:55.295 [2024-07-11 02:31:45.582916] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:55.295 [2024-07-11 02:31:45.584450] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:55.295 [2024-07-11 02:31:45.584478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:55.295 [2024-07-11 02:31:45.584539] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:55.295 [2024-07-11 02:31:45.584564] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:55.295 pt2 00:25:55.295 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:25:55.295 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:55.295 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:55.295 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:55.295 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:55.295 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:55.295 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:55.295 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:55.295 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:55.295 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:55.295 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.295 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:55.554 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:55.554 "name": "raid_bdev1", 00:25:55.554 "uuid": "013552e4-d503-4176-9387-5317e0e79d48", 00:25:55.554 "strip_size_kb": 0, 00:25:55.554 "state": "configuring", 00:25:55.554 "raid_level": "raid1", 00:25:55.554 "superblock": true, 00:25:55.554 "num_base_bdevs": 4, 00:25:55.554 "num_base_bdevs_discovered": 1, 00:25:55.554 "num_base_bdevs_operational": 3, 00:25:55.554 "base_bdevs_list": [ 00:25:55.554 { 00:25:55.554 "name": null, 00:25:55.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.554 "is_configured": false, 00:25:55.554 "data_offset": 2048, 00:25:55.554 "data_size": 63488 00:25:55.554 }, 00:25:55.554 { 00:25:55.554 "name": "pt2", 00:25:55.554 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:55.554 "is_configured": true, 00:25:55.554 "data_offset": 2048, 00:25:55.554 "data_size": 63488 00:25:55.554 }, 00:25:55.554 { 00:25:55.554 "name": null, 00:25:55.554 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:55.554 "is_configured": false, 00:25:55.554 "data_offset": 2048, 00:25:55.554 "data_size": 63488 00:25:55.554 }, 00:25:55.554 { 00:25:55.554 "name": null, 00:25:55.554 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:55.554 "is_configured": 
false, 00:25:55.554 "data_offset": 2048, 00:25:55.554 "data_size": 63488 00:25:55.554 } 00:25:55.554 ] 00:25:55.554 }' 00:25:55.554 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:55.554 02:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.123 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:25:56.123 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:25:56.123 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:56.692 [2024-07-11 02:31:46.886480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:56.692 [2024-07-11 02:31:46.886528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:56.692 [2024-07-11 02:31:46.886549] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25460e0 00:25:56.692 [2024-07-11 02:31:46.886561] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:56.693 [2024-07-11 02:31:46.886898] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:56.693 [2024-07-11 02:31:46.886915] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:56.693 [2024-07-11 02:31:46.886976] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:25:56.693 [2024-07-11 02:31:46.886996] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:56.693 pt3 00:25:56.693 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:25:56.693 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:56.693 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:56.693 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:56.693 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:56.693 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:56.693 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:56.693 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:56.693 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:56.693 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:56.693 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.693 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.952 02:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:56.952 "name": "raid_bdev1", 00:25:56.952 "uuid": "013552e4-d503-4176-9387-5317e0e79d48", 00:25:56.952 "strip_size_kb": 0, 00:25:56.952 "state": "configuring", 00:25:56.952 "raid_level": "raid1", 00:25:56.952 "superblock": true, 00:25:56.952 "num_base_bdevs": 
4, 00:25:56.952 "num_base_bdevs_discovered": 2, 00:25:56.952 "num_base_bdevs_operational": 3, 00:25:56.952 "base_bdevs_list": [ 00:25:56.952 { 00:25:56.952 "name": null, 00:25:56.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.952 "is_configured": false, 00:25:56.952 "data_offset": 2048, 00:25:56.952 "data_size": 63488 00:25:56.952 }, 00:25:56.952 { 00:25:56.952 "name": "pt2", 00:25:56.952 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:56.952 "is_configured": true, 00:25:56.952 "data_offset": 2048, 00:25:56.952 "data_size": 63488 00:25:56.952 }, 00:25:56.952 { 00:25:56.952 "name": "pt3", 00:25:56.952 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:56.952 "is_configured": true, 00:25:56.952 "data_offset": 2048, 00:25:56.952 "data_size": 63488 00:25:56.952 }, 00:25:56.952 { 00:25:56.952 "name": null, 00:25:56.952 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:56.952 "is_configured": false, 00:25:56.952 "data_offset": 2048, 00:25:56.952 "data_size": 63488 00:25:56.952 } 00:25:56.952 ] 00:25:56.952 }' 00:25:56.952 02:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:56.952 02:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.521 02:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:25:57.522 02:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:25:57.522 02:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:25:57.522 02:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:57.780 [2024-07-11 02:31:48.065630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:57.780 [2024-07-11 02:31:48.065678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:57.780 [2024-07-11 02:31:48.065698] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23984f0 00:25:57.780 [2024-07-11 02:31:48.065710] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:57.781 [2024-07-11 02:31:48.066038] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:57.781 [2024-07-11 02:31:48.066056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:57.781 [2024-07-11 02:31:48.066115] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:25:57.781 [2024-07-11 02:31:48.066135] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:57.781 [2024-07-11 02:31:48.066248] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2397ea0 00:25:57.781 [2024-07-11 02:31:48.066258] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:57.781 [2024-07-11 02:31:48.066434] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25389c0 00:25:57.781 [2024-07-11 02:31:48.066563] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2397ea0 00:25:57.781 [2024-07-11 02:31:48.066573] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2397ea0 00:25:57.781 [2024-07-11 02:31:48.066666] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:57.781 pt4 00:25:57.781 02:31:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:57.781 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:57.781 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:57.781 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:57.781 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:57.781 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:57.781 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:57.781 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:57.781 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:57.781 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:57.781 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.781 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:58.039 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:58.039 "name": "raid_bdev1", 00:25:58.039 "uuid": "013552e4-d503-4176-9387-5317e0e79d48", 00:25:58.039 "strip_size_kb": 0, 00:25:58.039 "state": "online", 00:25:58.039 "raid_level": "raid1", 00:25:58.039 "superblock": true, 00:25:58.039 "num_base_bdevs": 4, 00:25:58.039 "num_base_bdevs_discovered": 3, 00:25:58.039 "num_base_bdevs_operational": 3, 00:25:58.039 "base_bdevs_list": [ 00:25:58.039 { 00:25:58.039 "name": null, 00:25:58.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:58.039 "is_configured": false, 00:25:58.039 "data_offset": 2048, 00:25:58.039 "data_size": 63488 00:25:58.039 }, 00:25:58.039 { 00:25:58.039 "name": "pt2", 00:25:58.039 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:58.039 "is_configured": true, 00:25:58.039 "data_offset": 2048, 00:25:58.039 "data_size": 63488 00:25:58.039 }, 00:25:58.039 { 00:25:58.039 "name": "pt3", 00:25:58.039 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:58.039 "is_configured": true, 00:25:58.039 "data_offset": 2048, 00:25:58.039 "data_size": 63488 00:25:58.039 }, 00:25:58.039 { 00:25:58.039 "name": "pt4", 00:25:58.039 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:58.039 "is_configured": true, 00:25:58.039 "data_offset": 2048, 00:25:58.039 "data_size": 63488 00:25:58.039 } 00:25:58.039 ] 00:25:58.039 }' 00:25:58.039 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:58.039 02:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.604 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:58.862 [2024-07-11 02:31:49.104374] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:58.862 [2024-07-11 02:31:49.104399] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:58.862 [2024-07-11 02:31:49.104453] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:25:58.862 [2024-07-11 02:31:49.104518] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:58.862 [2024-07-11 02:31:49.104530] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2397ea0 name raid_bdev1, state offline 00:25:58.862 02:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.862 02:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:25:59.120 02:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:25:59.120 02:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:25:59.120 02:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:25:59.120 02:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:25:59.120 02:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:59.379 02:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:59.379 [2024-07-11 02:31:49.714058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:59.379 [2024-07-11 02:31:49.714102] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:59.379 [2024-07-11 02:31:49.714125] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2544c20 00:25:59.379 [2024-07-11 02:31:49.714138] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:59.379 [2024-07-11 02:31:49.715672] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:59.379 [2024-07-11 02:31:49.715699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:59.379 [2024-07-11 02:31:49.715766] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:59.379 [2024-07-11 02:31:49.715792] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:59.379 [2024-07-11 02:31:49.715887] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:59.379 [2024-07-11 02:31:49.715900] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:59.379 [2024-07-11 02:31:49.715913] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x254a5b0 name raid_bdev1, state configuring 00:25:59.379 [2024-07-11 02:31:49.715936] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:59.379 [2024-07-11 02:31:49.716010] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:59.379 pt1 00:25:59.379 02:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:25:59.379 02:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:25:59.379 02:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:59.379 02:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:25:59.379 02:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:59.379 02:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:59.379 02:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:59.379 02:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:59.379 02:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:59.379 02:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:59.379 02:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:59.379 02:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.379 02:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:59.638 02:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:59.638 "name": "raid_bdev1", 00:25:59.638 "uuid": "013552e4-d503-4176-9387-5317e0e79d48", 00:25:59.638 "strip_size_kb": 0, 00:25:59.638 "state": "configuring", 00:25:59.638 "raid_level": "raid1", 00:25:59.638 "superblock": true, 00:25:59.638 "num_base_bdevs": 4, 00:25:59.638 "num_base_bdevs_discovered": 2, 00:25:59.638 "num_base_bdevs_operational": 3, 00:25:59.638 "base_bdevs_list": [ 00:25:59.638 { 00:25:59.638 "name": null, 00:25:59.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:59.638 "is_configured": false, 00:25:59.638 "data_offset": 2048, 00:25:59.638 "data_size": 63488 00:25:59.638 }, 00:25:59.638 { 00:25:59.638 "name": "pt2", 00:25:59.638 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:59.638 "is_configured": true, 00:25:59.638 "data_offset": 2048, 00:25:59.638 "data_size": 63488 00:25:59.638 }, 00:25:59.638 { 00:25:59.638 "name": "pt3", 00:25:59.638 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:59.638 "is_configured": true, 00:25:59.638 "data_offset": 2048, 00:25:59.638 "data_size": 63488 00:25:59.638 }, 00:25:59.638 { 00:25:59.638 "name": null, 00:25:59.638 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:59.638 "is_configured": false, 00:25:59.638 "data_offset": 2048, 00:25:59.638 "data_size": 63488 00:25:59.638 } 00:25:59.638 ] 00:25:59.638 }' 00:25:59.638 02:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:59.638 02:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.204 02:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:26:00.204 02:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:00.463 02:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:26:00.463 02:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:00.723 [2024-07-11 02:31:50.937311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:00.723 [2024-07-11 02:31:50.937358] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:00.723 [2024-07-11 02:31:50.937377] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2399b10 00:26:00.723 [2024-07-11 02:31:50.937389] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:00.723 [2024-07-11 02:31:50.937720] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:00.723 [2024-07-11 02:31:50.937737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:00.723 [2024-07-11 02:31:50.937804] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:00.723 [2024-07-11 02:31:50.937825] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:00.723 [2024-07-11 02:31:50.937937] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2399e00 00:26:00.723 [2024-07-11 02:31:50.937948] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:00.723 [2024-07-11 02:31:50.938119] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2430360 00:26:00.723 [2024-07-11 02:31:50.938250] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2399e00 00:26:00.723 [2024-07-11 02:31:50.938260] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2399e00 00:26:00.723 [2024-07-11 02:31:50.938357] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:00.723 pt4 00:26:00.723 02:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:00.723 02:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:00.723 02:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:00.723 02:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:00.723 02:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:00.723 02:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:00.723 02:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:00.723 02:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:00.723 02:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:00.723 02:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:00.723 02:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.723 02:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:00.994 02:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:00.994 "name": "raid_bdev1", 00:26:00.994 "uuid": "013552e4-d503-4176-9387-5317e0e79d48", 00:26:00.994 "strip_size_kb": 0, 00:26:00.994 "state": "online", 00:26:00.994 "raid_level": "raid1", 00:26:00.994 "superblock": true, 00:26:00.994 "num_base_bdevs": 4, 00:26:00.994 "num_base_bdevs_discovered": 3, 00:26:00.994 "num_base_bdevs_operational": 3, 00:26:00.994 "base_bdevs_list": [ 00:26:00.994 { 00:26:00.994 "name": null, 00:26:00.994 
"uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.994 "is_configured": false, 00:26:00.994 "data_offset": 2048, 00:26:00.994 "data_size": 63488 00:26:00.994 }, 00:26:00.994 { 00:26:00.994 "name": "pt2", 00:26:00.994 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:00.994 "is_configured": true, 00:26:00.994 "data_offset": 2048, 00:26:00.994 "data_size": 63488 00:26:00.994 }, 00:26:00.994 { 00:26:00.994 "name": "pt3", 00:26:00.994 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:00.994 "is_configured": true, 00:26:00.994 "data_offset": 2048, 00:26:00.994 "data_size": 63488 00:26:00.994 }, 00:26:00.994 { 00:26:00.994 "name": "pt4", 00:26:00.994 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:00.994 "is_configured": true, 00:26:00.994 "data_offset": 2048, 00:26:00.994 "data_size": 63488 00:26:00.994 } 00:26:00.994 ] 00:26:00.994 }' 00:26:00.994 02:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:00.994 02:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.591 02:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:26:01.591 02:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:01.855 02:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:26:01.855 02:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:01.855 02:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:26:01.855 [2024-07-11 02:31:52.277158] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:02.114 02:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 013552e4-d503-4176-9387-5317e0e79d48 '!=' 013552e4-d503-4176-9387-5317e0e79d48 ']' 00:26:02.114 02:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1999343 00:26:02.114 02:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 1999343 ']' 00:26:02.114 02:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 1999343 00:26:02.114 02:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:26:02.114 02:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:02.114 02:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1999343 00:26:02.114 02:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:02.114 02:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:02.114 02:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1999343' 00:26:02.114 killing process with pid 1999343 00:26:02.114 02:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 1999343 00:26:02.114 [2024-07-11 02:31:52.347858] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:02.114 [2024-07-11 02:31:52.347914] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:02.114 [2024-07-11 02:31:52.347981] bdev_raid.c: 
451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:02.114 [2024-07-11 02:31:52.347993] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2399e00 name raid_bdev1, state offline 00:26:02.114 02:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 1999343 00:26:02.114 [2024-07-11 02:31:52.387217] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:02.374 02:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:26:02.374 00:26:02.374 real 0m26.159s 00:26:02.374 user 0m48.247s 00:26:02.374 sys 0m4.736s 00:26:02.374 02:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:02.374 02:31:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.374 ************************************ 00:26:02.374 END TEST raid_superblock_test 00:26:02.374 ************************************ 00:26:02.374 02:31:52 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:02.374 02:31:52 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:26:02.374 02:31:52 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:26:02.374 02:31:52 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:02.374 02:31:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:02.374 ************************************ 00:26:02.374 START TEST raid_read_error_test 00:26:02.374 ************************************ 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 read 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 
'BaseBdev3' 'BaseBdev4') 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.irvSHxuNeD 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2005234 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2005234 /var/tmp/spdk-raid.sock 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2005234 ']' 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:02.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:02.374 02:31:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.374 [2024-07-11 02:31:52.747780] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
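The read-error test starting here builds each RAID member as a three-layer stack so that I/O failures can be injected underneath the layer the raid module claims: a malloc bdev as backing storage, an error bdev wrapped around it (which SPDK names with an EE_ prefix), and a passthru bdev on top. Condensed into a sketch, the RPC sequence traced below amounts to the following; the commands, names, and sizes are the ones this run uses, while the loop itself is an editorial summary of four identical stanzas:

    RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        $RPC bdev_malloc_create 32 512 -b BaseBdev${i}_malloc               # 32 MiB backing store, 512 B blocks
        $RPC bdev_error_create BaseBdev${i}_malloc                          # injectable wrapper, EE_BaseBdev${i}_malloc
        $RPC bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i} # the layer bdev_raid_create claims
    done
    $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s

The -s flag writes an on-disk superblock, which matches the state dumps' data_offset 2048 / data_size 63488: of each member's 65536 blocks (32 MiB at 512 B), the first 2048 are reserved and 63488 remain for data.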
00:26:02.374 [2024-07-11 02:31:52.747842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2005234 ] 00:26:02.634 [2024-07-11 02:31:52.885618] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.634 [2024-07-11 02:31:52.939128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.634 [2024-07-11 02:31:53.012253] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:02.634 [2024-07-11 02:31:53.012289] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:03.571 02:31:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:03.571 02:31:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:26:03.571 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:03.571 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:03.571 BaseBdev1_malloc 00:26:03.571 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:26:03.829 true 00:26:03.829 02:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:04.088 [2024-07-11 02:31:54.403547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:04.088 [2024-07-11 02:31:54.403591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:04.088 [2024-07-11 02:31:54.403620] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xddb330 00:26:04.088 [2024-07-11 02:31:54.403637] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:04.088 [2024-07-11 02:31:54.405582] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:04.088 [2024-07-11 02:31:54.405615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:04.088 BaseBdev1 00:26:04.088 02:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:04.088 02:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:04.346 BaseBdev2_malloc 00:26:04.346 02:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:26:04.606 true 00:26:04.606 02:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:04.865 [2024-07-11 02:31:55.093865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:04.865 [2024-07-11 02:31:55.093908] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:04.865 [2024-07-11 02:31:55.093934] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdd4b40 00:26:04.865 [2024-07-11 02:31:55.093950] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:04.865 [2024-07-11 02:31:55.095569] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:04.865 [2024-07-11 02:31:55.095599] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:04.865 BaseBdev2 00:26:04.865 02:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:04.865 02:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:05.125 BaseBdev3_malloc 00:26:05.125 02:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:26:05.385 true 00:26:05.385 02:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:05.644 [2024-07-11 02:31:55.821466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:05.644 [2024-07-11 02:31:55.821512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:05.644 [2024-07-11 02:31:55.821538] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdd80f0 00:26:05.644 [2024-07-11 02:31:55.821555] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:05.644 [2024-07-11 02:31:55.823175] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:05.644 [2024-07-11 02:31:55.823208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:05.644 BaseBdev3 00:26:05.644 02:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:05.644 02:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:05.903 BaseBdev4_malloc 00:26:05.903 02:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:26:05.903 true 00:26:05.903 02:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:26:06.163 [2024-07-11 02:31:56.483664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:26:06.163 [2024-07-11 02:31:56.483706] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:06.163 [2024-07-11 02:31:56.483734] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc294c0 00:26:06.163 [2024-07-11 02:31:56.483749] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:06.163 [2024-07-11 02:31:56.485194] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:26:06.163 [2024-07-11 02:31:56.485224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:06.163 BaseBdev4 00:26:06.163 02:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:26:06.424 [2024-07-11 02:31:56.728350] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:06.424 [2024-07-11 02:31:56.729699] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:06.424 [2024-07-11 02:31:56.729773] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:06.424 [2024-07-11 02:31:56.729834] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:06.424 [2024-07-11 02:31:56.730059] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xdd0a20 00:26:06.424 [2024-07-11 02:31:56.730071] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:06.424 [2024-07-11 02:31:56.730266] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xdd09f0 00:26:06.424 [2024-07-11 02:31:56.730427] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xdd0a20 00:26:06.424 [2024-07-11 02:31:56.730437] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xdd0a20 00:26:06.424 [2024-07-11 02:31:56.730538] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:06.424 02:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:06.424 02:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:06.424 02:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:06.424 02:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:06.424 02:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:06.424 02:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:06.424 02:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:06.424 02:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:06.424 02:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:06.424 02:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:06.424 02:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.424 02:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:06.685 02:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:06.685 "name": "raid_bdev1", 00:26:06.685 "uuid": "7bc7d0fb-6d79-40dd-b69b-b3920506d2e6", 00:26:06.685 "strip_size_kb": 0, 00:26:06.685 "state": "online", 00:26:06.685 "raid_level": "raid1", 00:26:06.685 "superblock": true, 00:26:06.685 "num_base_bdevs": 4, 00:26:06.685 "num_base_bdevs_discovered": 4, 00:26:06.685 
"num_base_bdevs_operational": 4, 00:26:06.685 "base_bdevs_list": [ 00:26:06.685 { 00:26:06.685 "name": "BaseBdev1", 00:26:06.685 "uuid": "29729083-0415-56eb-9344-09237e513a2a", 00:26:06.685 "is_configured": true, 00:26:06.685 "data_offset": 2048, 00:26:06.685 "data_size": 63488 00:26:06.685 }, 00:26:06.685 { 00:26:06.685 "name": "BaseBdev2", 00:26:06.685 "uuid": "8f599c76-60b4-52a2-a68c-608d281f779e", 00:26:06.685 "is_configured": true, 00:26:06.685 "data_offset": 2048, 00:26:06.685 "data_size": 63488 00:26:06.685 }, 00:26:06.685 { 00:26:06.685 "name": "BaseBdev3", 00:26:06.685 "uuid": "4e59c777-80f9-5265-8285-3b92762e10e8", 00:26:06.685 "is_configured": true, 00:26:06.685 "data_offset": 2048, 00:26:06.685 "data_size": 63488 00:26:06.685 }, 00:26:06.685 { 00:26:06.685 "name": "BaseBdev4", 00:26:06.685 "uuid": "5db8c9ff-5daa-5d21-92d3-1b611758cb62", 00:26:06.685 "is_configured": true, 00:26:06.685 "data_offset": 2048, 00:26:06.685 "data_size": 63488 00:26:06.685 } 00:26:06.685 ] 00:26:06.685 }' 00:26:06.685 02:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:06.685 02:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.255 02:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:26:07.255 02:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:26:07.514 [2024-07-11 02:31:57.715199] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xdd0930 00:26:08.453 02:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:26:08.453 02:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:26:08.453 02:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:26:08.453 02:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:26:08.453 02:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:26:08.453 02:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:08.453 02:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:08.453 02:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:08.453 02:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:08.453 02:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:08.453 02:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:08.453 02:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:08.453 02:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:08.453 02:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:08.453 02:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:08.453 02:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:08.453 02:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:08.713 02:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:08.713 "name": "raid_bdev1", 00:26:08.713 "uuid": "7bc7d0fb-6d79-40dd-b69b-b3920506d2e6", 00:26:08.713 "strip_size_kb": 0, 00:26:08.713 "state": "online", 00:26:08.713 "raid_level": "raid1", 00:26:08.713 "superblock": true, 00:26:08.713 "num_base_bdevs": 4, 00:26:08.713 "num_base_bdevs_discovered": 4, 00:26:08.713 "num_base_bdevs_operational": 4, 00:26:08.713 "base_bdevs_list": [ 00:26:08.713 { 00:26:08.713 "name": "BaseBdev1", 00:26:08.713 "uuid": "29729083-0415-56eb-9344-09237e513a2a", 00:26:08.713 "is_configured": true, 00:26:08.713 "data_offset": 2048, 00:26:08.713 "data_size": 63488 00:26:08.713 }, 00:26:08.713 { 00:26:08.713 "name": "BaseBdev2", 00:26:08.713 "uuid": "8f599c76-60b4-52a2-a68c-608d281f779e", 00:26:08.713 "is_configured": true, 00:26:08.713 "data_offset": 2048, 00:26:08.713 "data_size": 63488 00:26:08.713 }, 00:26:08.713 { 00:26:08.713 "name": "BaseBdev3", 00:26:08.713 "uuid": "4e59c777-80f9-5265-8285-3b92762e10e8", 00:26:08.713 "is_configured": true, 00:26:08.713 "data_offset": 2048, 00:26:08.713 "data_size": 63488 00:26:08.713 }, 00:26:08.713 { 00:26:08.713 "name": "BaseBdev4", 00:26:08.713 "uuid": "5db8c9ff-5daa-5d21-92d3-1b611758cb62", 00:26:08.713 "is_configured": true, 00:26:08.713 "data_offset": 2048, 00:26:08.713 "data_size": 63488 00:26:08.713 } 00:26:08.713 ] 00:26:08.713 }' 00:26:08.713 02:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:08.713 02:31:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.282 02:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:09.542 [2024-07-11 02:31:59.809491] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:09.542 [2024-07-11 02:31:59.809533] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:09.542 [2024-07-11 02:31:59.812779] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:09.542 [2024-07-11 02:31:59.812822] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:09.542 [2024-07-11 02:31:59.812936] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:09.542 [2024-07-11 02:31:59.812947] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xdd0a20 name raid_bdev1, state offline 00:26:09.542 0 00:26:09.542 02:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2005234 00:26:09.542 02:31:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2005234 ']' 00:26:09.542 02:31:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2005234 00:26:09.542 02:31:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:26:09.542 02:31:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:09.542 02:31:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2005234 00:26:09.542 02:31:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:26:09.542 02:31:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:09.542 02:31:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2005234' 00:26:09.542 killing process with pid 2005234 00:26:09.542 02:31:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2005234 00:26:09.542 [2024-07-11 02:31:59.877474] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:09.542 02:31:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2005234 00:26:09.542 [2024-07-11 02:31:59.906804] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:09.801 02:32:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.irvSHxuNeD 00:26:09.801 02:32:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:26:09.801 02:32:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:26:09.801 02:32:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:26:09.801 02:32:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:26:09.801 02:32:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:09.801 02:32:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:26:09.801 02:32:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:26:09.801 00:26:09.801 real 0m7.433s 00:26:09.801 user 0m11.830s 00:26:09.801 sys 0m1.375s 00:26:09.801 02:32:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:09.801 02:32:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.801 ************************************ 00:26:09.801 END TEST raid_read_error_test 00:26:09.801 ************************************ 00:26:09.801 02:32:00 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:09.801 02:32:00 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:26:09.801 02:32:00 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:26:09.801 02:32:00 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:09.801 02:32:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:09.801 ************************************ 00:26:09.801 START TEST raid_write_error_test 00:26:09.801 ************************************ 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 write 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 
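Because raid1 keeps a full copy on every member, the read failures injected into EE_BaseBdev1_malloc above must be completed from the surviving mirrors, and the pass criterion applied at the end of raid_read_error_test is that bdevperf reports a zero failure rate for raid_bdev1. The check traced above reduces to this sketch (the log path and field position are copied from the trace; reading column six of bdevperf's per-bdev summary as the failures-per-second figure follows the trace, not bdevperf documentation):

    fail_per_s=$(grep -v Job /raidtest/tmp.irvSHxuNeD | grep raid_bdev1 | awk '{print $6}')
    [[ $fail_per_s = 0.00 ]]    # any error that surfaces past the raid layer fails the test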
00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.igbcH1pHRY 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2006298 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2006298 /var/tmp/spdk-raid.sock 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2006298 ']' 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:09.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
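raid_write_error_test now repeats the same flow with error_io_type=write, injecting failures on the write path of one member and applying the same zero-failure criterion. The bdevperf process just launched sits idle until told to run; a condensed sketch of the two invocations in this run follows (in the script bdevperf is backgrounded and waitforlisten polls the RPC socket before any rpc.py call; the flag meanings in the comments are standard bdevperf options stated as an aid, not taken from the trace):

    # -t 60 -w randrw -M 50 : 60 s of random I/O with a 50/50 read/write mix
    # -o 128k -q 1          : 128 KiB I/Os at queue depth 1
    # -z                    : wait for an RPC before starting the workload
    # -L bdev_raid          : enable the raid module's debug log (the *DEBUG* lines here)
    /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 128k -q 1 -z -f -L bdev_raid &
    # once the error injection is armed, the workload is kicked off with:
    /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/spdk-raid.sock perform_tests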
00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:09.801 02:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.060 [2024-07-11 02:32:00.266696] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:26:10.060 [2024-07-11 02:32:00.266769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2006298 ] 00:26:10.060 [2024-07-11 02:32:00.401273] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.060 [2024-07-11 02:32:00.450778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.320 [2024-07-11 02:32:00.508747] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:10.320 [2024-07-11 02:32:00.508808] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:10.889 02:32:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:10.889 02:32:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:26:10.889 02:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:10.889 02:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:11.147 BaseBdev1_malloc 00:26:11.147 02:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:26:11.407 true 00:26:11.407 02:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:11.666 [2024-07-11 02:32:01.918826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:11.666 [2024-07-11 02:32:01.918872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:11.666 [2024-07-11 02:32:01.918900] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26c3330 00:26:11.666 [2024-07-11 02:32:01.918917] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:11.666 [2024-07-11 02:32:01.920840] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:11.666 [2024-07-11 02:32:01.920874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:11.666 BaseBdev1 00:26:11.666 02:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:11.666 02:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:11.924 BaseBdev2_malloc 00:26:11.924 02:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:26:12.183 true 00:26:12.183 02:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:12.443 [2024-07-11 02:32:02.657453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:12.443 [2024-07-11 02:32:02.657504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:12.443 [2024-07-11 02:32:02.657531] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26bcb40 00:26:12.443 [2024-07-11 02:32:02.657546] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:12.443 [2024-07-11 02:32:02.659122] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:12.443 [2024-07-11 02:32:02.659152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:12.443 BaseBdev2 00:26:12.443 02:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:12.443 02:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:12.702 BaseBdev3_malloc 00:26:12.702 02:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:26:12.962 true 00:26:12.962 02:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:13.221 [2024-07-11 02:32:03.405314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:13.221 [2024-07-11 02:32:03.405361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:13.221 [2024-07-11 02:32:03.405389] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26c00f0 00:26:13.221 [2024-07-11 02:32:03.405405] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:13.221 [2024-07-11 02:32:03.407083] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:13.221 [2024-07-11 02:32:03.407114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:13.221 BaseBdev3 00:26:13.221 02:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:13.221 02:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:13.479 BaseBdev4_malloc 00:26:13.479 02:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:26:13.737 true 00:26:13.737 02:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:26:13.737 [2024-07-11 02:32:04.141003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:26:13.737 [2024-07-11 02:32:04.141045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:26:13.737 [2024-07-11 02:32:04.141079] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25114c0 00:26:13.737 [2024-07-11 02:32:04.141097] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:13.737 [2024-07-11 02:32:04.142665] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:13.737 [2024-07-11 02:32:04.142695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:13.737 BaseBdev4 00:26:13.737 02:32:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:26:13.996 [2024-07-11 02:32:04.373652] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:13.996 [2024-07-11 02:32:04.374965] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:13.996 [2024-07-11 02:32:04.375033] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:13.996 [2024-07-11 02:32:04.375091] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:13.996 [2024-07-11 02:32:04.375321] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x26b8a20 00:26:13.996 [2024-07-11 02:32:04.375332] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:13.996 [2024-07-11 02:32:04.375525] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26b89f0 00:26:13.996 [2024-07-11 02:32:04.375683] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x26b8a20 00:26:13.996 [2024-07-11 02:32:04.375693] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x26b8a20 00:26:13.996 [2024-07-11 02:32:04.375810] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:13.996 02:32:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:13.996 02:32:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:13.996 02:32:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:13.996 02:32:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:13.996 02:32:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:13.996 02:32:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:13.996 02:32:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:13.996 02:32:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:13.996 02:32:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:13.996 02:32:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:13.996 02:32:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.996 02:32:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:14.255 02:32:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:14.255 "name": "raid_bdev1", 00:26:14.255 "uuid": "4e8e6154-fdcc-4b6b-9d6f-1dea04358b44", 00:26:14.255 "strip_size_kb": 0, 00:26:14.255 "state": "online", 00:26:14.255 "raid_level": "raid1", 00:26:14.255 "superblock": true, 00:26:14.255 "num_base_bdevs": 4, 00:26:14.255 "num_base_bdevs_discovered": 4, 00:26:14.255 "num_base_bdevs_operational": 4, 00:26:14.255 "base_bdevs_list": [ 00:26:14.255 { 00:26:14.255 "name": "BaseBdev1", 00:26:14.255 "uuid": "b6b91730-bec2-5d6f-bdb8-f4d8bf813821", 00:26:14.255 "is_configured": true, 00:26:14.255 "data_offset": 2048, 00:26:14.255 "data_size": 63488 00:26:14.255 }, 00:26:14.255 { 00:26:14.255 "name": "BaseBdev2", 00:26:14.255 "uuid": "fff22699-2d15-5bc5-b868-aee69a398965", 00:26:14.255 "is_configured": true, 00:26:14.255 "data_offset": 2048, 00:26:14.255 "data_size": 63488 00:26:14.255 }, 00:26:14.255 { 00:26:14.255 "name": "BaseBdev3", 00:26:14.255 "uuid": "7c64d03f-8e99-5c7a-a636-50db6523d4b9", 00:26:14.255 "is_configured": true, 00:26:14.255 "data_offset": 2048, 00:26:14.255 "data_size": 63488 00:26:14.255 }, 00:26:14.255 { 00:26:14.255 "name": "BaseBdev4", 00:26:14.255 "uuid": "f35db3a2-8787-55b9-809c-365581c91624", 00:26:14.255 "is_configured": true, 00:26:14.255 "data_offset": 2048, 00:26:14.255 "data_size": 63488 00:26:14.255 } 00:26:14.255 ] 00:26:14.255 }' 00:26:14.255 02:32:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:14.255 02:32:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.823 02:32:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:26:14.823 02:32:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:26:15.082 [2024-07-11 02:32:05.336444] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26b8930 00:26:16.018 02:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:26:16.018 [2024-07-11 02:32:06.399933] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:26:16.018 [2024-07-11 02:32:06.399991] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:16.018 [2024-07-11 02:32:06.400244] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x26b8930 00:26:16.018 02:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:26:16.018 02:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:26:16.018 02:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:26:16.018 02:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:26:16.018 02:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:16.018 02:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:16.018 02:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:16.018 02:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:16.018 
02:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:16.018 02:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:16.018 02:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:16.018 02:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:16.018 02:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:16.018 02:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:16.018 02:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.018 02:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:16.278 02:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:16.278 "name": "raid_bdev1", 00:26:16.278 "uuid": "4e8e6154-fdcc-4b6b-9d6f-1dea04358b44", 00:26:16.278 "strip_size_kb": 0, 00:26:16.278 "state": "online", 00:26:16.278 "raid_level": "raid1", 00:26:16.278 "superblock": true, 00:26:16.278 "num_base_bdevs": 4, 00:26:16.278 "num_base_bdevs_discovered": 3, 00:26:16.278 "num_base_bdevs_operational": 3, 00:26:16.278 "base_bdevs_list": [ 00:26:16.278 { 00:26:16.278 "name": null, 00:26:16.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.278 "is_configured": false, 00:26:16.278 "data_offset": 2048, 00:26:16.278 "data_size": 63488 00:26:16.278 }, 00:26:16.278 { 00:26:16.278 "name": "BaseBdev2", 00:26:16.278 "uuid": "fff22699-2d15-5bc5-b868-aee69a398965", 00:26:16.278 "is_configured": true, 00:26:16.278 "data_offset": 2048, 00:26:16.278 "data_size": 63488 00:26:16.278 }, 00:26:16.278 { 00:26:16.278 "name": "BaseBdev3", 00:26:16.278 "uuid": "7c64d03f-8e99-5c7a-a636-50db6523d4b9", 00:26:16.278 "is_configured": true, 00:26:16.278 "data_offset": 2048, 00:26:16.278 "data_size": 63488 00:26:16.278 }, 00:26:16.278 { 00:26:16.278 "name": "BaseBdev4", 00:26:16.278 "uuid": "f35db3a2-8787-55b9-809c-365581c91624", 00:26:16.278 "is_configured": true, 00:26:16.278 "data_offset": 2048, 00:26:16.278 "data_size": 63488 00:26:16.278 } 00:26:16.278 ] 00:26:16.278 }' 00:26:16.278 02:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:16.278 02:32:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.844 02:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:17.103 [2024-07-11 02:32:07.434698] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:17.103 [2024-07-11 02:32:07.434737] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:17.103 [2024-07-11 02:32:07.438019] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:17.103 [2024-07-11 02:32:07.438053] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:17.103 [2024-07-11 02:32:07.438149] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:17.103 [2024-07-11 02:32:07.438160] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x26b8a20 name raid_bdev1, state 
offline 00:26:17.103 0 00:26:17.103 02:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2006298 00:26:17.103 02:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2006298 ']' 00:26:17.103 02:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2006298 00:26:17.103 02:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:26:17.103 02:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:17.103 02:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2006298 00:26:17.103 02:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:17.103 02:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:17.103 02:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2006298' 00:26:17.103 killing process with pid 2006298 00:26:17.103 02:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2006298 00:26:17.103 [2024-07-11 02:32:07.521826] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:17.103 02:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2006298 00:26:17.363 [2024-07-11 02:32:07.553324] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:17.363 02:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.igbcH1pHRY 00:26:17.363 02:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:26:17.363 02:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:26:17.363 02:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:26:17.363 02:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:26:17.363 02:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:17.363 02:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:26:17.363 02:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:26:17.363 00:26:17.363 real 0m7.582s 00:26:17.363 user 0m12.137s 00:26:17.363 sys 0m1.328s 00:26:17.363 02:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:17.363 02:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.363 ************************************ 00:26:17.363 END TEST raid_write_error_test 00:26:17.363 ************************************ 00:26:17.623 02:32:07 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:17.623 02:32:07 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' true = true ']' 00:26:17.623 02:32:07 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:26:17.623 02:32:07 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:26:17.623 02:32:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:26:17.623 02:32:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:17.623 02:32:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:17.623 ************************************ 00:26:17.623 START TEST raid_rebuild_test 00:26:17.623 ************************************ 00:26:17.623 
02:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 false false true 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=2007367 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 2007367 /var/tmp/spdk-raid.sock 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 2007367 ']' 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:17.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
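The records that follow build the rebuild test's device stack on the new socket: each base bdev is a malloc bdev wrapped in a passthru (no error bdev this time), while the spare is additionally routed through a delay bdev so the rebuild stays slow enough to observe mid-flight. A condensed sketch of the RPC sequence traced below (delay latencies are in microseconds, i.e. 100 ms average and p99 write latency):

    RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for b in BaseBdev1 BaseBdev2; do
        $RPC bdev_malloc_create 32 512 -b ${b}_malloc      # 32 MiB, 512-byte blocks
        $RPC bdev_passthru_create -b ${b}_malloc -p $b
    done
    $RPC bdev_malloc_create 32 512 -b spare_malloc
    $RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $RPC bdev_passthru_create -b spare_delay -p spare
    # raid1 across the two passthrus; no -s, so superblock stays false
    $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1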
00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:17.623 02:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.623 [2024-07-11 02:32:07.940374] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:26:17.623 [2024-07-11 02:32:07.940451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2007367 ] 00:26:17.623 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:17.623 Zero copy mechanism will not be used. 00:26:17.882 [2024-07-11 02:32:08.091073] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.882 [2024-07-11 02:32:08.140222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.882 [2024-07-11 02:32:08.198241] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:17.882 [2024-07-11 02:32:08.198278] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:18.815 02:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:18.815 02:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:26:18.815 02:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:18.815 02:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:18.815 BaseBdev1_malloc 00:26:18.815 02:32:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:19.072 [2024-07-11 02:32:09.351372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:19.072 [2024-07-11 02:32:09.351419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:19.072 [2024-07-11 02:32:09.351442] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20f9ee0 00:26:19.072 [2024-07-11 02:32:09.351460] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:19.072 [2024-07-11 02:32:09.353282] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:19.072 [2024-07-11 02:32:09.353311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:19.072 BaseBdev1 00:26:19.072 02:32:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:19.072 02:32:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:19.330 BaseBdev2_malloc 00:26:19.330 02:32:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:19.588 [2024-07-11 02:32:09.846622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:19.588 [2024-07-11 02:32:09.846666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:19.589 
[2024-07-11 02:32:09.846686] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20fb870 00:26:19.589 [2024-07-11 02:32:09.846700] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:19.589 [2024-07-11 02:32:09.848238] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:19.589 [2024-07-11 02:32:09.848266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:19.589 BaseBdev2 00:26:19.589 02:32:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:26:19.847 spare_malloc 00:26:19.847 02:32:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:20.105 spare_delay 00:26:20.105 02:32:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:20.364 [2024-07-11 02:32:10.589049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:20.364 [2024-07-11 02:32:10.589092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:20.364 [2024-07-11 02:32:10.589113] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20f61d0 00:26:20.364 [2024-07-11 02:32:10.589126] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:20.364 [2024-07-11 02:32:10.590633] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:20.364 [2024-07-11 02:32:10.590661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:20.364 spare 00:26:20.365 02:32:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:26:20.623 [2024-07-11 02:32:10.833708] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:20.623 [2024-07-11 02:32:10.834986] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:20.623 [2024-07-11 02:32:10.835060] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x20f6d30 00:26:20.623 [2024-07-11 02:32:10.835072] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:26:20.623 [2024-07-11 02:32:10.835270] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20e79d0 00:26:20.623 [2024-07-11 02:32:10.835407] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20f6d30 00:26:20.623 [2024-07-11 02:32:10.835417] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x20f6d30 00:26:20.623 [2024-07-11 02:32:10.835525] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:20.623 02:32:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:20.623 02:32:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:20.623 02:32:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:26:20.623 02:32:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:20.623 02:32:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:20.623 02:32:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:20.623 02:32:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:20.623 02:32:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:20.623 02:32:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:20.623 02:32:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:20.623 02:32:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:20.623 02:32:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:20.883 02:32:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:20.883 "name": "raid_bdev1", 00:26:20.883 "uuid": "c024564a-a9d8-450e-b732-0f438256304f", 00:26:20.883 "strip_size_kb": 0, 00:26:20.883 "state": "online", 00:26:20.883 "raid_level": "raid1", 00:26:20.883 "superblock": false, 00:26:20.883 "num_base_bdevs": 2, 00:26:20.883 "num_base_bdevs_discovered": 2, 00:26:20.883 "num_base_bdevs_operational": 2, 00:26:20.883 "base_bdevs_list": [ 00:26:20.883 { 00:26:20.883 "name": "BaseBdev1", 00:26:20.883 "uuid": "9f2b935c-3de3-5f31-aa26-78606d708214", 00:26:20.883 "is_configured": true, 00:26:20.883 "data_offset": 0, 00:26:20.883 "data_size": 65536 00:26:20.883 }, 00:26:20.883 { 00:26:20.883 "name": "BaseBdev2", 00:26:20.883 "uuid": "a3734c39-119e-5037-9972-cd1dac8998a9", 00:26:20.883 "is_configured": true, 00:26:20.883 "data_offset": 0, 00:26:20.883 "data_size": 65536 00:26:20.883 } 00:26:20.883 ] 00:26:20.883 }' 00:26:20.883 02:32:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:20.883 02:32:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.463 02:32:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:26:21.463 02:32:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:21.732 [2024-07-11 02:32:11.972963] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:21.732 02:32:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:26:21.732 02:32:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.732 02:32:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:21.990 02:32:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:26:21.990 02:32:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:26:21.990 02:32:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:26:21.990 02:32:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:26:21.990 02:32:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks 
/var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:21.990 02:32:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:21.990 02:32:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:21.990 02:32:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:21.990 02:32:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:21.990 02:32:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:21.990 02:32:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:26:21.990 02:32:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:21.990 02:32:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:21.990 02:32:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:21.990 [2024-07-11 02:32:12.409926] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20f9590 00:26:22.249 /dev/nbd0 00:26:22.249 02:32:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:22.249 02:32:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:22.249 02:32:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:26:22.249 02:32:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:26:22.249 02:32:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:22.249 02:32:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:22.249 02:32:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:26:22.249 02:32:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:26:22.249 02:32:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:22.249 02:32:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:22.249 02:32:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:22.249 1+0 records in 00:26:22.249 1+0 records out 00:26:22.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026086 s, 15.7 MB/s 00:26:22.249 02:32:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:22.249 02:32:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:26:22.249 02:32:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:22.249 02:32:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:22.249 02:32:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:26:22.249 02:32:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:22.249 02:32:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:22.249 02:32:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:26:22.249 02:32:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 
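Before exercising the rebuild the array needs real data, so the harness exposes raid_bdev1 as an NBD block device and fills it from userspace; the 4096-byte dd above is only the waitfornbd readiness probe. The fill that follows, condensed (using the $RPC shorthand from the earlier sketch; write_unit_size is 1 for raid1, hence plain 512-byte blocks):

    $RPC nbd_start_disk raid_bdev1 /dev/nbd0
    # fill all 65536 blocks (32 MiB) with random data, bypassing the page cache
    dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
    $RPC nbd_stop_disk /dev/nbd0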
00:26:22.249 02:32:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:26:28.879 65536+0 records in 00:26:28.879 65536+0 records out 00:26:28.879 33554432 bytes (34 MB, 32 MiB) copied, 6.08885 s, 5.5 MB/s 00:26:28.879 02:32:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:28.879 02:32:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:28.879 02:32:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:28.879 02:32:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:28.879 02:32:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:26:28.879 02:32:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:28.879 02:32:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:28.879 [2024-07-11 02:32:18.762447] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:28.879 02:32:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:28.879 02:32:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:28.879 02:32:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:28.879 02:32:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:28.879 02:32:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:28.879 02:32:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:28.879 02:32:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:28.879 02:32:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:28.879 02:32:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:28.879 [2024-07-11 02:32:19.003132] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:28.879 02:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:28.879 02:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:28.879 02:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:28.879 02:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:28.879 02:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:28.879 02:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:28.879 02:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:28.879 02:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:28.879 02:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:28.879 02:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:28.879 02:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.879 02:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:28.879 02:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:28.879 "name": "raid_bdev1", 00:26:28.879 "uuid": "c024564a-a9d8-450e-b732-0f438256304f", 00:26:28.879 "strip_size_kb": 0, 00:26:28.879 "state": "online", 00:26:28.879 "raid_level": "raid1", 00:26:28.879 "superblock": false, 00:26:28.879 "num_base_bdevs": 2, 00:26:28.879 "num_base_bdevs_discovered": 1, 00:26:28.879 "num_base_bdevs_operational": 1, 00:26:28.879 "base_bdevs_list": [ 00:26:28.879 { 00:26:28.879 "name": null, 00:26:28.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.879 "is_configured": false, 00:26:28.879 "data_offset": 0, 00:26:28.879 "data_size": 65536 00:26:28.879 }, 00:26:28.879 { 00:26:28.879 "name": "BaseBdev2", 00:26:28.879 "uuid": "a3734c39-119e-5037-9972-cd1dac8998a9", 00:26:28.879 "is_configured": true, 00:26:28.879 "data_offset": 0, 00:26:28.879 "data_size": 65536 00:26:28.879 } 00:26:28.879 ] 00:26:28.879 }' 00:26:28.879 02:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:28.879 02:32:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:29.483 02:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:29.742 [2024-07-11 02:32:20.053936] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:29.742 [2024-07-11 02:32:20.058710] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20f9b30 00:26:29.742 [2024-07-11 02:32:20.060967] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:29.742 02:32:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:26:30.679 02:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:30.679 02:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:30.679 02:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:30.679 02:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:30.679 02:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:30.679 02:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.679 02:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:30.938 02:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:30.938 "name": "raid_bdev1", 00:26:30.938 "uuid": "c024564a-a9d8-450e-b732-0f438256304f", 00:26:30.938 "strip_size_kb": 0, 00:26:30.938 "state": "online", 00:26:30.938 "raid_level": "raid1", 00:26:30.938 "superblock": false, 00:26:30.938 "num_base_bdevs": 2, 00:26:30.938 "num_base_bdevs_discovered": 2, 00:26:30.938 "num_base_bdevs_operational": 2, 00:26:30.938 "process": { 00:26:30.938 "type": "rebuild", 00:26:30.938 "target": "spare", 00:26:30.938 "progress": { 00:26:30.938 "blocks": 22528, 00:26:30.938 "percent": 34 00:26:30.938 } 00:26:30.938 }, 00:26:30.938 
"base_bdevs_list": [ 00:26:30.938 { 00:26:30.938 "name": "spare", 00:26:30.938 "uuid": "064511bb-b7a9-55be-a966-f55e2aef5f42", 00:26:30.938 "is_configured": true, 00:26:30.938 "data_offset": 0, 00:26:30.938 "data_size": 65536 00:26:30.938 }, 00:26:30.938 { 00:26:30.938 "name": "BaseBdev2", 00:26:30.938 "uuid": "a3734c39-119e-5037-9972-cd1dac8998a9", 00:26:30.939 "is_configured": true, 00:26:30.939 "data_offset": 0, 00:26:30.939 "data_size": 65536 00:26:30.939 } 00:26:30.939 ] 00:26:30.939 }' 00:26:30.939 02:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:30.939 02:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:30.939 02:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:31.198 02:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:31.198 02:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:31.198 [2024-07-11 02:32:21.616530] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:31.457 [2024-07-11 02:32:21.673169] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:31.458 [2024-07-11 02:32:21.673216] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:31.458 [2024-07-11 02:32:21.673236] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:31.458 [2024-07-11 02:32:21.673247] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:31.458 02:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:31.458 02:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:31.458 02:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:31.458 02:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:31.458 02:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:31.458 02:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:31.458 02:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:31.458 02:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:31.458 02:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:31.458 02:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:31.458 02:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.458 02:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:31.717 02:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:31.718 "name": "raid_bdev1", 00:26:31.718 "uuid": "c024564a-a9d8-450e-b732-0f438256304f", 00:26:31.718 "strip_size_kb": 0, 00:26:31.718 "state": "online", 00:26:31.718 "raid_level": "raid1", 00:26:31.718 "superblock": false, 00:26:31.718 "num_base_bdevs": 2, 00:26:31.718 
"num_base_bdevs_discovered": 1, 00:26:31.718 "num_base_bdevs_operational": 1, 00:26:31.718 "base_bdevs_list": [ 00:26:31.718 { 00:26:31.718 "name": null, 00:26:31.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:31.718 "is_configured": false, 00:26:31.718 "data_offset": 0, 00:26:31.718 "data_size": 65536 00:26:31.718 }, 00:26:31.718 { 00:26:31.718 "name": "BaseBdev2", 00:26:31.718 "uuid": "a3734c39-119e-5037-9972-cd1dac8998a9", 00:26:31.718 "is_configured": true, 00:26:31.718 "data_offset": 0, 00:26:31.718 "data_size": 65536 00:26:31.718 } 00:26:31.718 ] 00:26:31.718 }' 00:26:31.718 02:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:31.718 02:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.656 02:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:32.656 02:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:32.656 02:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:32.656 02:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:32.656 02:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:32.656 02:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.656 02:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.916 02:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:32.916 "name": "raid_bdev1", 00:26:32.916 "uuid": "c024564a-a9d8-450e-b732-0f438256304f", 00:26:32.916 "strip_size_kb": 0, 00:26:32.916 "state": "online", 00:26:32.916 "raid_level": "raid1", 00:26:32.916 "superblock": false, 00:26:32.916 "num_base_bdevs": 2, 00:26:32.916 "num_base_bdevs_discovered": 1, 00:26:32.916 "num_base_bdevs_operational": 1, 00:26:32.916 "base_bdevs_list": [ 00:26:32.916 { 00:26:32.916 "name": null, 00:26:32.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:32.916 "is_configured": false, 00:26:32.916 "data_offset": 0, 00:26:32.916 "data_size": 65536 00:26:32.916 }, 00:26:32.916 { 00:26:32.916 "name": "BaseBdev2", 00:26:32.916 "uuid": "a3734c39-119e-5037-9972-cd1dac8998a9", 00:26:32.916 "is_configured": true, 00:26:32.916 "data_offset": 0, 00:26:32.916 "data_size": 65536 00:26:32.916 } 00:26:32.916 ] 00:26:32.916 }' 00:26:32.916 02:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:32.916 02:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:32.916 02:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:32.916 02:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:32.916 02:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:33.175 [2024-07-11 02:32:23.455013] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:33.175 [2024-07-11 02:32:23.460464] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20f9590 00:26:33.175 [2024-07-11 02:32:23.461936] 
bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:33.175 02:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:34.110 02:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:34.110 02:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:34.110 02:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:34.110 02:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:34.110 02:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:34.110 02:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.110 02:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.369 02:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:34.369 "name": "raid_bdev1", 00:26:34.369 "uuid": "c024564a-a9d8-450e-b732-0f438256304f", 00:26:34.369 "strip_size_kb": 0, 00:26:34.369 "state": "online", 00:26:34.369 "raid_level": "raid1", 00:26:34.369 "superblock": false, 00:26:34.369 "num_base_bdevs": 2, 00:26:34.369 "num_base_bdevs_discovered": 2, 00:26:34.369 "num_base_bdevs_operational": 2, 00:26:34.369 "process": { 00:26:34.369 "type": "rebuild", 00:26:34.369 "target": "spare", 00:26:34.369 "progress": { 00:26:34.369 "blocks": 24576, 00:26:34.369 "percent": 37 00:26:34.369 } 00:26:34.369 }, 00:26:34.369 "base_bdevs_list": [ 00:26:34.369 { 00:26:34.369 "name": "spare", 00:26:34.369 "uuid": "064511bb-b7a9-55be-a966-f55e2aef5f42", 00:26:34.369 "is_configured": true, 00:26:34.369 "data_offset": 0, 00:26:34.369 "data_size": 65536 00:26:34.369 }, 00:26:34.369 { 00:26:34.369 "name": "BaseBdev2", 00:26:34.369 "uuid": "a3734c39-119e-5037-9972-cd1dac8998a9", 00:26:34.369 "is_configured": true, 00:26:34.369 "data_offset": 0, 00:26:34.369 "data_size": 65536 00:26:34.369 } 00:26:34.369 ] 00:26:34.369 }' 00:26:34.369 02:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:34.628 02:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:34.628 02:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:34.628 02:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:34.628 02:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:26:34.628 02:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:26:34.628 02:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:26:34.628 02:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:26:34.628 02:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=797 00:26:34.628 02:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:34.628 02:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:34.628 02:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:34.628 02:32:24 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:34.628 02:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:34.628 02:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:34.628 02:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.628 02:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.887 02:32:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:34.887 "name": "raid_bdev1", 00:26:34.887 "uuid": "c024564a-a9d8-450e-b732-0f438256304f", 00:26:34.887 "strip_size_kb": 0, 00:26:34.887 "state": "online", 00:26:34.887 "raid_level": "raid1", 00:26:34.887 "superblock": false, 00:26:34.887 "num_base_bdevs": 2, 00:26:34.887 "num_base_bdevs_discovered": 2, 00:26:34.888 "num_base_bdevs_operational": 2, 00:26:34.888 "process": { 00:26:34.888 "type": "rebuild", 00:26:34.888 "target": "spare", 00:26:34.888 "progress": { 00:26:34.888 "blocks": 32768, 00:26:34.888 "percent": 50 00:26:34.888 } 00:26:34.888 }, 00:26:34.888 "base_bdevs_list": [ 00:26:34.888 { 00:26:34.888 "name": "spare", 00:26:34.888 "uuid": "064511bb-b7a9-55be-a966-f55e2aef5f42", 00:26:34.888 "is_configured": true, 00:26:34.888 "data_offset": 0, 00:26:34.888 "data_size": 65536 00:26:34.888 }, 00:26:34.888 { 00:26:34.888 "name": "BaseBdev2", 00:26:34.888 "uuid": "a3734c39-119e-5037-9972-cd1dac8998a9", 00:26:34.888 "is_configured": true, 00:26:34.888 "data_offset": 0, 00:26:34.888 "data_size": 65536 00:26:34.888 } 00:26:34.888 ] 00:26:34.888 }' 00:26:34.888 02:32:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:34.888 02:32:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:34.888 02:32:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:34.888 02:32:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:34.888 02:32:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:36.263 02:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:36.263 02:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:36.263 02:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:36.263 02:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:36.264 02:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:36.264 02:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:36.264 02:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:36.264 02:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:36.264 [2024-07-11 02:32:26.686454] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:36.264 [2024-07-11 02:32:26.686514] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 
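The rebuild sequence traced above reduces to: hot-remove BaseBdev1, re-add the delayed spare, and poll bdev_raid_get_bdevs until the embedded process object goes from rebuild/spare (with the advancing blocks/percent counters visible above) back to none. A minimal sketch of the polling side, with the same jq selectors the harness uses:

    RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info() { $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'; }
    $RPC bdev_raid_remove_base_bdev BaseBdev1
    $RPC bdev_raid_add_base_bdev raid_bdev1 spare          # kicks off the rebuild
    while [ "$(info | jq -r '.process.type // "none"')" = rebuild ]; do
        info | jq -r '.process.progress.percent'           # 37, 50, ... in the trace above
        sleep 1
    done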
00:26:36.264 [2024-07-11 02:32:26.686557] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:36.521 02:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:36.521 "name": "raid_bdev1", 00:26:36.521 "uuid": "c024564a-a9d8-450e-b732-0f438256304f", 00:26:36.521 "strip_size_kb": 0, 00:26:36.521 "state": "online", 00:26:36.521 "raid_level": "raid1", 00:26:36.521 "superblock": false, 00:26:36.521 "num_base_bdevs": 2, 00:26:36.521 "num_base_bdevs_discovered": 2, 00:26:36.521 "num_base_bdevs_operational": 2, 00:26:36.521 "base_bdevs_list": [ 00:26:36.521 { 00:26:36.521 "name": "spare", 00:26:36.521 "uuid": "064511bb-b7a9-55be-a966-f55e2aef5f42", 00:26:36.521 "is_configured": true, 00:26:36.521 "data_offset": 0, 00:26:36.521 "data_size": 65536 00:26:36.521 }, 00:26:36.521 { 00:26:36.521 "name": "BaseBdev2", 00:26:36.521 "uuid": "a3734c39-119e-5037-9972-cd1dac8998a9", 00:26:36.521 "is_configured": true, 00:26:36.521 "data_offset": 0, 00:26:36.521 "data_size": 65536 00:26:36.521 } 00:26:36.521 ] 00:26:36.521 }' 00:26:36.521 02:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:36.521 02:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:36.521 02:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:36.813 02:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:26:36.813 02:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:26:36.813 02:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:36.813 02:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:36.813 02:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:36.813 02:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:36.813 02:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:36.813 02:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:36.813 02:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:36.813 02:32:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:36.813 "name": "raid_bdev1", 00:26:36.813 "uuid": "c024564a-a9d8-450e-b732-0f438256304f", 00:26:36.813 "strip_size_kb": 0, 00:26:36.813 "state": "online", 00:26:36.813 "raid_level": "raid1", 00:26:36.813 "superblock": false, 00:26:36.813 "num_base_bdevs": 2, 00:26:36.813 "num_base_bdevs_discovered": 2, 00:26:36.813 "num_base_bdevs_operational": 2, 00:26:36.813 "base_bdevs_list": [ 00:26:36.813 { 00:26:36.813 "name": "spare", 00:26:36.813 "uuid": "064511bb-b7a9-55be-a966-f55e2aef5f42", 00:26:36.813 "is_configured": true, 00:26:36.813 "data_offset": 0, 00:26:36.813 "data_size": 65536 00:26:36.813 }, 00:26:36.813 { 00:26:36.813 "name": "BaseBdev2", 00:26:36.813 "uuid": "a3734c39-119e-5037-9972-cd1dac8998a9", 00:26:36.813 "is_configured": true, 00:26:36.813 "data_offset": 0, 00:26:36.813 "data_size": 65536 00:26:36.813 } 00:26:36.813 ] 00:26:36.813 }' 00:26:36.813 02:32:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:37.070 
02:32:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:37.070 02:32:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:37.070 02:32:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:37.070 02:32:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:37.070 02:32:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:37.070 02:32:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:37.070 02:32:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:37.070 02:32:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:37.070 02:32:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:37.070 02:32:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:37.070 02:32:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:37.070 02:32:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:37.070 02:32:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:37.070 02:32:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.070 02:32:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:37.328 02:32:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:37.328 "name": "raid_bdev1", 00:26:37.328 "uuid": "c024564a-a9d8-450e-b732-0f438256304f", 00:26:37.328 "strip_size_kb": 0, 00:26:37.328 "state": "online", 00:26:37.328 "raid_level": "raid1", 00:26:37.328 "superblock": false, 00:26:37.328 "num_base_bdevs": 2, 00:26:37.328 "num_base_bdevs_discovered": 2, 00:26:37.328 "num_base_bdevs_operational": 2, 00:26:37.328 "base_bdevs_list": [ 00:26:37.328 { 00:26:37.328 "name": "spare", 00:26:37.328 "uuid": "064511bb-b7a9-55be-a966-f55e2aef5f42", 00:26:37.328 "is_configured": true, 00:26:37.328 "data_offset": 0, 00:26:37.328 "data_size": 65536 00:26:37.328 }, 00:26:37.328 { 00:26:37.328 "name": "BaseBdev2", 00:26:37.328 "uuid": "a3734c39-119e-5037-9972-cd1dac8998a9", 00:26:37.328 "is_configured": true, 00:26:37.328 "data_offset": 0, 00:26:37.328 "data_size": 65536 00:26:37.328 } 00:26:37.328 ] 00:26:37.328 }' 00:26:37.328 02:32:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:37.328 02:32:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.892 02:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:38.149 [2024-07-11 02:32:28.447745] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:38.149 [2024-07-11 02:32:28.447776] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:38.149 [2024-07-11 02:32:28.447829] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:38.149 [2024-07-11 02:32:28.447883] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free 
all in destruct 00:26:38.149 [2024-07-11 02:32:28.447895] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20f6d30 name raid_bdev1, state offline 00:26:38.149 02:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:38.149 02:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:26:38.408 02:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:26:38.408 02:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:26:38.408 02:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:26:38.408 02:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:38.408 02:32:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:38.408 02:32:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:38.408 02:32:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:38.408 02:32:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:38.408 02:32:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:38.408 02:32:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:26:38.408 02:32:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:38.408 02:32:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:38.408 02:32:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:38.668 /dev/nbd0 00:26:38.668 02:32:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:38.668 02:32:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:38.668 02:32:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:26:38.668 02:32:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:26:38.668 02:32:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:38.668 02:32:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:38.668 02:32:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:26:38.668 02:32:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:26:38.668 02:32:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:38.668 02:32:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:38.668 02:32:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:38.668 1+0 records in 00:26:38.668 1+0 records out 00:26:38.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255058 s, 16.1 MB/s 00:26:38.668 02:32:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:38.668 02:32:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 
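The waitfornbd trace above shows how the harness decides an exported NBD device is actually usable: it polls /proc/partitions until the device name appears, then performs a single 4 KiB O_DIRECT read and checks that the copied file is non-empty. A condensed sketch of that probe; the 20-try bound, block size, and size test mirror the trace, while the retry pause and scratch path are assumptions of this sketch:

    # Condensed waitfornbd probe, as traced: wait for the kernel to publish
    # the nbd device, then prove it can service I/O with one direct read.
    waitfornbd() {                                      # waitfornbd nbd0
        local nbd_name=$1 i size
        local scratch=/tmp/nbdtest                      # scratch path: assumption
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                   # pause: assumption, not traced
        done
        dd if="/dev/$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s "$scratch")
        rm -f "$scratch"
        [ "$size" != 0 ]                # an empty copy means the read failed
    }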
00:26:38.668 02:32:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:38.668 02:32:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:38.668 02:32:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:26:38.668 02:32:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:38.668 02:32:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:38.668 02:32:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:38.927 /dev/nbd1 00:26:38.927 02:32:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:38.927 02:32:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:38.927 02:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:26:38.927 02:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:26:38.927 02:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:38.927 02:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:38.927 02:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:26:38.927 02:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:26:38.927 02:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:38.927 02:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:38.927 02:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:38.927 1+0 records in 00:26:38.927 1+0 records out 00:26:38.927 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328891 s, 12.5 MB/s 00:26:38.927 02:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:38.927 02:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:26:38.927 02:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:38.927 02:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:38.927 02:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:26:38.927 02:32:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:38.927 02:32:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:38.927 02:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:26:38.927 02:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:38.927 02:32:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:38.927 02:32:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:38.927 02:32:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:38.927 02:32:29 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:26:38.927 02:32:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:38.927 02:32:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:39.186 02:32:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:39.186 02:32:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:39.186 02:32:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:39.186 02:32:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:39.186 02:32:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:39.186 02:32:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:39.186 02:32:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:39.186 02:32:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:39.186 02:32:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:39.186 02:32:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:39.445 02:32:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:39.445 02:32:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:39.445 02:32:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:39.445 02:32:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:39.446 02:32:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:39.446 02:32:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:39.446 02:32:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:39.446 02:32:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:39.446 02:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:26:39.446 02:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 2007367 00:26:39.446 02:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 2007367 ']' 00:26:39.446 02:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 2007367 00:26:39.446 02:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:26:39.446 02:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:39.446 02:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2007367 00:26:39.446 02:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:39.446 02:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:39.446 02:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2007367' 00:26:39.446 killing process with pid 2007367 00:26:39.446 02:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # kill 2007367 00:26:39.446 Received shutdown signal, test time was about 60.000000 seconds 00:26:39.446 00:26:39.446 
Latency(us) 00:26:39.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:39.446 =================================================================================================================== 00:26:39.446 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:39.446 [2024-07-11 02:32:29.735565] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:39.446 02:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # wait 2007367 00:26:39.446 [2024-07-11 02:32:29.763361] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:39.705 02:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:26:39.705 00:26:39.705 real 0m22.094s 00:26:39.705 user 0m29.671s 00:26:39.705 sys 0m5.286s 00:26:39.705 02:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:39.705 02:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.705 ************************************ 00:26:39.705 END TEST raid_rebuild_test 00:26:39.705 ************************************ 00:26:39.705 02:32:30 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:39.705 02:32:30 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:26:39.705 02:32:30 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:26:39.705 02:32:30 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:39.705 02:32:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:39.705 ************************************ 00:26:39.705 START TEST raid_rebuild_test_sb 00:26:39.705 ************************************ 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:26:39.705 
02:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=2010420 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 2010420 /var/tmp/spdk-raid.sock 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2010420 ']' 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:39.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:39.705 02:32:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:39.705 [2024-07-11 02:32:30.125086] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:26:39.705 [2024-07-11 02:32:30.125150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2010420 ] 00:26:39.705 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:39.705 Zero copy mechanism will not be used. 
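The superblock variant starting here reuses the same driver function; only the positional arguments change. A sketch of how the traced invocation maps onto the locals set at bdev_raid.sh@568-572 (names and values copied from the xtrace; the annotations are a reading aid, not authoritative documentation):

    # run_test raid_rebuild_test_sb  raid_rebuild_test  raid1 2 true false true
    raid_rebuild_test() {
        local raid_level=$1        # raid1
        local num_base_bdevs=$2    # 2
        local superblock=$3        # true  -> create_arg+=' -s' for bdev_raid_create
        local background_io=$4     # false -> no bdevperf traffic during the rebuild
        local verify=$5            # true  -> cmp the base bdevs over NBD afterwards,
                                   #          as the plain run did at @736-@737 above
        # ... body as traced ...
    }

The -s flag is also why the JSON dumps in this run report data_offset 2048 and data_size 63488 where the plain run reported 0 and 65536: with an on-disk superblock, raid metadata occupies the head of each base bdev.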
00:26:39.965 [2024-07-11 02:32:30.262140] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.965 [2024-07-11 02:32:30.310509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.965 [2024-07-11 02:32:30.368061] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:39.965 [2024-07-11 02:32:30.368088] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:40.901 02:32:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:40.901 02:32:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:26:40.901 02:32:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:40.901 02:32:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:40.901 BaseBdev1_malloc 00:26:40.901 02:32:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:41.160 [2024-07-11 02:32:31.400035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:41.160 [2024-07-11 02:32:31.400084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:41.160 [2024-07-11 02:32:31.400115] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x232fee0 00:26:41.160 [2024-07-11 02:32:31.400132] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:41.160 [2024-07-11 02:32:31.401839] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:41.160 [2024-07-11 02:32:31.401869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:41.160 BaseBdev1 00:26:41.160 02:32:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:41.160 02:32:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:41.420 BaseBdev2_malloc 00:26:41.420 02:32:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:41.420 [2024-07-11 02:32:31.773783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:41.420 [2024-07-11 02:32:31.773832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:41.420 [2024-07-11 02:32:31.773860] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2331870 00:26:41.420 [2024-07-11 02:32:31.773877] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:41.420 [2024-07-11 02:32:31.775294] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:41.420 [2024-07-11 02:32:31.775323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:41.420 BaseBdev2 00:26:41.420 02:32:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 
00:26:41.988 spare_malloc 00:26:41.988 02:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:42.246 spare_delay 00:26:42.246 02:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:42.246 [2024-07-11 02:32:32.661799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:42.246 [2024-07-11 02:32:32.661847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:42.246 [2024-07-11 02:32:32.661882] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x232c1d0 00:26:42.246 [2024-07-11 02:32:32.661899] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:42.246 [2024-07-11 02:32:32.663386] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:42.246 [2024-07-11 02:32:32.663418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:42.246 spare 00:26:42.504 02:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:26:42.504 [2024-07-11 02:32:32.842301] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:42.504 [2024-07-11 02:32:32.843479] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:42.504 [2024-07-11 02:32:32.843625] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x232cd30 00:26:42.504 [2024-07-11 02:32:32.843638] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:42.504 [2024-07-11 02:32:32.843830] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x232e6d0 00:26:42.504 [2024-07-11 02:32:32.843964] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x232cd30 00:26:42.504 [2024-07-11 02:32:32.843974] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x232cd30 00:26:42.504 [2024-07-11 02:32:32.844066] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:42.504 02:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:42.504 02:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:42.504 02:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:42.504 02:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:42.504 02:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:42.504 02:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:42.504 02:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:42.504 02:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:42.504 02:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:42.504 02:32:32 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:42.504 02:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:42.504 02:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:42.764 02:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:42.764 "name": "raid_bdev1", 00:26:42.764 "uuid": "19f8526b-3505-431c-89ae-fcc2c44c14e3", 00:26:42.764 "strip_size_kb": 0, 00:26:42.764 "state": "online", 00:26:42.764 "raid_level": "raid1", 00:26:42.764 "superblock": true, 00:26:42.764 "num_base_bdevs": 2, 00:26:42.764 "num_base_bdevs_discovered": 2, 00:26:42.764 "num_base_bdevs_operational": 2, 00:26:42.764 "base_bdevs_list": [ 00:26:42.764 { 00:26:42.764 "name": "BaseBdev1", 00:26:42.764 "uuid": "5d4737b9-f8c0-5e5d-8bbe-35d1901cc41c", 00:26:42.764 "is_configured": true, 00:26:42.764 "data_offset": 2048, 00:26:42.764 "data_size": 63488 00:26:42.764 }, 00:26:42.764 { 00:26:42.764 "name": "BaseBdev2", 00:26:42.764 "uuid": "1a4adbab-28c0-50dd-be7b-eb269f947dbd", 00:26:42.764 "is_configured": true, 00:26:42.764 "data_offset": 2048, 00:26:42.764 "data_size": 63488 00:26:42.764 } 00:26:42.764 ] 00:26:42.764 }' 00:26:42.764 02:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:42.764 02:32:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:43.331 02:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:43.331 02:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:26:43.590 [2024-07-11 02:32:33.885282] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:43.590 02:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:26:43.590 02:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.590 02:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:43.849 02:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:26:43.849 02:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:26:43.849 02:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:26:43.849 02:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:26:43.849 02:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:43.849 02:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:43.850 02:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:43.850 02:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:43.850 02:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:43.850 02:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:43.850 02:32:34 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:26:43.850 02:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:43.850 02:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:43.850 02:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:44.109 [2024-07-11 02:32:34.386406] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2194fc0 00:26:44.109 /dev/nbd0 00:26:44.109 02:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:44.109 02:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:44.109 02:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:26:44.109 02:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:26:44.109 02:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:44.109 02:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:44.109 02:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:26:44.109 02:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:26:44.109 02:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:44.109 02:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:44.109 02:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:44.109 1+0 records in 00:26:44.109 1+0 records out 00:26:44.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284462 s, 14.4 MB/s 00:26:44.109 02:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:44.109 02:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:26:44.109 02:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:44.109 02:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:44.109 02:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:26:44.109 02:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:44.109 02:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:44.109 02:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:26:44.109 02:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:26:44.109 02:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:26:50.677 63488+0 records in 00:26:50.677 63488+0 records out 00:26:50.677 32505856 bytes (33 MB, 31 MiB) copied, 5.84083 s, 5.6 MB/s 00:26:50.677 02:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:50.677 02:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:26:50.677 02:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:50.677 02:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:50.677 02:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:26:50.677 02:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:50.677 02:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:50.677 [2024-07-11 02:32:40.563397] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:50.677 02:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:50.677 02:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:50.677 02:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:50.677 02:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:50.677 02:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:50.677 02:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:50.677 02:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:26:50.677 02:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:26:50.677 02:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:50.677 [2024-07-11 02:32:40.804186] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:50.678 02:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:50.678 02:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:50.678 02:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:50.678 02:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:50.678 02:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:50.678 02:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:50.678 02:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:50.678 02:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:50.678 02:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:50.678 02:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:50.678 02:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.678 02:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:50.678 02:32:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:50.678 "name": "raid_bdev1", 00:26:50.678 "uuid": "19f8526b-3505-431c-89ae-fcc2c44c14e3", 00:26:50.678 "strip_size_kb": 0, 00:26:50.678 "state": "online", 
00:26:50.678 "raid_level": "raid1", 00:26:50.678 "superblock": true, 00:26:50.678 "num_base_bdevs": 2, 00:26:50.678 "num_base_bdevs_discovered": 1, 00:26:50.678 "num_base_bdevs_operational": 1, 00:26:50.678 "base_bdevs_list": [ 00:26:50.678 { 00:26:50.678 "name": null, 00:26:50.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.678 "is_configured": false, 00:26:50.678 "data_offset": 2048, 00:26:50.678 "data_size": 63488 00:26:50.678 }, 00:26:50.678 { 00:26:50.678 "name": "BaseBdev2", 00:26:50.678 "uuid": "1a4adbab-28c0-50dd-be7b-eb269f947dbd", 00:26:50.678 "is_configured": true, 00:26:50.678 "data_offset": 2048, 00:26:50.678 "data_size": 63488 00:26:50.678 } 00:26:50.678 ] 00:26:50.678 }' 00:26:50.678 02:32:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:50.678 02:32:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.244 02:32:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:51.503 [2024-07-11 02:32:41.867007] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:51.503 [2024-07-11 02:32:41.871819] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x232e860 00:26:51.503 [2024-07-11 02:32:41.874076] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:51.503 02:32:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:26:52.877 02:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:52.877 02:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:52.877 02:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:52.877 02:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:52.877 02:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:52.877 02:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.877 02:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.877 02:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:52.877 "name": "raid_bdev1", 00:26:52.877 "uuid": "19f8526b-3505-431c-89ae-fcc2c44c14e3", 00:26:52.877 "strip_size_kb": 0, 00:26:52.877 "state": "online", 00:26:52.877 "raid_level": "raid1", 00:26:52.877 "superblock": true, 00:26:52.877 "num_base_bdevs": 2, 00:26:52.877 "num_base_bdevs_discovered": 2, 00:26:52.877 "num_base_bdevs_operational": 2, 00:26:52.877 "process": { 00:26:52.877 "type": "rebuild", 00:26:52.877 "target": "spare", 00:26:52.877 "progress": { 00:26:52.877 "blocks": 24576, 00:26:52.877 "percent": 38 00:26:52.877 } 00:26:52.877 }, 00:26:52.877 "base_bdevs_list": [ 00:26:52.877 { 00:26:52.877 "name": "spare", 00:26:52.877 "uuid": "599cc1d0-bcf4-5202-a5b9-720a6cbf16da", 00:26:52.877 "is_configured": true, 00:26:52.877 "data_offset": 2048, 00:26:52.878 "data_size": 63488 00:26:52.878 }, 00:26:52.878 { 00:26:52.878 "name": "BaseBdev2", 00:26:52.878 "uuid": "1a4adbab-28c0-50dd-be7b-eb269f947dbd", 00:26:52.878 "is_configured": true, 00:26:52.878 
"data_offset": 2048, 00:26:52.878 "data_size": 63488 00:26:52.878 } 00:26:52.878 ] 00:26:52.878 }' 00:26:52.878 02:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:52.878 02:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:52.878 02:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:52.878 02:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:52.878 02:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:53.136 [2024-07-11 02:32:43.485320] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:53.136 [2024-07-11 02:32:43.486624] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:53.136 [2024-07-11 02:32:43.486669] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:53.136 [2024-07-11 02:32:43.486688] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:53.136 [2024-07-11 02:32:43.486700] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:53.136 02:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:53.136 02:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:53.136 02:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:53.136 02:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:53.136 02:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:53.136 02:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:53.136 02:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:53.136 02:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:53.136 02:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:53.136 02:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:53.136 02:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:53.136 02:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:53.394 02:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:53.394 "name": "raid_bdev1", 00:26:53.394 "uuid": "19f8526b-3505-431c-89ae-fcc2c44c14e3", 00:26:53.394 "strip_size_kb": 0, 00:26:53.394 "state": "online", 00:26:53.394 "raid_level": "raid1", 00:26:53.394 "superblock": true, 00:26:53.394 "num_base_bdevs": 2, 00:26:53.394 "num_base_bdevs_discovered": 1, 00:26:53.394 "num_base_bdevs_operational": 1, 00:26:53.394 "base_bdevs_list": [ 00:26:53.394 { 00:26:53.394 "name": null, 00:26:53.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:53.394 "is_configured": false, 00:26:53.394 "data_offset": 2048, 00:26:53.395 "data_size": 63488 00:26:53.395 }, 00:26:53.395 { 
00:26:53.395 "name": "BaseBdev2", 00:26:53.395 "uuid": "1a4adbab-28c0-50dd-be7b-eb269f947dbd", 00:26:53.395 "is_configured": true, 00:26:53.395 "data_offset": 2048, 00:26:53.395 "data_size": 63488 00:26:53.395 } 00:26:53.395 ] 00:26:53.395 }' 00:26:53.395 02:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:53.395 02:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:53.960 02:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:53.960 02:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:53.960 02:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:53.960 02:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:53.960 02:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:53.960 02:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:53.960 02:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:54.218 02:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:54.218 "name": "raid_bdev1", 00:26:54.218 "uuid": "19f8526b-3505-431c-89ae-fcc2c44c14e3", 00:26:54.218 "strip_size_kb": 0, 00:26:54.218 "state": "online", 00:26:54.218 "raid_level": "raid1", 00:26:54.218 "superblock": true, 00:26:54.218 "num_base_bdevs": 2, 00:26:54.218 "num_base_bdevs_discovered": 1, 00:26:54.218 "num_base_bdevs_operational": 1, 00:26:54.218 "base_bdevs_list": [ 00:26:54.218 { 00:26:54.218 "name": null, 00:26:54.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:54.218 "is_configured": false, 00:26:54.218 "data_offset": 2048, 00:26:54.218 "data_size": 63488 00:26:54.218 }, 00:26:54.218 { 00:26:54.218 "name": "BaseBdev2", 00:26:54.218 "uuid": "1a4adbab-28c0-50dd-be7b-eb269f947dbd", 00:26:54.219 "is_configured": true, 00:26:54.219 "data_offset": 2048, 00:26:54.219 "data_size": 63488 00:26:54.219 } 00:26:54.219 ] 00:26:54.219 }' 00:26:54.219 02:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:54.477 02:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:54.477 02:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:54.477 02:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:54.477 02:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:54.736 [2024-07-11 02:32:44.923147] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:54.736 [2024-07-11 02:32:44.928561] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2194fc0 00:26:54.736 [2024-07-11 02:32:44.930066] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:54.736 02:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:55.672 02:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:26:55.672 02:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:55.672 02:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:55.672 02:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:55.672 02:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:55.672 02:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.672 02:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:55.932 02:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:55.932 "name": "raid_bdev1", 00:26:55.932 "uuid": "19f8526b-3505-431c-89ae-fcc2c44c14e3", 00:26:55.932 "strip_size_kb": 0, 00:26:55.932 "state": "online", 00:26:55.932 "raid_level": "raid1", 00:26:55.932 "superblock": true, 00:26:55.932 "num_base_bdevs": 2, 00:26:55.932 "num_base_bdevs_discovered": 2, 00:26:55.932 "num_base_bdevs_operational": 2, 00:26:55.932 "process": { 00:26:55.932 "type": "rebuild", 00:26:55.932 "target": "spare", 00:26:55.932 "progress": { 00:26:55.932 "blocks": 24576, 00:26:55.932 "percent": 38 00:26:55.932 } 00:26:55.932 }, 00:26:55.932 "base_bdevs_list": [ 00:26:55.932 { 00:26:55.932 "name": "spare", 00:26:55.932 "uuid": "599cc1d0-bcf4-5202-a5b9-720a6cbf16da", 00:26:55.932 "is_configured": true, 00:26:55.932 "data_offset": 2048, 00:26:55.932 "data_size": 63488 00:26:55.932 }, 00:26:55.932 { 00:26:55.932 "name": "BaseBdev2", 00:26:55.932 "uuid": "1a4adbab-28c0-50dd-be7b-eb269f947dbd", 00:26:55.932 "is_configured": true, 00:26:55.932 "data_offset": 2048, 00:26:55.932 "data_size": 63488 00:26:55.932 } 00:26:55.932 ] 00:26:55.932 }' 00:26:55.932 02:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:55.932 02:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:55.932 02:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:55.932 02:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:55.932 02:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:26:55.932 02:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:26:55.932 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:26:55.932 02:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:26:55.932 02:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:26:55.932 02:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:26:55.932 02:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=819 00:26:55.932 02:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:55.932 02:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:55.932 02:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:55.932 02:32:46 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:55.932 02:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:55.932 02:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:55.932 02:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.932 02:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:56.191 02:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:56.191 "name": "raid_bdev1", 00:26:56.191 "uuid": "19f8526b-3505-431c-89ae-fcc2c44c14e3", 00:26:56.191 "strip_size_kb": 0, 00:26:56.191 "state": "online", 00:26:56.191 "raid_level": "raid1", 00:26:56.191 "superblock": true, 00:26:56.191 "num_base_bdevs": 2, 00:26:56.191 "num_base_bdevs_discovered": 2, 00:26:56.191 "num_base_bdevs_operational": 2, 00:26:56.191 "process": { 00:26:56.191 "type": "rebuild", 00:26:56.191 "target": "spare", 00:26:56.191 "progress": { 00:26:56.191 "blocks": 30720, 00:26:56.191 "percent": 48 00:26:56.191 } 00:26:56.191 }, 00:26:56.191 "base_bdevs_list": [ 00:26:56.191 { 00:26:56.191 "name": "spare", 00:26:56.191 "uuid": "599cc1d0-bcf4-5202-a5b9-720a6cbf16da", 00:26:56.191 "is_configured": true, 00:26:56.191 "data_offset": 2048, 00:26:56.191 "data_size": 63488 00:26:56.191 }, 00:26:56.191 { 00:26:56.191 "name": "BaseBdev2", 00:26:56.191 "uuid": "1a4adbab-28c0-50dd-be7b-eb269f947dbd", 00:26:56.191 "is_configured": true, 00:26:56.191 "data_offset": 2048, 00:26:56.191 "data_size": 63488 00:26:56.191 } 00:26:56.191 ] 00:26:56.191 }' 00:26:56.191 02:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:56.191 02:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:56.191 02:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:56.449 02:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:56.449 02:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:57.386 02:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:57.386 02:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:57.386 02:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:57.386 02:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:57.386 02:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:57.386 02:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:57.386 02:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:57.386 02:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:57.645 02:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:57.645 "name": "raid_bdev1", 00:26:57.645 "uuid": "19f8526b-3505-431c-89ae-fcc2c44c14e3", 00:26:57.645 "strip_size_kb": 0, 00:26:57.645 "state": 
"online", 00:26:57.645 "raid_level": "raid1", 00:26:57.645 "superblock": true, 00:26:57.645 "num_base_bdevs": 2, 00:26:57.645 "num_base_bdevs_discovered": 2, 00:26:57.645 "num_base_bdevs_operational": 2, 00:26:57.645 "process": { 00:26:57.645 "type": "rebuild", 00:26:57.645 "target": "spare", 00:26:57.645 "progress": { 00:26:57.645 "blocks": 59392, 00:26:57.645 "percent": 93 00:26:57.645 } 00:26:57.645 }, 00:26:57.645 "base_bdevs_list": [ 00:26:57.645 { 00:26:57.645 "name": "spare", 00:26:57.645 "uuid": "599cc1d0-bcf4-5202-a5b9-720a6cbf16da", 00:26:57.645 "is_configured": true, 00:26:57.645 "data_offset": 2048, 00:26:57.646 "data_size": 63488 00:26:57.646 }, 00:26:57.646 { 00:26:57.646 "name": "BaseBdev2", 00:26:57.646 "uuid": "1a4adbab-28c0-50dd-be7b-eb269f947dbd", 00:26:57.646 "is_configured": true, 00:26:57.646 "data_offset": 2048, 00:26:57.646 "data_size": 63488 00:26:57.646 } 00:26:57.646 ] 00:26:57.646 }' 00:26:57.646 02:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:57.646 02:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:57.646 02:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:57.646 02:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:57.646 02:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:57.646 [2024-07-11 02:32:48.053730] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:57.646 [2024-07-11 02:32:48.053789] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:57.646 [2024-07-11 02:32:48.053882] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:58.658 02:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:58.658 02:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:58.658 02:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:58.658 02:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:58.658 02:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:58.658 02:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:58.658 02:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:58.658 02:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:58.916 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:58.916 "name": "raid_bdev1", 00:26:58.916 "uuid": "19f8526b-3505-431c-89ae-fcc2c44c14e3", 00:26:58.916 "strip_size_kb": 0, 00:26:58.916 "state": "online", 00:26:58.916 "raid_level": "raid1", 00:26:58.916 "superblock": true, 00:26:58.916 "num_base_bdevs": 2, 00:26:58.916 "num_base_bdevs_discovered": 2, 00:26:58.916 "num_base_bdevs_operational": 2, 00:26:58.916 "base_bdevs_list": [ 00:26:58.916 { 00:26:58.916 "name": "spare", 00:26:58.916 "uuid": "599cc1d0-bcf4-5202-a5b9-720a6cbf16da", 00:26:58.916 "is_configured": true, 00:26:58.916 "data_offset": 2048, 00:26:58.916 "data_size": 63488 
00:26:58.916 }, 00:26:58.916 { 00:26:58.916 "name": "BaseBdev2", 00:26:58.916 "uuid": "1a4adbab-28c0-50dd-be7b-eb269f947dbd", 00:26:58.916 "is_configured": true, 00:26:58.916 "data_offset": 2048, 00:26:58.916 "data_size": 63488 00:26:58.916 } 00:26:58.916 ] 00:26:58.917 }' 00:26:58.917 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:58.917 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:58.917 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:58.917 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:26:58.917 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:26:58.917 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:58.917 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:58.917 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:58.917 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:58.917 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:58.917 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:58.917 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:59.175 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:59.175 "name": "raid_bdev1", 00:26:59.175 "uuid": "19f8526b-3505-431c-89ae-fcc2c44c14e3", 00:26:59.175 "strip_size_kb": 0, 00:26:59.175 "state": "online", 00:26:59.175 "raid_level": "raid1", 00:26:59.175 "superblock": true, 00:26:59.175 "num_base_bdevs": 2, 00:26:59.175 "num_base_bdevs_discovered": 2, 00:26:59.175 "num_base_bdevs_operational": 2, 00:26:59.175 "base_bdevs_list": [ 00:26:59.175 { 00:26:59.175 "name": "spare", 00:26:59.175 "uuid": "599cc1d0-bcf4-5202-a5b9-720a6cbf16da", 00:26:59.175 "is_configured": true, 00:26:59.175 "data_offset": 2048, 00:26:59.175 "data_size": 63488 00:26:59.175 }, 00:26:59.175 { 00:26:59.175 "name": "BaseBdev2", 00:26:59.175 "uuid": "1a4adbab-28c0-50dd-be7b-eb269f947dbd", 00:26:59.175 "is_configured": true, 00:26:59.175 "data_offset": 2048, 00:26:59.175 "data_size": 63488 00:26:59.175 } 00:26:59.175 ] 00:26:59.175 }' 00:26:59.175 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:59.434 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:59.434 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:59.434 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:59.434 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:59.434 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:59.434 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:59.434 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid1 00:26:59.434 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:59.434 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:59.434 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:59.434 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:59.434 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:59.434 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:59.434 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:59.434 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:59.693 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:59.693 "name": "raid_bdev1", 00:26:59.693 "uuid": "19f8526b-3505-431c-89ae-fcc2c44c14e3", 00:26:59.693 "strip_size_kb": 0, 00:26:59.693 "state": "online", 00:26:59.693 "raid_level": "raid1", 00:26:59.693 "superblock": true, 00:26:59.693 "num_base_bdevs": 2, 00:26:59.693 "num_base_bdevs_discovered": 2, 00:26:59.693 "num_base_bdevs_operational": 2, 00:26:59.693 "base_bdevs_list": [ 00:26:59.693 { 00:26:59.693 "name": "spare", 00:26:59.693 "uuid": "599cc1d0-bcf4-5202-a5b9-720a6cbf16da", 00:26:59.693 "is_configured": true, 00:26:59.693 "data_offset": 2048, 00:26:59.693 "data_size": 63488 00:26:59.693 }, 00:26:59.693 { 00:26:59.693 "name": "BaseBdev2", 00:26:59.693 "uuid": "1a4adbab-28c0-50dd-be7b-eb269f947dbd", 00:26:59.693 "is_configured": true, 00:26:59.693 "data_offset": 2048, 00:26:59.693 "data_size": 63488 00:26:59.693 } 00:26:59.693 ] 00:26:59.693 }' 00:26:59.693 02:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:59.693 02:32:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:00.261 02:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:00.519 [2024-07-11 02:32:50.802394] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:00.519 [2024-07-11 02:32:50.802424] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:00.519 [2024-07-11 02:32:50.802482] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:00.519 [2024-07-11 02:32:50.802539] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:00.519 [2024-07-11 02:32:50.802551] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x232cd30 name raid_bdev1, state offline 00:27:00.519 02:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:00.519 02:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:27:00.778 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:27:00.778 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:27:00.779 02:32:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:27:00.779 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:00.779 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:00.779 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:00.779 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:00.779 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:00.779 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:00.779 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:27:00.779 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:00.779 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:00.779 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:01.037 /dev/nbd0 00:27:01.037 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:01.037 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:01.038 02:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:01.038 02:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:27:01.038 02:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:01.038 02:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:01.038 02:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:01.038 02:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:27:01.038 02:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:01.038 02:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:01.038 02:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:01.038 1+0 records in 00:27:01.038 1+0 records out 00:27:01.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229956 s, 17.8 MB/s 00:27:01.038 02:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:01.038 02:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:27:01.038 02:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:01.038 02:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:01.038 02:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:27:01.038 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:01.038 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:01.038 02:32:51 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:01.297 /dev/nbd1 00:27:01.297 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:01.297 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:01.297 02:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:27:01.297 02:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:27:01.297 02:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:01.297 02:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:01.297 02:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:27:01.297 02:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:27:01.297 02:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:01.297 02:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:01.297 02:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:01.297 1+0 records in 00:27:01.297 1+0 records out 00:27:01.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344679 s, 11.9 MB/s 00:27:01.297 02:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:01.297 02:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:27:01.297 02:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:01.297 02:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:01.297 02:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:27:01.297 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:01.297 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:01.297 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:01.556 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:01.556 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:01.556 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:01.556 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:01.556 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:27:01.556 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:01.556 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:01.556 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:01.556 02:32:51 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:01.556 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:01.556 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:01.556 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:01.556 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:01.556 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:01.556 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:01.556 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:01.556 02:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:01.815 02:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:01.815 02:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:01.815 02:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:01.815 02:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:01.815 02:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:01.815 02:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:01.815 02:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:01.815 02:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:01.815 02:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:27:01.815 02:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:02.074 02:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:02.334 [2024-07-11 02:32:52.539767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:02.334 [2024-07-11 02:32:52.539813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:02.334 [2024-07-11 02:32:52.539845] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x217f360 00:27:02.334 [2024-07-11 02:32:52.539866] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:02.334 [2024-07-11 02:32:52.541528] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:02.334 [2024-07-11 02:32:52.541558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:02.334 [2024-07-11 02:32:52.541654] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:02.334 [2024-07-11 02:32:52.541689] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:02.334 [2024-07-11 02:32:52.541806] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:02.334 spare 00:27:02.334 02:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 
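The NBD export/compare sequence above is the data-integrity check that follows the completed rebuild: both halves of the raid1 mirror are exposed as kernel block devices and compared byte-for-byte past the first 1 MiB, which `cmp -i 1048576` skips on both sides because that region holds the RAID superblock rather than mirrored payload. A minimal stand-alone sketch of the same check, assuming an SPDK target listening on /var/tmp/spdk-raid.sock, base bdevs named BaseBdev1 and spare (all names taken from the log), and the nbd kernel module already loaded:

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Expose both base bdevs as kernel block devices.
    $rpc nbd_start_disk BaseBdev1 /dev/nbd0
    $rpc nbd_start_disk spare /dev/nbd1
    # Compare the mirrored payload; -i 1048576 skips the superblock region on both devices.
    cmp -i 1048576 /dev/nbd0 /dev/nbd1
    # Detach the NBD devices again.
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1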
00:27:02.334 02:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:02.334 02:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:02.334 02:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:02.334 02:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:02.334 02:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:02.334 02:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:02.334 02:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:02.334 02:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:02.334 02:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:02.334 02:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:02.334 02:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:02.334 [2024-07-11 02:32:52.642125] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x232ca60 00:27:02.334 [2024-07-11 02:32:52.642143] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:02.334 [2024-07-11 02:32:52.642337] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x232c830 00:27:02.334 [2024-07-11 02:32:52.642490] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x232ca60 00:27:02.334 [2024-07-11 02:32:52.642501] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x232ca60 00:27:02.334 [2024-07-11 02:32:52.642603] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:02.334 02:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:02.334 "name": "raid_bdev1", 00:27:02.334 "uuid": "19f8526b-3505-431c-89ae-fcc2c44c14e3", 00:27:02.334 "strip_size_kb": 0, 00:27:02.334 "state": "online", 00:27:02.334 "raid_level": "raid1", 00:27:02.334 "superblock": true, 00:27:02.334 "num_base_bdevs": 2, 00:27:02.334 "num_base_bdevs_discovered": 2, 00:27:02.334 "num_base_bdevs_operational": 2, 00:27:02.334 "base_bdevs_list": [ 00:27:02.334 { 00:27:02.334 "name": "spare", 00:27:02.334 "uuid": "599cc1d0-bcf4-5202-a5b9-720a6cbf16da", 00:27:02.334 "is_configured": true, 00:27:02.334 "data_offset": 2048, 00:27:02.334 "data_size": 63488 00:27:02.334 }, 00:27:02.334 { 00:27:02.334 "name": "BaseBdev2", 00:27:02.334 "uuid": "1a4adbab-28c0-50dd-be7b-eb269f947dbd", 00:27:02.334 "is_configured": true, 00:27:02.334 "data_offset": 2048, 00:27:02.334 "data_size": 63488 00:27:02.334 } 00:27:02.334 ] 00:27:02.334 }' 00:27:02.334 02:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:02.334 02:32:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:03.271 02:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:03.271 02:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:03.271 02:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:27:03.271 02:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:03.271 02:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:03.271 02:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:03.271 02:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:03.271 02:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:03.271 "name": "raid_bdev1", 00:27:03.271 "uuid": "19f8526b-3505-431c-89ae-fcc2c44c14e3", 00:27:03.271 "strip_size_kb": 0, 00:27:03.271 "state": "online", 00:27:03.271 "raid_level": "raid1", 00:27:03.271 "superblock": true, 00:27:03.271 "num_base_bdevs": 2, 00:27:03.271 "num_base_bdevs_discovered": 2, 00:27:03.271 "num_base_bdevs_operational": 2, 00:27:03.271 "base_bdevs_list": [ 00:27:03.271 { 00:27:03.271 "name": "spare", 00:27:03.271 "uuid": "599cc1d0-bcf4-5202-a5b9-720a6cbf16da", 00:27:03.271 "is_configured": true, 00:27:03.271 "data_offset": 2048, 00:27:03.271 "data_size": 63488 00:27:03.271 }, 00:27:03.271 { 00:27:03.271 "name": "BaseBdev2", 00:27:03.271 "uuid": "1a4adbab-28c0-50dd-be7b-eb269f947dbd", 00:27:03.271 "is_configured": true, 00:27:03.271 "data_offset": 2048, 00:27:03.271 "data_size": 63488 00:27:03.271 } 00:27:03.271 ] 00:27:03.271 }' 00:27:03.271 02:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:03.271 02:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:03.271 02:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:03.271 02:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:03.530 02:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:03.530 02:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:03.530 02:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:27:03.530 02:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:03.789 [2024-07-11 02:32:54.164185] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:03.789 02:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:03.789 02:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:03.789 02:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:03.789 02:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:03.789 02:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:03.789 02:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:03.789 02:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:03.789 02:32:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:03.789 02:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:03.789 02:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:03.789 02:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:03.789 02:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:04.049 02:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:04.049 "name": "raid_bdev1", 00:27:04.049 "uuid": "19f8526b-3505-431c-89ae-fcc2c44c14e3", 00:27:04.049 "strip_size_kb": 0, 00:27:04.049 "state": "online", 00:27:04.049 "raid_level": "raid1", 00:27:04.049 "superblock": true, 00:27:04.049 "num_base_bdevs": 2, 00:27:04.049 "num_base_bdevs_discovered": 1, 00:27:04.049 "num_base_bdevs_operational": 1, 00:27:04.049 "base_bdevs_list": [ 00:27:04.049 { 00:27:04.049 "name": null, 00:27:04.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:04.049 "is_configured": false, 00:27:04.049 "data_offset": 2048, 00:27:04.049 "data_size": 63488 00:27:04.049 }, 00:27:04.049 { 00:27:04.049 "name": "BaseBdev2", 00:27:04.049 "uuid": "1a4adbab-28c0-50dd-be7b-eb269f947dbd", 00:27:04.049 "is_configured": true, 00:27:04.049 "data_offset": 2048, 00:27:04.049 "data_size": 63488 00:27:04.049 } 00:27:04.049 ] 00:27:04.049 }' 00:27:04.049 02:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:04.049 02:32:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:04.987 02:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:05.246 [2024-07-11 02:32:55.463642] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:05.246 [2024-07-11 02:32:55.463788] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:05.246 [2024-07-11 02:32:55.463806] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
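The remove/re-add sequence above is what drives the rebuild under test: `bdev_raid_remove_base_bdev spare` degrades the array (the slot in base_bdevs_list becomes a null entry and only BaseBdev2 stays operational), and `bdev_raid_add_base_bdev raid_bdev1 spare` re-attaches the member, at which point the examine path notices its stale on-disk superblock (sequence number 4 against the array's 5) and starts a rebuild with spare as the target. A hedged sketch of the same cycle, reusing only the RPC verbs and names visible in the log:

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Degrade the raid1 set: raid_bdev1 keeps serving I/O on BaseBdev2 alone.
    $rpc bdev_raid_remove_base_bdev spare
    # Re-attach the stale member; its older superblock forces a rebuild onto it.
    $rpc bdev_raid_add_base_bdev raid_bdev1 spare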
00:27:05.246 [2024-07-11 02:32:55.463842] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:05.246 [2024-07-11 02:32:55.468794] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x217d440 00:27:05.246 [2024-07-11 02:32:55.471060] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:05.246 02:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:27:06.183 02:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:06.183 02:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:06.183 02:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:06.183 02:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:06.183 02:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:06.183 02:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:06.183 02:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:06.442 02:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:06.442 "name": "raid_bdev1", 00:27:06.442 "uuid": "19f8526b-3505-431c-89ae-fcc2c44c14e3", 00:27:06.442 "strip_size_kb": 0, 00:27:06.442 "state": "online", 00:27:06.442 "raid_level": "raid1", 00:27:06.442 "superblock": true, 00:27:06.442 "num_base_bdevs": 2, 00:27:06.442 "num_base_bdevs_discovered": 2, 00:27:06.442 "num_base_bdevs_operational": 2, 00:27:06.442 "process": { 00:27:06.442 "type": "rebuild", 00:27:06.442 "target": "spare", 00:27:06.442 "progress": { 00:27:06.442 "blocks": 24576, 00:27:06.442 "percent": 38 00:27:06.442 } 00:27:06.442 }, 00:27:06.442 "base_bdevs_list": [ 00:27:06.442 { 00:27:06.442 "name": "spare", 00:27:06.442 "uuid": "599cc1d0-bcf4-5202-a5b9-720a6cbf16da", 00:27:06.442 "is_configured": true, 00:27:06.442 "data_offset": 2048, 00:27:06.442 "data_size": 63488 00:27:06.442 }, 00:27:06.442 { 00:27:06.442 "name": "BaseBdev2", 00:27:06.442 "uuid": "1a4adbab-28c0-50dd-be7b-eb269f947dbd", 00:27:06.442 "is_configured": true, 00:27:06.442 "data_offset": 2048, 00:27:06.442 "data_size": 63488 00:27:06.442 } 00:27:06.442 ] 00:27:06.442 }' 00:27:06.442 02:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:06.442 02:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:06.442 02:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:06.442 02:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:06.442 02:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:06.701 [2024-07-11 02:32:57.061441] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:06.701 [2024-07-11 02:32:57.083175] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:06.701 [2024-07-11 02:32:57.083222] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:27:06.701 [2024-07-11 02:32:57.083242] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:06.701 [2024-07-11 02:32:57.083254] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:06.701 02:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:06.701 02:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:06.701 02:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:06.701 02:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:06.701 02:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:06.701 02:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:06.701 02:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:06.701 02:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:06.701 02:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:06.701 02:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:06.701 02:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:06.701 02:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:06.960 02:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:06.960 "name": "raid_bdev1", 00:27:06.960 "uuid": "19f8526b-3505-431c-89ae-fcc2c44c14e3", 00:27:06.960 "strip_size_kb": 0, 00:27:06.960 "state": "online", 00:27:06.960 "raid_level": "raid1", 00:27:06.960 "superblock": true, 00:27:06.960 "num_base_bdevs": 2, 00:27:06.960 "num_base_bdevs_discovered": 1, 00:27:06.960 "num_base_bdevs_operational": 1, 00:27:06.960 "base_bdevs_list": [ 00:27:06.960 { 00:27:06.960 "name": null, 00:27:06.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:06.960 "is_configured": false, 00:27:06.960 "data_offset": 2048, 00:27:06.960 "data_size": 63488 00:27:06.960 }, 00:27:06.960 { 00:27:06.960 "name": "BaseBdev2", 00:27:06.960 "uuid": "1a4adbab-28c0-50dd-be7b-eb269f947dbd", 00:27:06.960 "is_configured": true, 00:27:06.960 "data_offset": 2048, 00:27:06.960 "data_size": 63488 00:27:06.960 } 00:27:06.960 ] 00:27:06.960 }' 00:27:06.960 02:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:06.960 02:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:07.898 02:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:07.898 [2024-07-11 02:32:58.250600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:07.898 [2024-07-11 02:32:58.250653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:07.898 [2024-07-11 02:32:58.250684] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x232da20 00:27:07.898 [2024-07-11 02:32:58.250704] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:27:07.899 [2024-07-11 02:32:58.251168] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:07.899 [2024-07-11 02:32:58.251198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:07.899 [2024-07-11 02:32:58.251299] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:07.899 [2024-07-11 02:32:58.251315] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:07.899 [2024-07-11 02:32:58.251328] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:07.899 [2024-07-11 02:32:58.251355] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:07.899 [2024-07-11 02:32:58.256046] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x232a090 00:27:07.899 spare 00:27:07.899 [2024-07-11 02:32:58.257487] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:07.899 02:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:27:09.277 02:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:09.278 02:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:09.278 02:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:09.278 02:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:09.278 02:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:09.278 02:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:09.278 02:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:09.537 02:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:09.537 "name": "raid_bdev1", 00:27:09.537 "uuid": "19f8526b-3505-431c-89ae-fcc2c44c14e3", 00:27:09.537 "strip_size_kb": 0, 00:27:09.537 "state": "online", 00:27:09.537 "raid_level": "raid1", 00:27:09.537 "superblock": true, 00:27:09.537 "num_base_bdevs": 2, 00:27:09.537 "num_base_bdevs_discovered": 2, 00:27:09.537 "num_base_bdevs_operational": 2, 00:27:09.537 "process": { 00:27:09.537 "type": "rebuild", 00:27:09.537 "target": "spare", 00:27:09.537 "progress": { 00:27:09.537 "blocks": 28672, 00:27:09.537 "percent": 45 00:27:09.537 } 00:27:09.537 }, 00:27:09.537 "base_bdevs_list": [ 00:27:09.537 { 00:27:09.537 "name": "spare", 00:27:09.537 "uuid": "599cc1d0-bcf4-5202-a5b9-720a6cbf16da", 00:27:09.537 "is_configured": true, 00:27:09.537 "data_offset": 2048, 00:27:09.537 "data_size": 63488 00:27:09.537 }, 00:27:09.537 { 00:27:09.537 "name": "BaseBdev2", 00:27:09.537 "uuid": "1a4adbab-28c0-50dd-be7b-eb269f947dbd", 00:27:09.537 "is_configured": true, 00:27:09.537 "data_offset": 2048, 00:27:09.537 "data_size": 63488 00:27:09.537 } 00:27:09.537 ] 00:27:09.537 }' 00:27:09.537 02:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:09.537 02:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:09.537 02:32:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:09.537 02:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:09.537 02:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:09.796 [2024-07-11 02:33:00.122021] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:09.796 [2024-07-11 02:33:00.172450] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:09.796 [2024-07-11 02:33:00.172496] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:09.796 [2024-07-11 02:33:00.172512] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:09.796 [2024-07-11 02:33:00.172521] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:09.796 02:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:09.796 02:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:09.796 02:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:09.796 02:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:09.796 02:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:09.796 02:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:09.796 02:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:09.796 02:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:09.796 02:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:09.796 02:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:09.796 02:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:09.796 02:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:10.055 02:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:10.055 "name": "raid_bdev1", 00:27:10.055 "uuid": "19f8526b-3505-431c-89ae-fcc2c44c14e3", 00:27:10.055 "strip_size_kb": 0, 00:27:10.055 "state": "online", 00:27:10.055 "raid_level": "raid1", 00:27:10.055 "superblock": true, 00:27:10.055 "num_base_bdevs": 2, 00:27:10.055 "num_base_bdevs_discovered": 1, 00:27:10.055 "num_base_bdevs_operational": 1, 00:27:10.055 "base_bdevs_list": [ 00:27:10.055 { 00:27:10.055 "name": null, 00:27:10.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:10.055 "is_configured": false, 00:27:10.055 "data_offset": 2048, 00:27:10.055 "data_size": 63488 00:27:10.055 }, 00:27:10.055 { 00:27:10.055 "name": "BaseBdev2", 00:27:10.055 "uuid": "1a4adbab-28c0-50dd-be7b-eb269f947dbd", 00:27:10.055 "is_configured": true, 00:27:10.055 "data_offset": 2048, 00:27:10.055 "data_size": 63488 00:27:10.055 } 00:27:10.055 ] 00:27:10.055 }' 00:27:10.055 02:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:10.055 02:33:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:27:10.992 02:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:10.992 02:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:10.992 02:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:10.992 02:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:10.992 02:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:10.992 02:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:10.992 02:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:10.992 02:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:10.992 "name": "raid_bdev1", 00:27:10.992 "uuid": "19f8526b-3505-431c-89ae-fcc2c44c14e3", 00:27:10.992 "strip_size_kb": 0, 00:27:10.992 "state": "online", 00:27:10.992 "raid_level": "raid1", 00:27:10.992 "superblock": true, 00:27:10.992 "num_base_bdevs": 2, 00:27:10.992 "num_base_bdevs_discovered": 1, 00:27:10.992 "num_base_bdevs_operational": 1, 00:27:10.992 "base_bdevs_list": [ 00:27:10.992 { 00:27:10.992 "name": null, 00:27:10.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:10.992 "is_configured": false, 00:27:10.992 "data_offset": 2048, 00:27:10.992 "data_size": 63488 00:27:10.992 }, 00:27:10.992 { 00:27:10.992 "name": "BaseBdev2", 00:27:10.992 "uuid": "1a4adbab-28c0-50dd-be7b-eb269f947dbd", 00:27:10.992 "is_configured": true, 00:27:10.992 "data_offset": 2048, 00:27:10.992 "data_size": 63488 00:27:10.992 } 00:27:10.992 ] 00:27:10.992 }' 00:27:10.992 02:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:10.992 02:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:10.992 02:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:10.992 02:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:10.992 02:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:27:11.251 02:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:11.510 [2024-07-11 02:33:01.733161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:11.510 [2024-07-11 02:33:01.733203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:11.510 [2024-07-11 02:33:01.733224] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2330220 00:27:11.510 [2024-07-11 02:33:01.733236] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:11.510 [2024-07-11 02:33:01.733563] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:11.510 [2024-07-11 02:33:01.733583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:11.510 [2024-07-11 02:33:01.733640] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:11.510 [2024-07-11 02:33:01.733653] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:11.510 [2024-07-11 02:33:01.733664] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:11.510 BaseBdev1 00:27:11.510 02:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:27:12.447 02:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:12.447 02:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:12.447 02:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:12.447 02:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:12.447 02:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:12.447 02:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:12.447 02:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:12.447 02:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:12.447 02:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:12.447 02:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:12.447 02:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:12.447 02:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:12.706 02:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:12.706 "name": "raid_bdev1", 00:27:12.706 "uuid": "19f8526b-3505-431c-89ae-fcc2c44c14e3", 00:27:12.706 "strip_size_kb": 0, 00:27:12.706 "state": "online", 00:27:12.706 "raid_level": "raid1", 00:27:12.706 "superblock": true, 00:27:12.706 "num_base_bdevs": 2, 00:27:12.706 "num_base_bdevs_discovered": 1, 00:27:12.706 "num_base_bdevs_operational": 1, 00:27:12.707 "base_bdevs_list": [ 00:27:12.707 { 00:27:12.707 "name": null, 00:27:12.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:12.707 "is_configured": false, 00:27:12.707 "data_offset": 2048, 00:27:12.707 "data_size": 63488 00:27:12.707 }, 00:27:12.707 { 00:27:12.707 "name": "BaseBdev2", 00:27:12.707 "uuid": "1a4adbab-28c0-50dd-be7b-eb269f947dbd", 00:27:12.707 "is_configured": true, 00:27:12.707 "data_offset": 2048, 00:27:12.707 "data_size": 63488 00:27:12.707 } 00:27:12.707 ] 00:27:12.707 }' 00:27:12.707 02:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:12.707 02:33:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:13.653 02:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:13.653 02:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:13.653 02:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:13.653 02:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # 
local target=none 00:27:13.653 02:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:13.653 02:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.653 02:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:13.912 02:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:13.912 "name": "raid_bdev1", 00:27:13.912 "uuid": "19f8526b-3505-431c-89ae-fcc2c44c14e3", 00:27:13.912 "strip_size_kb": 0, 00:27:13.912 "state": "online", 00:27:13.912 "raid_level": "raid1", 00:27:13.912 "superblock": true, 00:27:13.912 "num_base_bdevs": 2, 00:27:13.912 "num_base_bdevs_discovered": 1, 00:27:13.912 "num_base_bdevs_operational": 1, 00:27:13.912 "base_bdevs_list": [ 00:27:13.912 { 00:27:13.912 "name": null, 00:27:13.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:13.912 "is_configured": false, 00:27:13.912 "data_offset": 2048, 00:27:13.912 "data_size": 63488 00:27:13.912 }, 00:27:13.912 { 00:27:13.912 "name": "BaseBdev2", 00:27:13.912 "uuid": "1a4adbab-28c0-50dd-be7b-eb269f947dbd", 00:27:13.912 "is_configured": true, 00:27:13.912 "data_offset": 2048, 00:27:13.912 "data_size": 63488 00:27:13.912 } 00:27:13.912 ] 00:27:13.912 }' 00:27:13.912 02:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:13.912 02:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:13.912 02:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:13.912 02:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:13.912 02:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:13.912 02:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:27:13.912 02:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:13.912 02:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:13.912 02:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:13.912 02:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:13.912 02:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:13.912 02:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:13.912 02:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:13.912 02:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:13.912 02:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:27:13.912 02:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:14.479 [2024-07-11 02:33:04.785375] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:14.479 [2024-07-11 02:33:04.785493] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:14.479 [2024-07-11 02:33:04.785509] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:14.479 request: 00:27:14.479 { 00:27:14.479 "base_bdev": "BaseBdev1", 00:27:14.479 "raid_bdev": "raid_bdev1", 00:27:14.479 "method": "bdev_raid_add_base_bdev", 00:27:14.479 "req_id": 1 00:27:14.479 } 00:27:14.479 Got JSON-RPC error response 00:27:14.479 response: 00:27:14.479 { 00:27:14.479 "code": -22, 00:27:14.479 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:14.479 } 00:27:14.479 02:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:27:14.479 02:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:14.479 02:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:14.479 02:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:14.479 02:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:27:15.413 02:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:15.413 02:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:15.413 02:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:15.413 02:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:15.413 02:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:15.413 02:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:15.413 02:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:15.413 02:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:15.413 02:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:15.413 02:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:15.413 02:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:15.413 02:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:15.681 02:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:15.681 "name": "raid_bdev1", 00:27:15.681 "uuid": "19f8526b-3505-431c-89ae-fcc2c44c14e3", 00:27:15.681 "strip_size_kb": 0, 00:27:15.681 "state": "online", 00:27:15.681 "raid_level": "raid1", 00:27:15.681 "superblock": true, 00:27:15.681 "num_base_bdevs": 2, 00:27:15.681 "num_base_bdevs_discovered": 1, 00:27:15.681 "num_base_bdevs_operational": 1, 00:27:15.681 
"base_bdevs_list": [ 00:27:15.681 { 00:27:15.681 "name": null, 00:27:15.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:15.681 "is_configured": false, 00:27:15.681 "data_offset": 2048, 00:27:15.681 "data_size": 63488 00:27:15.681 }, 00:27:15.681 { 00:27:15.681 "name": "BaseBdev2", 00:27:15.681 "uuid": "1a4adbab-28c0-50dd-be7b-eb269f947dbd", 00:27:15.681 "is_configured": true, 00:27:15.681 "data_offset": 2048, 00:27:15.681 "data_size": 63488 00:27:15.681 } 00:27:15.681 ] 00:27:15.681 }' 00:27:15.681 02:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:15.681 02:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:16.618 02:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:16.618 02:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:16.618 02:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:16.618 02:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:16.618 02:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:16.618 02:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:16.618 02:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:16.618 02:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:16.618 "name": "raid_bdev1", 00:27:16.618 "uuid": "19f8526b-3505-431c-89ae-fcc2c44c14e3", 00:27:16.618 "strip_size_kb": 0, 00:27:16.618 "state": "online", 00:27:16.618 "raid_level": "raid1", 00:27:16.618 "superblock": true, 00:27:16.618 "num_base_bdevs": 2, 00:27:16.618 "num_base_bdevs_discovered": 1, 00:27:16.618 "num_base_bdevs_operational": 1, 00:27:16.618 "base_bdevs_list": [ 00:27:16.618 { 00:27:16.618 "name": null, 00:27:16.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:16.618 "is_configured": false, 00:27:16.618 "data_offset": 2048, 00:27:16.618 "data_size": 63488 00:27:16.618 }, 00:27:16.618 { 00:27:16.618 "name": "BaseBdev2", 00:27:16.618 "uuid": "1a4adbab-28c0-50dd-be7b-eb269f947dbd", 00:27:16.618 "is_configured": true, 00:27:16.618 "data_offset": 2048, 00:27:16.618 "data_size": 63488 00:27:16.618 } 00:27:16.618 ] 00:27:16.618 }' 00:27:16.618 02:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:16.618 02:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:16.618 02:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:16.877 02:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:16.877 02:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 2010420 00:27:16.877 02:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2010420 ']' 00:27:16.877 02:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 2010420 00:27:16.877 02:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:27:16.877 02:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:16.877 02:33:07 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2010420 00:27:16.877 02:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:16.877 02:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:16.877 02:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2010420' 00:27:16.877 killing process with pid 2010420 00:27:16.878 02:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 2010420 00:27:16.878 Received shutdown signal, test time was about 60.000000 seconds 00:27:16.878 00:27:16.878 Latency(us) 00:27:16.878 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:16.878 =================================================================================================================== 00:27:16.878 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:16.878 [2024-07-11 02:33:07.143451] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:16.878 02:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 2010420 00:27:16.878 [2024-07-11 02:33:07.143538] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:16.878 [2024-07-11 02:33:07.143579] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:16.878 [2024-07-11 02:33:07.143596] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x232ca60 name raid_bdev1, state offline 00:27:16.878 [2024-07-11 02:33:07.174401] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:27:17.137 00:27:17.137 real 0m37.318s 00:27:17.137 user 0m54.234s 00:27:17.137 sys 0m6.987s 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:17.137 ************************************ 00:27:17.137 END TEST raid_rebuild_test_sb 00:27:17.137 ************************************ 00:27:17.137 02:33:07 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:17.137 02:33:07 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:27:17.137 02:33:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:27:17.137 02:33:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:17.137 02:33:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:17.137 ************************************ 00:27:17.137 START TEST raid_rebuild_test_io 00:27:17.137 ************************************ 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 false true true 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 
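Condensed for readability, the raid_rebuild_test entry point just traced boils down to the sketch below. This is a paraphrase of the traced logic rather than the verbatim test/bdev/bdev_raid.sh source: the local names and the BaseBdev$i member loop come straight from the xtrace, while the trailing echo is only there to make the sketch runnable on its own.

```bash
#!/usr/bin/env bash
# Sketch of how the positional arguments of raid_rebuild_test map onto the
# locals visible in the xtrace above (a readable paraphrase, not the script).

raid_rebuild_test() {
    local raid_level=$1        # "raid1"
    local num_base_bdevs=$2    # 2
    local superblock=$3        # "false": assemble the array without an on-disk superblock
    local background_io=$4     # "true": keep bdevperf I/O running while the rebuild proceeds
    local verify=$5            # "true": byte-compare the rebuilt member afterwards

    # The trace builds the member list by echoing BaseBdev$i for i = 1..num_base_bdevs.
    local base_bdevs=()
    local i
    for ((i = 1; i <= num_base_bdevs; i++)); do
        base_bdevs+=("BaseBdev$i")
    done

    echo "level=$raid_level members=${base_bdevs[*]} sb=$superblock bg_io=$background_io verify=$verify"
}

# Same invocation as 'run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true':
raid_rebuild_test raid1 2 false true true
```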
00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=2016129 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 2016129 /var/tmp/spdk-raid.sock 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@829 -- # '[' -z 2016129 ']' 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:17.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:17.137 02:33:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:17.137 [2024-07-11 02:33:07.528141] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
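The bdevperf invocation above, together with the rpc.py calls that follow in the trace, forms one recurring harness pattern: start bdevperf with -z so it waits for configuration, assemble the RAID1 array over the RPC socket, then release the queued workload with perform_tests. The sketch below reconstructs that pattern; the commands and flags are the ones visible in the trace, the CI workspace paths are assumed from the log, and the rpc wrapper plus the fixed sleep are simplifications (the real harness uses waitforlisten).

```bash
#!/usr/bin/env bash
# Sketch of the bdevperf-with--z harness pattern seen in this trace.
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
SOCK=/var/tmp/spdk-raid.sock
rpc() { "$SPDK/scripts/rpc.py" -s "$SOCK" "$@"; }

# -z: initialize, listen on $SOCK, and hold the 60 s randrw workload until perform_tests.
"$SPDK/build/examples/bdevperf" -r "$SOCK" -T raid_bdev1 -t 60 \
    -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &

sleep 1  # simplification: the traced script waits with waitforlisten instead

# Assemble the array as the trace does: a malloc backing plus a passthru wrapper per member.
# (The rebuild target 'spare' is built the same way, with a bdev_delay_create layer in
# between; its exact flags appear later in the trace.)
for b in BaseBdev1 BaseBdev2; do
    rpc bdev_malloc_create 32 512 -b "${b}_malloc"
    rpc bdev_passthru_create -b "${b}_malloc" -p "$b"
done
rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1

# Kick off the background I/O that keeps running while the rebuild is exercised.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests &
```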
00:27:17.137 [2024-07-11 02:33:07.528209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2016129 ] 00:27:17.137 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:17.137 Zero copy mechanism will not be used. 00:27:17.396 [2024-07-11 02:33:07.664574] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.396 [2024-07-11 02:33:07.712154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.396 [2024-07-11 02:33:07.776932] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:17.396 [2024-07-11 02:33:07.776966] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:18.332 02:33:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:18.332 02:33:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # return 0 00:27:18.332 02:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:18.332 02:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:18.898 BaseBdev1_malloc 00:27:18.898 02:33:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:19.465 [2024-07-11 02:33:09.668206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:19.465 [2024-07-11 02:33:09.668256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:19.465 [2024-07-11 02:33:09.668280] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1601ee0 00:27:19.465 [2024-07-11 02:33:09.668293] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:19.465 [2024-07-11 02:33:09.669981] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:19.465 [2024-07-11 02:33:09.670012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:19.465 BaseBdev1 00:27:19.465 02:33:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:19.465 02:33:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:20.063 BaseBdev2_malloc 00:27:20.063 02:33:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:20.359 [2024-07-11 02:33:10.708468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:20.359 [2024-07-11 02:33:10.708519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:20.359 [2024-07-11 02:33:10.708540] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1603870 00:27:20.359 [2024-07-11 02:33:10.708553] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:20.359 [2024-07-11 02:33:10.710124] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:20.359 [2024-07-11 02:33:10.710154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:20.359 BaseBdev2 00:27:20.359 02:33:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:20.927 spare_malloc 00:27:20.927 02:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:21.495 spare_delay 00:27:21.495 02:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:22.063 [2024-07-11 02:33:12.273308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:22.063 [2024-07-11 02:33:12.273354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:22.063 [2024-07-11 02:33:12.273375] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15fe1d0 00:27:22.063 [2024-07-11 02:33:12.273388] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:22.063 [2024-07-11 02:33:12.274952] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:22.064 [2024-07-11 02:33:12.274983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:22.064 spare 00:27:22.064 02:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:27:22.633 [2024-07-11 02:33:12.786663] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:22.633 [2024-07-11 02:33:12.787995] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:22.633 [2024-07-11 02:33:12.788072] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x15fed30 00:27:22.633 [2024-07-11 02:33:12.788083] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:27:22.633 [2024-07-11 02:33:12.788292] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15ef9d0 00:27:22.633 [2024-07-11 02:33:12.788432] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15fed30 00:27:22.633 [2024-07-11 02:33:12.788442] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x15fed30 00:27:22.633 [2024-07-11 02:33:12.788552] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:22.633 02:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:22.633 02:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:22.633 02:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:22.633 02:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:22.633 02:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:22.633 02:33:12 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:22.633 02:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:22.633 02:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:22.633 02:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:22.633 02:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:22.633 02:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:22.633 02:33:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:22.892 02:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:22.892 "name": "raid_bdev1", 00:27:22.892 "uuid": "a92b2167-9b05-4bf4-be8f-d2589b91a56f", 00:27:22.892 "strip_size_kb": 0, 00:27:22.892 "state": "online", 00:27:22.892 "raid_level": "raid1", 00:27:22.892 "superblock": false, 00:27:22.892 "num_base_bdevs": 2, 00:27:22.892 "num_base_bdevs_discovered": 2, 00:27:22.892 "num_base_bdevs_operational": 2, 00:27:22.892 "base_bdevs_list": [ 00:27:22.892 { 00:27:22.892 "name": "BaseBdev1", 00:27:22.892 "uuid": "03a2d832-4a37-59d9-ae6c-ffb7f7bbb48b", 00:27:22.892 "is_configured": true, 00:27:22.892 "data_offset": 0, 00:27:22.892 "data_size": 65536 00:27:22.892 }, 00:27:22.892 { 00:27:22.892 "name": "BaseBdev2", 00:27:22.892 "uuid": "46219658-71f1-55f1-971f-edf67d6dbc5f", 00:27:22.892 "is_configured": true, 00:27:22.892 "data_offset": 0, 00:27:22.893 "data_size": 65536 00:27:22.893 } 00:27:22.893 ] 00:27:22.893 }' 00:27:22.893 02:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:22.893 02:33:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:23.824 02:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:23.824 02:33:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:27:23.824 [2024-07-11 02:33:14.170567] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:23.824 02:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:27:23.824 02:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.824 02:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:24.082 02:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:27:24.082 02:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:27:24.082 02:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:24.082 02:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:24.340 [2024-07-11 02:33:14.553632] bdev_raid.c: 251:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x1600060 00:27:24.340 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:24.340 Zero copy mechanism will not be used. 00:27:24.340 Running I/O for 60 seconds... 00:27:24.340 [2024-07-11 02:33:14.671266] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:24.340 [2024-07-11 02:33:14.671442] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1600060 00:27:24.340 02:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:24.340 02:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:24.340 02:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:24.340 02:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:24.340 02:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:24.340 02:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:24.340 02:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:24.340 02:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:24.340 02:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:24.340 02:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:24.340 02:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:24.340 02:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:24.598 02:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:24.598 "name": "raid_bdev1", 00:27:24.598 "uuid": "a92b2167-9b05-4bf4-be8f-d2589b91a56f", 00:27:24.598 "strip_size_kb": 0, 00:27:24.598 "state": "online", 00:27:24.598 "raid_level": "raid1", 00:27:24.598 "superblock": false, 00:27:24.598 "num_base_bdevs": 2, 00:27:24.598 "num_base_bdevs_discovered": 1, 00:27:24.598 "num_base_bdevs_operational": 1, 00:27:24.598 "base_bdevs_list": [ 00:27:24.598 { 00:27:24.598 "name": null, 00:27:24.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:24.598 "is_configured": false, 00:27:24.598 "data_offset": 0, 00:27:24.598 "data_size": 65536 00:27:24.598 }, 00:27:24.598 { 00:27:24.598 "name": "BaseBdev2", 00:27:24.598 "uuid": "46219658-71f1-55f1-971f-edf67d6dbc5f", 00:27:24.598 "is_configured": true, 00:27:24.598 "data_offset": 0, 00:27:24.598 "data_size": 65536 00:27:24.598 } 00:27:24.598 ] 00:27:24.598 }' 00:27:24.598 02:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:24.598 02:33:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:25.534 02:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:25.534 [2024-07-11 02:33:15.827139] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:25.534 02:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:27:25.534 [2024-07-11 02:33:15.902195] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x15f18b0 00:27:25.534 [2024-07-11 02:33:15.904518] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:25.793 [2024-07-11 02:33:16.022202] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:25.793 [2024-07-11 02:33:16.022644] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:26.052 [2024-07-11 02:33:16.251634] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:26.052 [2024-07-11 02:33:16.251866] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:26.311 [2024-07-11 02:33:16.591288] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:26.570 [2024-07-11 02:33:16.794683] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:26.570 [2024-07-11 02:33:16.794862] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:26.570 02:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:26.570 02:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:26.570 02:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:26.570 02:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:26.570 02:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:26.570 02:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:26.570 02:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:26.828 02:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:26.828 "name": "raid_bdev1", 00:27:26.828 "uuid": "a92b2167-9b05-4bf4-be8f-d2589b91a56f", 00:27:26.828 "strip_size_kb": 0, 00:27:26.828 "state": "online", 00:27:26.828 "raid_level": "raid1", 00:27:26.828 "superblock": false, 00:27:26.828 "num_base_bdevs": 2, 00:27:26.828 "num_base_bdevs_discovered": 2, 00:27:26.828 "num_base_bdevs_operational": 2, 00:27:26.828 "process": { 00:27:26.828 "type": "rebuild", 00:27:26.828 "target": "spare", 00:27:26.828 "progress": { 00:27:26.828 "blocks": 12288, 00:27:26.828 "percent": 18 00:27:26.828 } 00:27:26.828 }, 00:27:26.828 "base_bdevs_list": [ 00:27:26.828 { 00:27:26.828 "name": "spare", 00:27:26.828 "uuid": "bce5e5a7-f693-5ec9-878f-99b240724a25", 00:27:26.828 "is_configured": true, 00:27:26.828 "data_offset": 0, 00:27:26.828 "data_size": 65536 00:27:26.828 }, 00:27:26.828 { 00:27:26.828 "name": "BaseBdev2", 00:27:26.828 "uuid": "46219658-71f1-55f1-971f-edf67d6dbc5f", 00:27:26.828 "is_configured": true, 00:27:26.828 "data_offset": 0, 00:27:26.828 "data_size": 65536 00:27:26.828 } 00:27:26.828 ] 00:27:26.828 }' 00:27:26.828 02:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:26.828 [2024-07-11 02:33:17.140721] bdev_raid.c: 839:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:27:26.828 02:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:26.828 02:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:26.828 02:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:26.828 02:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:27.087 [2024-07-11 02:33:17.373468] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:27.087 [2024-07-11 02:33:17.382825] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:27.087 [2024-07-11 02:33:17.445831] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:27.346 [2024-07-11 02:33:17.638897] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:27.346 [2024-07-11 02:33:17.648827] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:27.346 [2024-07-11 02:33:17.648855] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:27.346 [2024-07-11 02:33:17.648865] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:27.346 [2024-07-11 02:33:17.663157] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1600060 00:27:27.346 02:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:27.346 02:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:27.346 02:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:27.346 02:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:27.346 02:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:27.346 02:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:27.346 02:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:27.346 02:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:27.346 02:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:27.346 02:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:27.346 02:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:27.346 02:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:27.605 02:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:27.605 "name": "raid_bdev1", 00:27:27.605 "uuid": "a92b2167-9b05-4bf4-be8f-d2589b91a56f", 00:27:27.605 "strip_size_kb": 0, 00:27:27.605 "state": "online", 00:27:27.605 "raid_level": "raid1", 00:27:27.605 "superblock": false, 00:27:27.605 "num_base_bdevs": 2, 00:27:27.605 "num_base_bdevs_discovered": 1, 
00:27:27.605 "num_base_bdevs_operational": 1, 00:27:27.605 "base_bdevs_list": [ 00:27:27.605 { 00:27:27.605 "name": null, 00:27:27.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:27.605 "is_configured": false, 00:27:27.605 "data_offset": 0, 00:27:27.605 "data_size": 65536 00:27:27.605 }, 00:27:27.605 { 00:27:27.605 "name": "BaseBdev2", 00:27:27.605 "uuid": "46219658-71f1-55f1-971f-edf67d6dbc5f", 00:27:27.605 "is_configured": true, 00:27:27.605 "data_offset": 0, 00:27:27.605 "data_size": 65536 00:27:27.605 } 00:27:27.605 ] 00:27:27.605 }' 00:27:27.605 02:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:27.605 02:33:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:28.541 02:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:28.542 02:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:28.542 02:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:28.542 02:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:28.542 02:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:28.542 02:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:28.542 02:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:28.542 02:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:28.542 "name": "raid_bdev1", 00:27:28.542 "uuid": "a92b2167-9b05-4bf4-be8f-d2589b91a56f", 00:27:28.542 "strip_size_kb": 0, 00:27:28.542 "state": "online", 00:27:28.542 "raid_level": "raid1", 00:27:28.542 "superblock": false, 00:27:28.542 "num_base_bdevs": 2, 00:27:28.542 "num_base_bdevs_discovered": 1, 00:27:28.542 "num_base_bdevs_operational": 1, 00:27:28.542 "base_bdevs_list": [ 00:27:28.542 { 00:27:28.542 "name": null, 00:27:28.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:28.542 "is_configured": false, 00:27:28.542 "data_offset": 0, 00:27:28.542 "data_size": 65536 00:27:28.542 }, 00:27:28.542 { 00:27:28.542 "name": "BaseBdev2", 00:27:28.542 "uuid": "46219658-71f1-55f1-971f-edf67d6dbc5f", 00:27:28.542 "is_configured": true, 00:27:28.542 "data_offset": 0, 00:27:28.542 "data_size": 65536 00:27:28.542 } 00:27:28.542 ] 00:27:28.542 }' 00:27:28.542 02:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:28.542 02:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:28.542 02:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:28.542 02:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:28.542 02:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:28.800 [2024-07-11 02:33:19.186156] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:29.059 02:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:29.059 [2024-07-11 02:33:19.261255] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x15f1870 00:27:29.059 [2024-07-11 02:33:19.262697] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:29.059 [2024-07-11 02:33:19.381159] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:29.059 [2024-07-11 02:33:19.381471] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:29.318 [2024-07-11 02:33:19.500014] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:29.319 [2024-07-11 02:33:19.500236] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:29.577 [2024-07-11 02:33:19.855726] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:29.577 [2024-07-11 02:33:19.856164] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:29.836 [2024-07-11 02:33:20.086846] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:29.836 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:29.836 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:29.836 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:29.836 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:29.836 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:29.836 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:29.836 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:30.095 [2024-07-11 02:33:20.326330] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:27:30.095 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:30.095 "name": "raid_bdev1", 00:27:30.095 "uuid": "a92b2167-9b05-4bf4-be8f-d2589b91a56f", 00:27:30.095 "strip_size_kb": 0, 00:27:30.095 "state": "online", 00:27:30.095 "raid_level": "raid1", 00:27:30.095 "superblock": false, 00:27:30.095 "num_base_bdevs": 2, 00:27:30.095 "num_base_bdevs_discovered": 2, 00:27:30.095 "num_base_bdevs_operational": 2, 00:27:30.095 "process": { 00:27:30.095 "type": "rebuild", 00:27:30.095 "target": "spare", 00:27:30.095 "progress": { 00:27:30.095 "blocks": 14336, 00:27:30.095 "percent": 21 00:27:30.095 } 00:27:30.095 }, 00:27:30.095 "base_bdevs_list": [ 00:27:30.095 { 00:27:30.095 "name": "spare", 00:27:30.095 "uuid": "bce5e5a7-f693-5ec9-878f-99b240724a25", 00:27:30.095 "is_configured": true, 00:27:30.095 "data_offset": 0, 00:27:30.095 "data_size": 65536 00:27:30.095 }, 00:27:30.095 { 00:27:30.095 "name": "BaseBdev2", 00:27:30.095 "uuid": "46219658-71f1-55f1-971f-edf67d6dbc5f", 00:27:30.095 "is_configured": true, 00:27:30.095 "data_offset": 0, 00:27:30.095 "data_size": 65536 00:27:30.095 } 00:27:30.095 ] 00:27:30.095 }' 00:27:30.095 02:33:20 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:30.095 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:30.095 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:30.354 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:30.354 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:27:30.354 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:27:30.354 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:27:30.354 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:27:30.354 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=853 00:27:30.354 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:30.354 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:30.354 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:30.355 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:30.355 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:30.355 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:30.355 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:30.355 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:30.355 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:30.355 "name": "raid_bdev1", 00:27:30.355 "uuid": "a92b2167-9b05-4bf4-be8f-d2589b91a56f", 00:27:30.355 "strip_size_kb": 0, 00:27:30.355 "state": "online", 00:27:30.355 "raid_level": "raid1", 00:27:30.355 "superblock": false, 00:27:30.355 "num_base_bdevs": 2, 00:27:30.355 "num_base_bdevs_discovered": 2, 00:27:30.355 "num_base_bdevs_operational": 2, 00:27:30.355 "process": { 00:27:30.355 "type": "rebuild", 00:27:30.355 "target": "spare", 00:27:30.355 "progress": { 00:27:30.355 "blocks": 18432, 00:27:30.355 "percent": 28 00:27:30.355 } 00:27:30.355 }, 00:27:30.355 "base_bdevs_list": [ 00:27:30.355 { 00:27:30.355 "name": "spare", 00:27:30.355 "uuid": "bce5e5a7-f693-5ec9-878f-99b240724a25", 00:27:30.355 "is_configured": true, 00:27:30.355 "data_offset": 0, 00:27:30.355 "data_size": 65536 00:27:30.355 }, 00:27:30.355 { 00:27:30.355 "name": "BaseBdev2", 00:27:30.355 "uuid": "46219658-71f1-55f1-971f-edf67d6dbc5f", 00:27:30.355 "is_configured": true, 00:27:30.355 "data_offset": 0, 00:27:30.355 "data_size": 65536 00:27:30.355 } 00:27:30.355 ] 00:27:30.355 }' 00:27:30.355 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:30.614 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:30.614 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:30.614 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 
-- # [[ spare == \s\p\a\r\e ]] 00:27:30.614 02:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:30.872 [2024-07-11 02:33:21.143274] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:27:31.441 [2024-07-11 02:33:21.811918] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:27:31.728 02:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:31.728 02:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:31.728 02:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:31.728 02:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:31.728 02:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:31.729 02:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:31.729 02:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:31.729 02:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:31.729 [2024-07-11 02:33:21.921840] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:27:31.729 02:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:31.729 "name": "raid_bdev1", 00:27:31.729 "uuid": "a92b2167-9b05-4bf4-be8f-d2589b91a56f", 00:27:31.729 "strip_size_kb": 0, 00:27:31.729 "state": "online", 00:27:31.729 "raid_level": "raid1", 00:27:31.729 "superblock": false, 00:27:31.729 "num_base_bdevs": 2, 00:27:31.729 "num_base_bdevs_discovered": 2, 00:27:31.729 "num_base_bdevs_operational": 2, 00:27:31.729 "process": { 00:27:31.729 "type": "rebuild", 00:27:31.729 "target": "spare", 00:27:31.729 "progress": { 00:27:31.729 "blocks": 43008, 00:27:31.729 "percent": 65 00:27:31.729 } 00:27:31.729 }, 00:27:31.729 "base_bdevs_list": [ 00:27:31.729 { 00:27:31.729 "name": "spare", 00:27:31.729 "uuid": "bce5e5a7-f693-5ec9-878f-99b240724a25", 00:27:31.729 "is_configured": true, 00:27:31.729 "data_offset": 0, 00:27:31.729 "data_size": 65536 00:27:31.729 }, 00:27:31.729 { 00:27:31.729 "name": "BaseBdev2", 00:27:31.729 "uuid": "46219658-71f1-55f1-971f-edf67d6dbc5f", 00:27:31.729 "is_configured": true, 00:27:31.729 "data_offset": 0, 00:27:31.729 "data_size": 65536 00:27:31.729 } 00:27:31.729 ] 00:27:31.729 }' 00:27:31.729 02:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:31.729 [2024-07-11 02:33:22.149951] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:27:31.988 02:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:31.988 02:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:31.988 02:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:31.988 02:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:32.924 02:33:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:32.924 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:32.924 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:32.924 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:32.924 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:32.924 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:32.924 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:32.924 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:32.924 [2024-07-11 02:33:23.273108] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:33.183 [2024-07-11 02:33:23.373358] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:33.183 [2024-07-11 02:33:23.375098] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:33.183 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:33.183 "name": "raid_bdev1", 00:27:33.183 "uuid": "a92b2167-9b05-4bf4-be8f-d2589b91a56f", 00:27:33.183 "strip_size_kb": 0, 00:27:33.183 "state": "online", 00:27:33.183 "raid_level": "raid1", 00:27:33.183 "superblock": false, 00:27:33.183 "num_base_bdevs": 2, 00:27:33.183 "num_base_bdevs_discovered": 2, 00:27:33.183 "num_base_bdevs_operational": 2, 00:27:33.183 "base_bdevs_list": [ 00:27:33.183 { 00:27:33.183 "name": "spare", 00:27:33.183 "uuid": "bce5e5a7-f693-5ec9-878f-99b240724a25", 00:27:33.183 "is_configured": true, 00:27:33.183 "data_offset": 0, 00:27:33.183 "data_size": 65536 00:27:33.183 }, 00:27:33.183 { 00:27:33.183 "name": "BaseBdev2", 00:27:33.183 "uuid": "46219658-71f1-55f1-971f-edf67d6dbc5f", 00:27:33.183 "is_configured": true, 00:27:33.183 "data_offset": 0, 00:27:33.183 "data_size": 65536 00:27:33.183 } 00:27:33.183 ] 00:27:33.183 }' 00:27:33.183 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:33.183 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:33.183 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:33.183 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:33.183 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:27:33.183 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:33.183 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:33.183 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:33.183 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:33.183 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:33.183 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:33.183 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:33.442 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:33.442 "name": "raid_bdev1", 00:27:33.442 "uuid": "a92b2167-9b05-4bf4-be8f-d2589b91a56f", 00:27:33.442 "strip_size_kb": 0, 00:27:33.442 "state": "online", 00:27:33.442 "raid_level": "raid1", 00:27:33.442 "superblock": false, 00:27:33.442 "num_base_bdevs": 2, 00:27:33.442 "num_base_bdevs_discovered": 2, 00:27:33.442 "num_base_bdevs_operational": 2, 00:27:33.442 "base_bdevs_list": [ 00:27:33.442 { 00:27:33.442 "name": "spare", 00:27:33.442 "uuid": "bce5e5a7-f693-5ec9-878f-99b240724a25", 00:27:33.442 "is_configured": true, 00:27:33.442 "data_offset": 0, 00:27:33.442 "data_size": 65536 00:27:33.442 }, 00:27:33.442 { 00:27:33.442 "name": "BaseBdev2", 00:27:33.442 "uuid": "46219658-71f1-55f1-971f-edf67d6dbc5f", 00:27:33.442 "is_configured": true, 00:27:33.442 "data_offset": 0, 00:27:33.442 "data_size": 65536 00:27:33.442 } 00:27:33.442 ] 00:27:33.442 }' 00:27:33.442 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:33.442 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:33.442 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:33.702 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:33.702 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:33.702 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:33.702 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:33.702 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:33.702 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:33.702 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:33.702 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:33.702 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:33.702 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:33.702 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:33.702 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:33.702 02:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:33.961 02:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:33.961 "name": "raid_bdev1", 00:27:33.961 "uuid": "a92b2167-9b05-4bf4-be8f-d2589b91a56f", 00:27:33.961 "strip_size_kb": 0, 00:27:33.961 "state": "online", 00:27:33.961 "raid_level": "raid1", 00:27:33.961 "superblock": false, 00:27:33.961 "num_base_bdevs": 2, 00:27:33.961 "num_base_bdevs_discovered": 2, 00:27:33.961 "num_base_bdevs_operational": 2, 00:27:33.961 "base_bdevs_list": [ 00:27:33.961 { 00:27:33.961 "name": "spare", 
00:27:33.961 "uuid": "bce5e5a7-f693-5ec9-878f-99b240724a25", 00:27:33.961 "is_configured": true, 00:27:33.961 "data_offset": 0, 00:27:33.961 "data_size": 65536 00:27:33.961 }, 00:27:33.961 { 00:27:33.961 "name": "BaseBdev2", 00:27:33.961 "uuid": "46219658-71f1-55f1-971f-edf67d6dbc5f", 00:27:33.961 "is_configured": true, 00:27:33.961 "data_offset": 0, 00:27:33.961 "data_size": 65536 00:27:33.961 } 00:27:33.961 ] 00:27:33.961 }' 00:27:33.961 02:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:33.961 02:33:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:34.528 02:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:34.787 [2024-07-11 02:33:25.070319] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:34.787 [2024-07-11 02:33:25.070350] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:34.787 00:27:34.787 Latency(us) 00:27:34.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:34.787 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:27:34.787 raid_bdev1 : 10.57 97.93 293.79 0.00 0.00 13537.05 284.94 118534.68 00:27:34.788 =================================================================================================================== 00:27:34.788 Total : 97.93 293.79 0.00 0.00 13537.05 284.94 118534.68 00:27:34.788 [2024-07-11 02:33:25.154497] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:34.788 [2024-07-11 02:33:25.154524] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:34.788 [2024-07-11 02:33:25.154595] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:34.788 [2024-07-11 02:33:25.154607] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15fed30 name raid_bdev1, state offline 00:27:34.788 0 00:27:34.788 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:34.788 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:27:35.047 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:27:35.047 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:27:35.047 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:27:35.047 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:27:35.047 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:35.047 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:27:35.047 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:35.047 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:35.047 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:35.047 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:27:35.047 02:33:25 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:35.047 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:35.047 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:27:35.306 /dev/nbd0 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:35.306 1+0 records in 00:27:35.306 1+0 records out 00:27:35.306 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281593 s, 14.5 MB/s 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- 
# local i 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:35.306 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:27:35.565 /dev/nbd1 00:27:35.565 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:35.565 02:33:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:35.565 02:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:27:35.565 02:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:27:35.565 02:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:35.565 02:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:35.565 02:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:27:35.825 02:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:27:35.825 02:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:35.825 02:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:35.825 02:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:35.825 1+0 records in 00:27:35.825 1+0 records out 00:27:35.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287849 s, 14.2 MB/s 00:27:35.825 02:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:35.825 02:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:27:35.825 02:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:35.825 02:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:35.825 02:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:27:35.825 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:35.825 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:35.825 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:27:35.825 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:27:35.825 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:35.825 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:27:35.825 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:35.825 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:27:35.825 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:35.825 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:36.084 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:36.084 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:36.084 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:36.084 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:36.084 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:36.084 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:36.084 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:27:36.084 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:36.084 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:36.084 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:36.084 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:36.084 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:36.084 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:27:36.084 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:36.084 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:36.343 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:36.343 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:36.343 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:36.343 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:36.343 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:36.343 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:36.343 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:27:36.343 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:36.343 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:27:36.343 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 2016129 00:27:36.343 02:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@948 -- # '[' -z 2016129 ']' 00:27:36.343 02:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # kill -0 2016129 00:27:36.343 02:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # uname 00:27:36.343 02:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:36.343 02:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2016129 00:27:36.343 02:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:36.343 02:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
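The stop sequence above mirrors the start sequence: nbd_stop_disk is issued over the raid RPC socket, then waitfornbd_exit polls /proc/partitions until the device name disappears. The start-side counterpart, waitfornbd (autotest_common.sh@866-887 in the trace), both polls for the device to appear and proves it answers I/O with one direct read. A simplified standalone sketch of that helper, assuming a /tmp scratch file and a 0.1 s retry interval in place of the harness's workspace path and timing:

    # waitfornbd, simplified: wait for /dev/$1 to register, then read-check it.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # retry interval is an assumption, not from the trace
        done
        for ((i = 1; i <= 20; i++)); do
            # One direct-I/O read proves the kernel can service the device.
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1
    }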
00:27:36.343 02:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2016129' 00:27:36.343 killing process with pid 2016129 00:27:36.343 02:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # kill 2016129 00:27:36.343 Received shutdown signal, test time was about 12.091626 seconds 00:27:36.343 00:27:36.343 Latency(us) 00:27:36.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.343 =================================================================================================================== 00:27:36.343 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:36.343 [2024-07-11 02:33:26.677118] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:36.343 02:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # wait 2016129 00:27:36.343 [2024-07-11 02:33:26.697800] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:27:36.602 00:27:36.602 real 0m19.429s 00:27:36.602 user 0m30.772s 00:27:36.602 sys 0m3.227s 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:36.602 ************************************ 00:27:36.602 END TEST raid_rebuild_test_io 00:27:36.602 ************************************ 00:27:36.602 02:33:26 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:36.602 02:33:26 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:27:36.602 02:33:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:27:36.602 02:33:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:36.602 02:33:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:36.602 ************************************ 00:27:36.602 START TEST raid_rebuild_test_sb_io 00:27:36.602 ************************************ 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true true true 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:36.602 02:33:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=2018900 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 2018900 /var/tmp/spdk-raid.sock 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@829 -- # '[' -z 2018900 ']' 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:36.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:36.602 02:33:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:36.861 [2024-07-11 02:33:27.051736] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:27:36.861 [2024-07-11 02:33:27.051822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2018900 ] 00:27:36.861 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:36.861 Zero copy mechanism will not be used. 
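bdevperf here doubles as the RPC target: it is launched with -r pointing at a private UNIX socket, and waitforlisten blocks until the application accepts RPCs before any bdev is created. A rough sketch of that start-and-wait shape, assuming a plain socket-existence poll where the real waitforlisten also probes the new process over RPC:

    sock=/var/tmp/spdk-raid.sock
    ./build/examples/bdevperf -r "$sock" -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    for ((i = 0; i < 100; i++)); do
        [ -S "$sock" ] && break     # socket test is a simplification
        kill -0 "$raid_pid" 2>/dev/null || { echo "bdevperf died" >&2; exit 1; }
        sleep 0.1
    done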
00:27:36.861 [2024-07-11 02:33:27.190450] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.861 [2024-07-11 02:33:27.243697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.119 [2024-07-11 02:33:27.308278] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:37.119 [2024-07-11 02:33:27.308311] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:37.688 02:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:37.688 02:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # return 0 00:27:37.688 02:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:37.688 02:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:37.946 BaseBdev1_malloc 00:27:37.946 02:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:38.205 [2024-07-11 02:33:28.503271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:38.205 [2024-07-11 02:33:28.503317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:38.205 [2024-07-11 02:33:28.503345] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x193fee0 00:27:38.205 [2024-07-11 02:33:28.503359] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:38.205 [2024-07-11 02:33:28.505039] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:38.205 [2024-07-11 02:33:28.505068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:38.205 BaseBdev1 00:27:38.205 02:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:38.205 02:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:38.465 BaseBdev2_malloc 00:27:38.465 02:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:38.724 [2024-07-11 02:33:29.053421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:38.724 [2024-07-11 02:33:29.053467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:38.724 [2024-07-11 02:33:29.053488] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1941870 00:27:38.724 [2024-07-11 02:33:29.053501] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:38.724 [2024-07-11 02:33:29.055001] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:38.724 [2024-07-11 02:33:29.055030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:38.724 BaseBdev2 00:27:38.724 02:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b spare_malloc 00:27:38.984 spare_malloc 00:27:38.984 02:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:39.243 spare_delay 00:27:39.243 02:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:39.502 [2024-07-11 02:33:29.852783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:39.502 [2024-07-11 02:33:29.852829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:39.502 [2024-07-11 02:33:29.852850] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x193c1d0 00:27:39.502 [2024-07-11 02:33:29.852862] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:39.502 [2024-07-11 02:33:29.854425] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:39.502 [2024-07-11 02:33:29.854455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:39.502 spare 00:27:39.502 02:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:27:39.761 [2024-07-11 02:33:30.089433] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:39.761 [2024-07-11 02:33:30.090733] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:39.761 [2024-07-11 02:33:30.090895] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x193cd30 00:27:39.761 [2024-07-11 02:33:30.090909] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:39.761 [2024-07-11 02:33:30.091102] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x193e6d0 00:27:39.761 [2024-07-11 02:33:30.091240] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x193cd30 00:27:39.761 [2024-07-11 02:33:30.091250] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x193cd30 00:27:39.761 [2024-07-11 02:33:30.091349] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:39.761 02:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:39.761 02:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:39.761 02:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:39.761 02:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:39.761 02:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:39.761 02:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:39.761 02:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:39.761 02:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:39.761 02:33:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:39.761 02:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:39.761 02:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:39.761 02:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:40.020 02:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:40.020 "name": "raid_bdev1", 00:27:40.020 "uuid": "b9a6fee8-43a4-4e43-b1a2-5875fb1df048", 00:27:40.020 "strip_size_kb": 0, 00:27:40.020 "state": "online", 00:27:40.020 "raid_level": "raid1", 00:27:40.020 "superblock": true, 00:27:40.020 "num_base_bdevs": 2, 00:27:40.020 "num_base_bdevs_discovered": 2, 00:27:40.020 "num_base_bdevs_operational": 2, 00:27:40.020 "base_bdevs_list": [ 00:27:40.020 { 00:27:40.020 "name": "BaseBdev1", 00:27:40.020 "uuid": "fd8a108e-cb19-5db6-8ca3-3092a778dcf6", 00:27:40.020 "is_configured": true, 00:27:40.020 "data_offset": 2048, 00:27:40.020 "data_size": 63488 00:27:40.020 }, 00:27:40.020 { 00:27:40.020 "name": "BaseBdev2", 00:27:40.020 "uuid": "11e18c94-7e4f-5cd3-b89b-dcd9c174f0c3", 00:27:40.020 "is_configured": true, 00:27:40.020 "data_offset": 2048, 00:27:40.020 "data_size": 63488 00:27:40.020 } 00:27:40.020 ] 00:27:40.020 }' 00:27:40.020 02:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:40.020 02:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:40.588 02:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:27:40.588 02:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:40.846 [2024-07-11 02:33:31.204614] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:40.846 02:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:27:40.846 02:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:40.847 02:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:41.105 02:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:27:41.105 02:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:27:41.105 02:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:41.105 02:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:41.364 [2024-07-11 02:33:31.563371] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x193e580 00:27:41.364 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:41.364 Zero copy mechanism will not be used. 00:27:41.364 Running I/O for 60 seconds... 
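Pulling the setup traces together, the sb_io stack is: one malloc+passthru pair per base bdev, a delay bdev under the spare so the rebuild has observable latency to work against, and a raid1 created with -s so a superblock is written to every member. The RPC sequence as it appears in the trace, with $rpc standing in for the rpc.py invocation against the raid socket:

    rpc="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc
    $rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    $rpc bdev_malloc_create 32 512 -b BaseBdev2_malloc
    $rpc bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
    $rpc bdev_malloc_create 32 512 -b spare_malloc
    # -r/-t (read) are 0; -w/-n add 100000 us average and p99 write latency.
    $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $rpc bdev_passthru_create -b spare_delay -p spare
    # -s writes superblock metadata so members carry the array identity.
    $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1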
00:27:41.364 [2024-07-11 02:33:31.687696] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:41.364 [2024-07-11 02:33:31.695856] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x193e580 00:27:41.364 02:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:41.364 02:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:41.364 02:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:41.364 02:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:41.364 02:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:41.364 02:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:41.364 02:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:41.364 02:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:41.364 02:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:41.364 02:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:41.364 02:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:41.364 02:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:41.623 02:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:41.623 "name": "raid_bdev1", 00:27:41.623 "uuid": "b9a6fee8-43a4-4e43-b1a2-5875fb1df048", 00:27:41.623 "strip_size_kb": 0, 00:27:41.623 "state": "online", 00:27:41.623 "raid_level": "raid1", 00:27:41.623 "superblock": true, 00:27:41.623 "num_base_bdevs": 2, 00:27:41.623 "num_base_bdevs_discovered": 1, 00:27:41.623 "num_base_bdevs_operational": 1, 00:27:41.623 "base_bdevs_list": [ 00:27:41.623 { 00:27:41.623 "name": null, 00:27:41.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.623 "is_configured": false, 00:27:41.623 "data_offset": 2048, 00:27:41.623 "data_size": 63488 00:27:41.623 }, 00:27:41.623 { 00:27:41.623 "name": "BaseBdev2", 00:27:41.623 "uuid": "11e18c94-7e4f-5cd3-b89b-dcd9c174f0c3", 00:27:41.623 "is_configured": true, 00:27:41.623 "data_offset": 2048, 00:27:41.623 "data_size": 63488 00:27:41.623 } 00:27:41.623 ] 00:27:41.623 }' 00:27:41.623 02:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:41.623 02:33:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:42.585 02:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:42.585 [2024-07-11 02:33:32.867361] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:42.585 [2024-07-11 02:33:32.926672] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18252b0 00:27:42.585 [2024-07-11 02:33:32.929174] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:42.585 02:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 
-- # sleep 1 00:27:42.902 [2024-07-11 02:33:33.040257] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:42.902 [2024-07-11 02:33:33.040669] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:42.902 [2024-07-11 02:33:33.268693] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:42.902 [2024-07-11 02:33:33.268957] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:43.468 [2024-07-11 02:33:33.737272] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:43.468 [2024-07-11 02:33:33.737513] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:43.726 02:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:43.726 02:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:43.726 02:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:43.726 02:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:43.726 02:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:43.726 02:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:43.726 02:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:43.986 02:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:43.986 "name": "raid_bdev1", 00:27:43.986 "uuid": "b9a6fee8-43a4-4e43-b1a2-5875fb1df048", 00:27:43.986 "strip_size_kb": 0, 00:27:43.986 "state": "online", 00:27:43.986 "raid_level": "raid1", 00:27:43.986 "superblock": true, 00:27:43.986 "num_base_bdevs": 2, 00:27:43.986 "num_base_bdevs_discovered": 2, 00:27:43.986 "num_base_bdevs_operational": 2, 00:27:43.986 "process": { 00:27:43.986 "type": "rebuild", 00:27:43.986 "target": "spare", 00:27:43.986 "progress": { 00:27:43.986 "blocks": 14336, 00:27:43.986 "percent": 22 00:27:43.986 } 00:27:43.986 }, 00:27:43.986 "base_bdevs_list": [ 00:27:43.986 { 00:27:43.986 "name": "spare", 00:27:43.986 "uuid": "1f2427aa-132c-576b-ba1d-06d86fdff682", 00:27:43.986 "is_configured": true, 00:27:43.986 "data_offset": 2048, 00:27:43.986 "data_size": 63488 00:27:43.986 }, 00:27:43.986 { 00:27:43.986 "name": "BaseBdev2", 00:27:43.986 "uuid": "11e18c94-7e4f-5cd3-b89b-dcd9c174f0c3", 00:27:43.986 "is_configured": true, 00:27:43.986 "data_offset": 2048, 00:27:43.986 "data_size": 63488 00:27:43.986 } 00:27:43.986 ] 00:27:43.986 }' 00:27:43.986 02:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:43.986 02:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:43.986 02:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:43.986 02:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:43.986 02:33:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:44.245 [2024-07-11 02:33:34.423822] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:27:44.245 [2024-07-11 02:33:34.432374] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:27:44.245 [2024-07-11 02:33:34.605210] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:44.245 [2024-07-11 02:33:34.661395] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:27:44.503 [2024-07-11 02:33:34.748864] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:44.503 [2024-07-11 02:33:34.759155] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:44.503 [2024-07-11 02:33:34.759182] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:44.503 [2024-07-11 02:33:34.759193] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:44.503 [2024-07-11 02:33:34.781699] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x193e580 00:27:44.503 02:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:44.503 02:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:44.503 02:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:44.503 02:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:44.503 02:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:44.503 02:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:44.503 02:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:44.503 02:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:44.503 02:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:44.503 02:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:44.503 02:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:44.503 02:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:44.762 02:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:44.762 "name": "raid_bdev1", 00:27:44.762 "uuid": "b9a6fee8-43a4-4e43-b1a2-5875fb1df048", 00:27:44.762 "strip_size_kb": 0, 00:27:44.762 "state": "online", 00:27:44.762 "raid_level": "raid1", 00:27:44.762 "superblock": true, 00:27:44.762 "num_base_bdevs": 2, 00:27:44.762 "num_base_bdevs_discovered": 1, 00:27:44.762 "num_base_bdevs_operational": 1, 00:27:44.762 "base_bdevs_list": [ 00:27:44.762 { 00:27:44.762 "name": null, 00:27:44.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:44.762 "is_configured": false, 00:27:44.762 
"data_offset": 2048, 00:27:44.762 "data_size": 63488 00:27:44.762 }, 00:27:44.762 { 00:27:44.762 "name": "BaseBdev2", 00:27:44.762 "uuid": "11e18c94-7e4f-5cd3-b89b-dcd9c174f0c3", 00:27:44.762 "is_configured": true, 00:27:44.762 "data_offset": 2048, 00:27:44.762 "data_size": 63488 00:27:44.762 } 00:27:44.762 ] 00:27:44.762 }' 00:27:44.762 02:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:44.762 02:33:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:45.330 02:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:45.330 02:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:45.330 02:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:45.330 02:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:45.330 02:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:45.330 02:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.330 02:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:45.899 02:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:45.899 "name": "raid_bdev1", 00:27:45.899 "uuid": "b9a6fee8-43a4-4e43-b1a2-5875fb1df048", 00:27:45.899 "strip_size_kb": 0, 00:27:45.899 "state": "online", 00:27:45.899 "raid_level": "raid1", 00:27:45.899 "superblock": true, 00:27:45.899 "num_base_bdevs": 2, 00:27:45.899 "num_base_bdevs_discovered": 1, 00:27:45.899 "num_base_bdevs_operational": 1, 00:27:45.899 "base_bdevs_list": [ 00:27:45.899 { 00:27:45.899 "name": null, 00:27:45.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:45.899 "is_configured": false, 00:27:45.899 "data_offset": 2048, 00:27:45.899 "data_size": 63488 00:27:45.899 }, 00:27:45.899 { 00:27:45.899 "name": "BaseBdev2", 00:27:45.899 "uuid": "11e18c94-7e4f-5cd3-b89b-dcd9c174f0c3", 00:27:45.899 "is_configured": true, 00:27:45.899 "data_offset": 2048, 00:27:45.899 "data_size": 63488 00:27:45.899 } 00:27:45.899 ] 00:27:45.899 }' 00:27:45.899 02:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:45.899 02:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:45.899 02:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:45.899 02:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:45.899 02:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:46.468 [2024-07-11 02:33:36.599038] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:46.468 02:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:46.468 [2024-07-11 02:33:36.675358] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1825360 00:27:46.468 [2024-07-11 02:33:36.676844] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 
00:27:46.468 [2024-07-11 02:33:36.807494] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:46.727 [2024-07-11 02:33:37.048203] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:46.986 [2024-07-11 02:33:37.313257] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:46.986 [2024-07-11 02:33:37.313584] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:47.245 [2024-07-11 02:33:37.523277] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:47.245 [2024-07-11 02:33:37.523478] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:47.504 02:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:47.504 02:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:47.504 02:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:47.504 02:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:47.504 02:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:47.504 02:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.505 02:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:47.505 [2024-07-11 02:33:37.845153] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:27:47.764 02:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:47.764 "name": "raid_bdev1", 00:27:47.764 "uuid": "b9a6fee8-43a4-4e43-b1a2-5875fb1df048", 00:27:47.764 "strip_size_kb": 0, 00:27:47.764 "state": "online", 00:27:47.764 "raid_level": "raid1", 00:27:47.764 "superblock": true, 00:27:47.764 "num_base_bdevs": 2, 00:27:47.764 "num_base_bdevs_discovered": 2, 00:27:47.764 "num_base_bdevs_operational": 2, 00:27:47.764 "process": { 00:27:47.764 "type": "rebuild", 00:27:47.764 "target": "spare", 00:27:47.764 "progress": { 00:27:47.764 "blocks": 14336, 00:27:47.764 "percent": 22 00:27:47.764 } 00:27:47.764 }, 00:27:47.764 "base_bdevs_list": [ 00:27:47.764 { 00:27:47.764 "name": "spare", 00:27:47.764 "uuid": "1f2427aa-132c-576b-ba1d-06d86fdff682", 00:27:47.764 "is_configured": true, 00:27:47.764 "data_offset": 2048, 00:27:47.764 "data_size": 63488 00:27:47.764 }, 00:27:47.764 { 00:27:47.764 "name": "BaseBdev2", 00:27:47.764 "uuid": "11e18c94-7e4f-5cd3-b89b-dcd9c174f0c3", 00:27:47.764 "is_configured": true, 00:27:47.764 "data_offset": 2048, 00:27:47.764 "data_size": 63488 00:27:47.764 } 00:27:47.764 ] 00:27:47.764 }' 00:27:47.764 02:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:47.764 02:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:47.764 02:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.target // "none"' 00:27:47.764 [2024-07-11 02:33:38.080126] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:47.764 02:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:47.764 02:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:27:47.764 02:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:27:47.764 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:27:47.764 02:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:27:47.764 02:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:27:47.764 02:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:27:47.764 02:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=871 00:27:47.764 02:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:47.764 02:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:47.764 02:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:47.764 02:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:47.764 02:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:47.764 02:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:47.764 02:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.764 02:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:48.023 02:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:48.023 "name": "raid_bdev1", 00:27:48.023 "uuid": "b9a6fee8-43a4-4e43-b1a2-5875fb1df048", 00:27:48.023 "strip_size_kb": 0, 00:27:48.023 "state": "online", 00:27:48.023 "raid_level": "raid1", 00:27:48.023 "superblock": true, 00:27:48.023 "num_base_bdevs": 2, 00:27:48.023 "num_base_bdevs_discovered": 2, 00:27:48.023 "num_base_bdevs_operational": 2, 00:27:48.023 "process": { 00:27:48.023 "type": "rebuild", 00:27:48.023 "target": "spare", 00:27:48.023 "progress": { 00:27:48.023 "blocks": 18432, 00:27:48.023 "percent": 29 00:27:48.023 } 00:27:48.023 }, 00:27:48.023 "base_bdevs_list": [ 00:27:48.023 { 00:27:48.023 "name": "spare", 00:27:48.023 "uuid": "1f2427aa-132c-576b-ba1d-06d86fdff682", 00:27:48.023 "is_configured": true, 00:27:48.023 "data_offset": 2048, 00:27:48.023 "data_size": 63488 00:27:48.023 }, 00:27:48.023 { 00:27:48.023 "name": "BaseBdev2", 00:27:48.023 "uuid": "11e18c94-7e4f-5cd3-b89b-dcd9c174f0c3", 00:27:48.023 "is_configured": true, 00:27:48.023 "data_offset": 2048, 00:27:48.023 "data_size": 63488 00:27:48.023 } 00:27:48.023 ] 00:27:48.023 }' 00:27:48.023 02:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:48.282 02:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:48.282 02:33:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:48.282 02:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:48.282 02:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:48.541 [2024-07-11 02:33:38.740068] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:27:48.800 [2024-07-11 02:33:39.085623] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:27:48.800 [2024-07-11 02:33:39.085910] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:27:49.059 [2024-07-11 02:33:39.442611] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:27:49.318 02:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:49.318 02:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:49.318 02:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:49.318 02:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:49.318 02:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:49.318 02:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:49.318 02:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:49.318 02:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:49.577 02:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:49.577 "name": "raid_bdev1", 00:27:49.577 "uuid": "b9a6fee8-43a4-4e43-b1a2-5875fb1df048", 00:27:49.577 "strip_size_kb": 0, 00:27:49.577 "state": "online", 00:27:49.577 "raid_level": "raid1", 00:27:49.577 "superblock": true, 00:27:49.577 "num_base_bdevs": 2, 00:27:49.577 "num_base_bdevs_discovered": 2, 00:27:49.577 "num_base_bdevs_operational": 2, 00:27:49.577 "process": { 00:27:49.577 "type": "rebuild", 00:27:49.577 "target": "spare", 00:27:49.577 "progress": { 00:27:49.577 "blocks": 43008, 00:27:49.577 "percent": 67 00:27:49.577 } 00:27:49.577 }, 00:27:49.577 "base_bdevs_list": [ 00:27:49.577 { 00:27:49.577 "name": "spare", 00:27:49.577 "uuid": "1f2427aa-132c-576b-ba1d-06d86fdff682", 00:27:49.577 "is_configured": true, 00:27:49.577 "data_offset": 2048, 00:27:49.577 "data_size": 63488 00:27:49.577 }, 00:27:49.577 { 00:27:49.577 "name": "BaseBdev2", 00:27:49.577 "uuid": "11e18c94-7e4f-5cd3-b89b-dcd9c174f0c3", 00:27:49.577 "is_configured": true, 00:27:49.577 "data_offset": 2048, 00:27:49.577 "data_size": 63488 00:27:49.577 } 00:27:49.577 ] 00:27:49.577 }' 00:27:49.577 02:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:49.577 02:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:49.577 02:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:49.577 02:33:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:49.577 02:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:50.146 [2024-07-11 02:33:40.332306] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:27:50.146 [2024-07-11 02:33:40.548770] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:27:50.714 [2024-07-11 02:33:40.870437] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:50.714 [2024-07-11 02:33:40.880108] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:50.714 [2024-07-11 02:33:40.890248] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:50.714 02:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:50.714 02:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:50.714 02:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:50.714 02:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:50.714 02:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:50.714 02:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:50.714 02:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:50.714 02:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:50.973 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:50.973 "name": "raid_bdev1", 00:27:50.973 "uuid": "b9a6fee8-43a4-4e43-b1a2-5875fb1df048", 00:27:50.973 "strip_size_kb": 0, 00:27:50.973 "state": "online", 00:27:50.973 "raid_level": "raid1", 00:27:50.973 "superblock": true, 00:27:50.973 "num_base_bdevs": 2, 00:27:50.973 "num_base_bdevs_discovered": 2, 00:27:50.973 "num_base_bdevs_operational": 2, 00:27:50.973 "base_bdevs_list": [ 00:27:50.973 { 00:27:50.973 "name": "spare", 00:27:50.973 "uuid": "1f2427aa-132c-576b-ba1d-06d86fdff682", 00:27:50.973 "is_configured": true, 00:27:50.973 "data_offset": 2048, 00:27:50.973 "data_size": 63488 00:27:50.973 }, 00:27:50.973 { 00:27:50.973 "name": "BaseBdev2", 00:27:50.973 "uuid": "11e18c94-7e4f-5cd3-b89b-dcd9c174f0c3", 00:27:50.973 "is_configured": true, 00:27:50.973 "data_offset": 2048, 00:27:50.973 "data_size": 63488 00:27:50.973 } 00:27:50.973 ] 00:27:50.973 }' 00:27:50.973 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:50.973 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:50.973 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:50.973 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:50.973 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:27:50.973 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:27:50.973 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:50.973 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:50.973 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:50.973 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:50.973 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:50.973 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:51.542 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:51.542 "name": "raid_bdev1", 00:27:51.542 "uuid": "b9a6fee8-43a4-4e43-b1a2-5875fb1df048", 00:27:51.542 "strip_size_kb": 0, 00:27:51.542 "state": "online", 00:27:51.542 "raid_level": "raid1", 00:27:51.542 "superblock": true, 00:27:51.542 "num_base_bdevs": 2, 00:27:51.542 "num_base_bdevs_discovered": 2, 00:27:51.542 "num_base_bdevs_operational": 2, 00:27:51.542 "base_bdevs_list": [ 00:27:51.542 { 00:27:51.542 "name": "spare", 00:27:51.542 "uuid": "1f2427aa-132c-576b-ba1d-06d86fdff682", 00:27:51.542 "is_configured": true, 00:27:51.542 "data_offset": 2048, 00:27:51.542 "data_size": 63488 00:27:51.542 }, 00:27:51.542 { 00:27:51.542 "name": "BaseBdev2", 00:27:51.542 "uuid": "11e18c94-7e4f-5cd3-b89b-dcd9c174f0c3", 00:27:51.542 "is_configured": true, 00:27:51.542 "data_offset": 2048, 00:27:51.542 "data_size": 63488 00:27:51.542 } 00:27:51.542 ] 00:27:51.542 }' 00:27:51.542 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:51.542 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:51.542 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:51.542 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:51.542 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:51.542 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:51.801 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:51.801 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:51.801 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:51.801 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:51.801 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:51.801 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:51.801 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:51.801 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:51.801 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:51.801 02:33:41 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:51.801 02:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:51.801 "name": "raid_bdev1", 00:27:51.801 "uuid": "b9a6fee8-43a4-4e43-b1a2-5875fb1df048", 00:27:51.801 "strip_size_kb": 0, 00:27:51.801 "state": "online", 00:27:51.801 "raid_level": "raid1", 00:27:51.801 "superblock": true, 00:27:51.801 "num_base_bdevs": 2, 00:27:51.801 "num_base_bdevs_discovered": 2, 00:27:51.801 "num_base_bdevs_operational": 2, 00:27:51.801 "base_bdevs_list": [ 00:27:51.801 { 00:27:51.801 "name": "spare", 00:27:51.801 "uuid": "1f2427aa-132c-576b-ba1d-06d86fdff682", 00:27:51.801 "is_configured": true, 00:27:51.801 "data_offset": 2048, 00:27:51.801 "data_size": 63488 00:27:51.801 }, 00:27:51.801 { 00:27:51.801 "name": "BaseBdev2", 00:27:51.801 "uuid": "11e18c94-7e4f-5cd3-b89b-dcd9c174f0c3", 00:27:51.801 "is_configured": true, 00:27:51.801 "data_offset": 2048, 00:27:51.801 "data_size": 63488 00:27:51.801 } 00:27:51.801 ] 00:27:51.801 }' 00:27:51.801 02:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:51.801 02:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:52.736 02:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:52.994 [2024-07-11 02:33:43.324866] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:52.994 [2024-07-11 02:33:43.324900] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:53.252 00:27:53.252 Latency(us) 00:27:53.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:53.252 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:27:53.252 raid_bdev1 : 11.83 98.98 296.94 0.00 0.00 13562.88 286.72 110784.33 00:27:53.252 =================================================================================================================== 00:27:53.252 Total : 98.98 296.94 0.00 0.00 13562.88 286.72 110784.33 00:27:53.252 [2024-07-11 02:33:43.429070] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:53.252 [2024-07-11 02:33:43.429100] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:53.252 [2024-07-11 02:33:43.429173] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:53.252 [2024-07-11 02:33:43.429184] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x193cd30 name raid_bdev1, state offline 00:27:53.252 0 00:27:53.252 02:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:53.252 02:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:27:53.252 02:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:27:53.252 02:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:27:53.252 02:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:27:53.252 02:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks 
/var/tmp/spdk-raid.sock spare /dev/nbd0 00:27:53.252 02:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:53.252 02:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:27:53.252 02:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:53.252 02:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:53.252 02:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:53.252 02:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:27:53.252 02:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:53.252 02:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:53.252 02:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:27:53.816 /dev/nbd0 00:27:53.816 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:53.816 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:53.816 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:53.816 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:27:53.816 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:53.816 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:53.816 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:53.816 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:27:53.816 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:53.816 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:53.816 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:53.816 1+0 records in 00:27:53.816 1+0 records out 00:27:53.817 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285763 s, 14.3 MB/s 00:27:53.817 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:53.817 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:27:53.817 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:53.817 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:53.817 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:27:53.817 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:53.817 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:53.817 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:27:53.817 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:27:53.817 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:27:53.817 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:53.817 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:27:53.817 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:53.817 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:27:53.817 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:53.817 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:27:53.817 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:53.817 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:53.817 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:27:54.075 /dev/nbd1 00:27:54.075 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:54.075 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:54.075 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:27:54.075 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:27:54.075 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:54.075 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:54.075 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:27:54.075 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:27:54.075 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:54.075 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:54.075 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:54.075 1+0 records in 00:27:54.075 1+0 records out 00:27:54.075 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277384 s, 14.8 MB/s 00:27:54.075 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:54.075 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:27:54.075 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:54.075 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:54.075 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:27:54.075 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:54.075 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:54.075 
02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:54.332 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:27:54.332 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:54.332 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:27:54.332 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:54.332 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:27:54.332 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:54.332 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:54.590 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:54.590 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:54.590 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:54.590 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:54.590 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:54.590 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:54.590 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:27:54.590 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:54.590 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:54.590 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:54.590 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:54.590 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:54.590 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:27:54.590 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:54.590 02:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:54.847 02:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:54.847 02:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:54.847 02:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:54.847 02:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:54.847 02:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:54.848 02:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:54.848 02:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:27:54.848 02:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:54.848 02:33:45 
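The block traced above is the data-integrity half of the test: nbd_start_disks exposes the rebuilt 'spare' leg as /dev/nbd0 and the surviving mirror leg BaseBdev2 as /dev/nbd1, waitfornbd polls /proc/partitions (up to 20 attempts) and issues a single 4 KiB O_DIRECT read to prove each device actually serves I/O, and cmp then compares the two legs byte for byte. The -i 1048576 offset skips the raid superblock region on both devices: the array reports a data_offset of 2048 blocks at a 512-byte blocklen, and 2048 * 512 = 1048576 bytes. A minimal by-hand sketch against this run's RPC socket (device nodes and paths exactly as in the log):

  $ rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  $ $rpc -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0
  $ $rpc -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1
  $ grep -w nbd0 /proc/partitions && grep -w nbd1 /proc/partitions    # both devices registered
  $ cmp -i 1048576 /dev/nbd0 /dev/nbd1    # silent exit 0: legs identical past the 1 MiB superblock area
  $ $rpc -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
  $ $rpc -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
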
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:27:54.848 02:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:55.106 02:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:55.364 [2024-07-11 02:33:45.571704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:55.364 [2024-07-11 02:33:45.571749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:55.364 [2024-07-11 02:33:45.571775] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x193ca10 00:27:55.364 [2024-07-11 02:33:45.571788] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:55.364 [2024-07-11 02:33:45.573381] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:55.364 [2024-07-11 02:33:45.573411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:55.364 [2024-07-11 02:33:45.573485] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:55.364 [2024-07-11 02:33:45.573517] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:55.364 [2024-07-11 02:33:45.573617] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:55.364 spare 00:27:55.364 02:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:55.364 02:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:55.364 02:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:55.364 02:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:55.364 02:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:55.364 02:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:55.364 02:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:55.364 02:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:55.364 02:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:55.364 02:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:55.364 02:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:55.364 02:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:55.364 [2024-07-11 02:33:45.673933] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x193b490 00:27:55.364 [2024-07-11 02:33:45.673950] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:55.364 [2024-07-11 02:33:45.674134] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17a1f70 00:27:55.364 [2024-07-11 02:33:45.674274] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x193b490 00:27:55.364 [2024-07-11 02:33:45.674284] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x193b490 00:27:55.364 [2024-07-11 02:33:45.674386] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:55.623 02:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:55.623 "name": "raid_bdev1", 00:27:55.623 "uuid": "b9a6fee8-43a4-4e43-b1a2-5875fb1df048", 00:27:55.623 "strip_size_kb": 0, 00:27:55.623 "state": "online", 00:27:55.623 "raid_level": "raid1", 00:27:55.623 "superblock": true, 00:27:55.623 "num_base_bdevs": 2, 00:27:55.623 "num_base_bdevs_discovered": 2, 00:27:55.623 "num_base_bdevs_operational": 2, 00:27:55.623 "base_bdevs_list": [ 00:27:55.623 { 00:27:55.623 "name": "spare", 00:27:55.623 "uuid": "1f2427aa-132c-576b-ba1d-06d86fdff682", 00:27:55.623 "is_configured": true, 00:27:55.623 "data_offset": 2048, 00:27:55.623 "data_size": 63488 00:27:55.623 }, 00:27:55.623 { 00:27:55.623 "name": "BaseBdev2", 00:27:55.623 "uuid": "11e18c94-7e4f-5cd3-b89b-dcd9c174f0c3", 00:27:55.623 "is_configured": true, 00:27:55.623 "data_offset": 2048, 00:27:55.623 "data_size": 63488 00:27:55.623 } 00:27:55.623 ] 00:27:55.623 }' 00:27:55.623 02:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:55.623 02:33:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:56.190 02:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:56.190 02:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:56.190 02:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:56.190 02:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:56.190 02:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:56.190 02:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:56.190 02:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:56.449 02:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:56.449 "name": "raid_bdev1", 00:27:56.449 "uuid": "b9a6fee8-43a4-4e43-b1a2-5875fb1df048", 00:27:56.449 "strip_size_kb": 0, 00:27:56.449 "state": "online", 00:27:56.449 "raid_level": "raid1", 00:27:56.449 "superblock": true, 00:27:56.449 "num_base_bdevs": 2, 00:27:56.449 "num_base_bdevs_discovered": 2, 00:27:56.449 "num_base_bdevs_operational": 2, 00:27:56.449 "base_bdevs_list": [ 00:27:56.449 { 00:27:56.449 "name": "spare", 00:27:56.449 "uuid": "1f2427aa-132c-576b-ba1d-06d86fdff682", 00:27:56.449 "is_configured": true, 00:27:56.449 "data_offset": 2048, 00:27:56.449 "data_size": 63488 00:27:56.449 }, 00:27:56.449 { 00:27:56.449 "name": "BaseBdev2", 00:27:56.449 "uuid": "11e18c94-7e4f-5cd3-b89b-dcd9c174f0c3", 00:27:56.449 "is_configured": true, 00:27:56.449 "data_offset": 2048, 00:27:56.449 "data_size": 63488 00:27:56.449 } 00:27:56.449 ] 00:27:56.449 }' 00:27:56.449 02:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:56.449 02:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none 
== \n\o\n\e ]] 00:27:56.449 02:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:56.449 02:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:56.449 02:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:56.449 02:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:56.708 02:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:27:56.708 02:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:56.967 [2024-07-11 02:33:47.192363] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:56.967 02:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:56.967 02:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:56.967 02:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:56.967 02:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:56.967 02:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:56.967 02:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:56.967 02:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:56.967 02:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:56.967 02:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:56.967 02:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:56.967 02:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:56.967 02:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:57.225 02:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:57.225 "name": "raid_bdev1", 00:27:57.225 "uuid": "b9a6fee8-43a4-4e43-b1a2-5875fb1df048", 00:27:57.225 "strip_size_kb": 0, 00:27:57.225 "state": "online", 00:27:57.225 "raid_level": "raid1", 00:27:57.225 "superblock": true, 00:27:57.225 "num_base_bdevs": 2, 00:27:57.225 "num_base_bdevs_discovered": 1, 00:27:57.225 "num_base_bdevs_operational": 1, 00:27:57.225 "base_bdevs_list": [ 00:27:57.225 { 00:27:57.225 "name": null, 00:27:57.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:57.225 "is_configured": false, 00:27:57.225 "data_offset": 2048, 00:27:57.225 "data_size": 63488 00:27:57.225 }, 00:27:57.225 { 00:27:57.226 "name": "BaseBdev2", 00:27:57.226 "uuid": "11e18c94-7e4f-5cd3-b89b-dcd9c174f0c3", 00:27:57.226 "is_configured": true, 00:27:57.226 "data_offset": 2048, 00:27:57.226 "data_size": 63488 00:27:57.226 } 00:27:57.226 ] 00:27:57.226 }' 00:27:57.226 02:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:57.226 02:33:47 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:57.793 02:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:58.051 [2024-07-11 02:33:48.327523] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:58.051 [2024-07-11 02:33:48.327659] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:58.051 [2024-07-11 02:33:48.327676] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:58.051 [2024-07-11 02:33:48.327702] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:58.051 [2024-07-11 02:33:48.332787] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17a4760 00:27:58.051 [2024-07-11 02:33:48.334991] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:58.051 02:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:27:58.987 02:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:58.987 02:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:58.987 02:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:58.987 02:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:58.987 02:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:58.987 02:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:58.987 02:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:59.246 02:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:59.246 "name": "raid_bdev1", 00:27:59.246 "uuid": "b9a6fee8-43a4-4e43-b1a2-5875fb1df048", 00:27:59.246 "strip_size_kb": 0, 00:27:59.246 "state": "online", 00:27:59.246 "raid_level": "raid1", 00:27:59.246 "superblock": true, 00:27:59.246 "num_base_bdevs": 2, 00:27:59.246 "num_base_bdevs_discovered": 2, 00:27:59.246 "num_base_bdevs_operational": 2, 00:27:59.246 "process": { 00:27:59.246 "type": "rebuild", 00:27:59.246 "target": "spare", 00:27:59.246 "progress": { 00:27:59.246 "blocks": 24576, 00:27:59.246 "percent": 38 00:27:59.246 } 00:27:59.246 }, 00:27:59.246 "base_bdevs_list": [ 00:27:59.246 { 00:27:59.246 "name": "spare", 00:27:59.246 "uuid": "1f2427aa-132c-576b-ba1d-06d86fdff682", 00:27:59.246 "is_configured": true, 00:27:59.246 "data_offset": 2048, 00:27:59.246 "data_size": 63488 00:27:59.246 }, 00:27:59.246 { 00:27:59.246 "name": "BaseBdev2", 00:27:59.246 "uuid": "11e18c94-7e4f-5cd3-b89b-dcd9c174f0c3", 00:27:59.246 "is_configured": true, 00:27:59.246 "data_offset": 2048, 00:27:59.246 "data_size": 63488 00:27:59.246 } 00:27:59.246 ] 00:27:59.246 }' 00:27:59.246 02:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:59.246 02:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:59.246 
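The progress object in the raid_bdev_info above is easy to sanity-check: each leg carries 63488 data blocks (data_size), 24576 of them have been copied onto 'spare', and the reported value is the truncated ratio of the two; the target check that follows (@190) then asserts that 'spare', the re-added leg, is indeed the rebuild target.

  $ echo 'scale=3; 24576 * 100 / 63488' | bc    # 38.709 -> surfaced (truncated) as "percent": 38
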
02:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:59.504 02:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:59.504 02:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:59.505 [2024-07-11 02:33:49.922147] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:59.763 [2024-07-11 02:33:49.947446] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:59.763 [2024-07-11 02:33:49.947493] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:59.763 [2024-07-11 02:33:49.947509] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:59.763 [2024-07-11 02:33:49.947517] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:59.763 02:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:59.763 02:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:59.763 02:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:59.763 02:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:59.763 02:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:59.763 02:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:59.763 02:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:59.763 02:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:59.763 02:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:59.763 02:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:59.763 02:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:59.763 02:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:00.023 02:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:00.023 "name": "raid_bdev1", 00:28:00.023 "uuid": "b9a6fee8-43a4-4e43-b1a2-5875fb1df048", 00:28:00.023 "strip_size_kb": 0, 00:28:00.023 "state": "online", 00:28:00.023 "raid_level": "raid1", 00:28:00.023 "superblock": true, 00:28:00.023 "num_base_bdevs": 2, 00:28:00.023 "num_base_bdevs_discovered": 1, 00:28:00.023 "num_base_bdevs_operational": 1, 00:28:00.023 "base_bdevs_list": [ 00:28:00.023 { 00:28:00.023 "name": null, 00:28:00.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:00.023 "is_configured": false, 00:28:00.023 "data_offset": 2048, 00:28:00.023 "data_size": 63488 00:28:00.023 }, 00:28:00.023 { 00:28:00.023 "name": "BaseBdev2", 00:28:00.023 "uuid": "11e18c94-7e4f-5cd3-b89b-dcd9c174f0c3", 00:28:00.023 "is_configured": true, 00:28:00.023 "data_offset": 2048, 00:28:00.023 "data_size": 63488 00:28:00.023 } 00:28:00.023 ] 00:28:00.023 }' 00:28:00.023 02:33:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:00.023 02:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:00.591 02:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:00.850 [2024-07-11 02:33:51.026998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:00.850 [2024-07-11 02:33:51.027046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:00.850 [2024-07-11 02:33:51.027069] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x178cdd0 00:28:00.850 [2024-07-11 02:33:51.027081] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:00.850 [2024-07-11 02:33:51.027434] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:00.850 [2024-07-11 02:33:51.027452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:00.850 [2024-07-11 02:33:51.027533] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:00.850 [2024-07-11 02:33:51.027545] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:00.850 [2024-07-11 02:33:51.027556] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:28:00.850 [2024-07-11 02:33:51.027574] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:00.850 [2024-07-11 02:33:51.032724] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x178d060 00:28:00.850 spare 00:28:00.850 [2024-07-11 02:33:51.034146] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:00.850 02:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:28:01.787 02:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:01.787 02:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:01.787 02:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:01.787 02:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:01.787 02:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:01.787 02:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:01.787 02:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:02.046 02:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:02.046 "name": "raid_bdev1", 00:28:02.046 "uuid": "b9a6fee8-43a4-4e43-b1a2-5875fb1df048", 00:28:02.046 "strip_size_kb": 0, 00:28:02.046 "state": "online", 00:28:02.046 "raid_level": "raid1", 00:28:02.046 "superblock": true, 00:28:02.046 "num_base_bdevs": 2, 00:28:02.046 "num_base_bdevs_discovered": 2, 00:28:02.046 "num_base_bdevs_operational": 2, 00:28:02.046 "process": { 00:28:02.046 "type": "rebuild", 00:28:02.046 "target": "spare", 00:28:02.046 "progress": { 00:28:02.046 "blocks": 24576, 
00:28:02.047 "percent": 38 00:28:02.047 } 00:28:02.047 }, 00:28:02.047 "base_bdevs_list": [ 00:28:02.047 { 00:28:02.047 "name": "spare", 00:28:02.047 "uuid": "1f2427aa-132c-576b-ba1d-06d86fdff682", 00:28:02.047 "is_configured": true, 00:28:02.047 "data_offset": 2048, 00:28:02.047 "data_size": 63488 00:28:02.047 }, 00:28:02.047 { 00:28:02.047 "name": "BaseBdev2", 00:28:02.047 "uuid": "11e18c94-7e4f-5cd3-b89b-dcd9c174f0c3", 00:28:02.047 "is_configured": true, 00:28:02.047 "data_offset": 2048, 00:28:02.047 "data_size": 63488 00:28:02.047 } 00:28:02.047 ] 00:28:02.047 }' 00:28:02.047 02:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:02.047 02:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:02.047 02:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:02.047 02:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:02.047 02:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:02.305 [2024-07-11 02:33:52.625690] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:02.305 [2024-07-11 02:33:52.646549] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:02.305 [2024-07-11 02:33:52.646593] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:02.305 [2024-07-11 02:33:52.646608] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:02.305 [2024-07-11 02:33:52.646616] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:02.305 02:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:02.305 02:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:02.305 02:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:02.305 02:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:02.305 02:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:02.305 02:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:02.305 02:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:02.305 02:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:02.305 02:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:02.305 02:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:02.305 02:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:02.305 02:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:02.563 02:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:02.563 "name": "raid_bdev1", 00:28:02.563 "uuid": "b9a6fee8-43a4-4e43-b1a2-5875fb1df048", 00:28:02.563 
"strip_size_kb": 0, 00:28:02.563 "state": "online", 00:28:02.564 "raid_level": "raid1", 00:28:02.564 "superblock": true, 00:28:02.564 "num_base_bdevs": 2, 00:28:02.564 "num_base_bdevs_discovered": 1, 00:28:02.564 "num_base_bdevs_operational": 1, 00:28:02.564 "base_bdevs_list": [ 00:28:02.564 { 00:28:02.564 "name": null, 00:28:02.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.564 "is_configured": false, 00:28:02.564 "data_offset": 2048, 00:28:02.564 "data_size": 63488 00:28:02.564 }, 00:28:02.564 { 00:28:02.564 "name": "BaseBdev2", 00:28:02.564 "uuid": "11e18c94-7e4f-5cd3-b89b-dcd9c174f0c3", 00:28:02.564 "is_configured": true, 00:28:02.564 "data_offset": 2048, 00:28:02.564 "data_size": 63488 00:28:02.564 } 00:28:02.564 ] 00:28:02.564 }' 00:28:02.564 02:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:02.564 02:33:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:03.132 02:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:03.132 02:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:03.132 02:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:03.132 02:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:03.132 02:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:03.132 02:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.132 02:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:03.390 02:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:03.390 "name": "raid_bdev1", 00:28:03.390 "uuid": "b9a6fee8-43a4-4e43-b1a2-5875fb1df048", 00:28:03.390 "strip_size_kb": 0, 00:28:03.390 "state": "online", 00:28:03.390 "raid_level": "raid1", 00:28:03.390 "superblock": true, 00:28:03.390 "num_base_bdevs": 2, 00:28:03.390 "num_base_bdevs_discovered": 1, 00:28:03.390 "num_base_bdevs_operational": 1, 00:28:03.390 "base_bdevs_list": [ 00:28:03.390 { 00:28:03.390 "name": null, 00:28:03.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.390 "is_configured": false, 00:28:03.390 "data_offset": 2048, 00:28:03.390 "data_size": 63488 00:28:03.390 }, 00:28:03.390 { 00:28:03.390 "name": "BaseBdev2", 00:28:03.390 "uuid": "11e18c94-7e4f-5cd3-b89b-dcd9c174f0c3", 00:28:03.390 "is_configured": true, 00:28:03.390 "data_offset": 2048, 00:28:03.390 "data_size": 63488 00:28:03.390 } 00:28:03.390 ] 00:28:03.390 }' 00:28:03.390 02:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:03.648 02:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:03.648 02:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:03.648 02:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:03.648 02:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:28:03.907 02:33:54 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:03.907 [2024-07-11 02:33:54.319900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:03.907 [2024-07-11 02:33:54.319949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:03.907 [2024-07-11 02:33:54.319970] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1940220 00:28:03.907 [2024-07-11 02:33:54.319983] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:03.907 [2024-07-11 02:33:54.320298] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:03.907 [2024-07-11 02:33:54.320315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:03.907 [2024-07-11 02:33:54.320376] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:03.907 [2024-07-11 02:33:54.320388] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:03.907 [2024-07-11 02:33:54.320399] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:03.907 BaseBdev1 00:28:04.197 02:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:28:05.145 02:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:05.145 02:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:05.145 02:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:05.145 02:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:05.145 02:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:05.145 02:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:05.145 02:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:05.145 02:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:05.145 02:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:05.145 02:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:05.145 02:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:05.145 02:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:05.404 02:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:05.404 "name": "raid_bdev1", 00:28:05.404 "uuid": "b9a6fee8-43a4-4e43-b1a2-5875fb1df048", 00:28:05.404 "strip_size_kb": 0, 00:28:05.404 "state": "online", 00:28:05.404 "raid_level": "raid1", 00:28:05.404 "superblock": true, 00:28:05.404 "num_base_bdevs": 2, 00:28:05.404 "num_base_bdevs_discovered": 1, 00:28:05.404 "num_base_bdevs_operational": 1, 00:28:05.404 "base_bdevs_list": [ 00:28:05.404 { 00:28:05.404 "name": null, 00:28:05.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.404 
"is_configured": false, 00:28:05.404 "data_offset": 2048, 00:28:05.404 "data_size": 63488 00:28:05.404 }, 00:28:05.404 { 00:28:05.404 "name": "BaseBdev2", 00:28:05.404 "uuid": "11e18c94-7e4f-5cd3-b89b-dcd9c174f0c3", 00:28:05.404 "is_configured": true, 00:28:05.404 "data_offset": 2048, 00:28:05.404 "data_size": 63488 00:28:05.404 } 00:28:05.404 ] 00:28:05.404 }' 00:28:05.404 02:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:05.404 02:33:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:05.970 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:05.970 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:05.970 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:05.970 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:05.970 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:05.970 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:05.970 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:06.229 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:06.229 "name": "raid_bdev1", 00:28:06.229 "uuid": "b9a6fee8-43a4-4e43-b1a2-5875fb1df048", 00:28:06.229 "strip_size_kb": 0, 00:28:06.229 "state": "online", 00:28:06.229 "raid_level": "raid1", 00:28:06.229 "superblock": true, 00:28:06.229 "num_base_bdevs": 2, 00:28:06.229 "num_base_bdevs_discovered": 1, 00:28:06.229 "num_base_bdevs_operational": 1, 00:28:06.229 "base_bdevs_list": [ 00:28:06.229 { 00:28:06.229 "name": null, 00:28:06.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:06.229 "is_configured": false, 00:28:06.229 "data_offset": 2048, 00:28:06.229 "data_size": 63488 00:28:06.229 }, 00:28:06.229 { 00:28:06.229 "name": "BaseBdev2", 00:28:06.229 "uuid": "11e18c94-7e4f-5cd3-b89b-dcd9c174f0c3", 00:28:06.229 "is_configured": true, 00:28:06.229 "data_offset": 2048, 00:28:06.229 "data_size": 63488 00:28:06.229 } 00:28:06.229 ] 00:28:06.229 }' 00:28:06.229 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:06.229 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:06.229 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:06.229 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:06.229 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:06.229 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:28:06.229 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:06.229 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:06.229 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:06.229 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:06.229 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:06.229 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:06.229 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:06.229 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:06.229 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:28:06.229 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:06.487 [2024-07-11 02:33:56.742773] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:06.487 [2024-07-11 02:33:56.742886] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:06.487 [2024-07-11 02:33:56.742901] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:06.487 request: 00:28:06.487 { 00:28:06.487 "base_bdev": "BaseBdev1", 00:28:06.487 "raid_bdev": "raid_bdev1", 00:28:06.487 "method": "bdev_raid_add_base_bdev", 00:28:06.487 "req_id": 1 00:28:06.487 } 00:28:06.487 Got JSON-RPC error response 00:28:06.487 response: 00:28:06.487 { 00:28:06.487 "code": -22, 00:28:06.487 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:06.487 } 00:28:06.487 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:28:06.487 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:06.487 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:06.487 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:06.487 02:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:28:07.423 02:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:07.423 02:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:07.423 02:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:07.423 02:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:07.423 02:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:07.423 02:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:07.423 02:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
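The @776 step above is a deliberate negative test: the NOT wrapper from common/autotest_common.sh passes only when the wrapped command exits non-zero, and re-adding BaseBdev1 has to be rejected because its on-disk superblock sequence number (1) is stale next to the live array's (5) and the raid superblock no longer contains its uuid. Reproducing the check by hand would look roughly like the sketch below (paths as in this run; rpc.py exits non-zero whenever the JSON-RPC response carries an error, which is exactly what NOT() turns into a pass):

  $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
  Got JSON-RPC error response
  response:
  {
    "code": -22,
    "message": "Failed to add base bdev to RAID bdev: Invalid argument"
  }
  $ echo $?    # 1

The verify_raid_bdev_state trace that follows confirms the failed re-add had no side effects: raid_bdev1 is still online, raid1, with one discovered and one operational base bdev.
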
00:28:07.423 02:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:07.423 02:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:07.423 02:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:07.423 02:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:07.423 02:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:07.682 02:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:07.682 "name": "raid_bdev1", 00:28:07.682 "uuid": "b9a6fee8-43a4-4e43-b1a2-5875fb1df048", 00:28:07.682 "strip_size_kb": 0, 00:28:07.682 "state": "online", 00:28:07.682 "raid_level": "raid1", 00:28:07.682 "superblock": true, 00:28:07.682 "num_base_bdevs": 2, 00:28:07.682 "num_base_bdevs_discovered": 1, 00:28:07.682 "num_base_bdevs_operational": 1, 00:28:07.682 "base_bdevs_list": [ 00:28:07.682 { 00:28:07.682 "name": null, 00:28:07.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:07.682 "is_configured": false, 00:28:07.682 "data_offset": 2048, 00:28:07.682 "data_size": 63488 00:28:07.682 }, 00:28:07.682 { 00:28:07.682 "name": "BaseBdev2", 00:28:07.682 "uuid": "11e18c94-7e4f-5cd3-b89b-dcd9c174f0c3", 00:28:07.682 "is_configured": true, 00:28:07.682 "data_offset": 2048, 00:28:07.682 "data_size": 63488 00:28:07.682 } 00:28:07.682 ] 00:28:07.682 }' 00:28:07.682 02:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:07.682 02:33:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:08.251 02:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:08.251 02:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:08.251 02:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:08.251 02:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:08.251 02:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:08.251 02:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:08.251 02:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:08.509 02:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:08.509 "name": "raid_bdev1", 00:28:08.509 "uuid": "b9a6fee8-43a4-4e43-b1a2-5875fb1df048", 00:28:08.509 "strip_size_kb": 0, 00:28:08.509 "state": "online", 00:28:08.509 "raid_level": "raid1", 00:28:08.509 "superblock": true, 00:28:08.509 "num_base_bdevs": 2, 00:28:08.509 "num_base_bdevs_discovered": 1, 00:28:08.509 "num_base_bdevs_operational": 1, 00:28:08.509 "base_bdevs_list": [ 00:28:08.509 { 00:28:08.509 "name": null, 00:28:08.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.509 "is_configured": false, 00:28:08.509 "data_offset": 2048, 00:28:08.509 "data_size": 63488 00:28:08.509 }, 00:28:08.509 { 00:28:08.509 "name": "BaseBdev2", 00:28:08.509 "uuid": "11e18c94-7e4f-5cd3-b89b-dcd9c174f0c3", 
00:28:08.509 "is_configured": true, 00:28:08.509 "data_offset": 2048, 00:28:08.509 "data_size": 63488 00:28:08.509 } 00:28:08.509 ] 00:28:08.509 }' 00:28:08.509 02:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:08.509 02:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:08.509 02:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:08.767 02:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:08.767 02:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 2018900 00:28:08.767 02:33:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@948 -- # '[' -z 2018900 ']' 00:28:08.767 02:33:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # kill -0 2018900 00:28:08.767 02:33:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # uname 00:28:08.767 02:33:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:08.767 02:33:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2018900 00:28:08.767 02:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:08.767 02:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:08.767 02:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2018900' 00:28:08.767 killing process with pid 2018900 00:28:08.767 02:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # kill 2018900 00:28:08.767 Received shutdown signal, test time was about 27.387220 seconds 00:28:08.767 00:28:08.767 Latency(us) 00:28:08.767 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.767 =================================================================================================================== 00:28:08.767 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:08.767 [2024-07-11 02:33:59.019703] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:08.767 [2024-07-11 02:33:59.019803] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:08.767 [2024-07-11 02:33:59.019848] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:08.767 [2024-07-11 02:33:59.019861] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x193b490 name raid_bdev1, state offline 00:28:08.767 02:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # wait 2018900 00:28:08.767 [2024-07-11 02:33:59.043105] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:28:09.026 00:28:09.026 real 0m32.276s 00:28:09.026 user 0m51.425s 00:28:09.026 sys 0m4.793s 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:09.026 ************************************ 00:28:09.026 END TEST raid_rebuild_test_sb_io 00:28:09.026 ************************************ 00:28:09.026 02:33:59 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:28:09.026 
02:33:59 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:28:09.026 02:33:59 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:28:09.026 02:33:59 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:28:09.026 02:33:59 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:09.026 02:33:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:09.026 ************************************ 00:28:09.026 START TEST raid_rebuild_test 00:28:09.026 ************************************ 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 false false true 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:28:09.026 02:33:59 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=2023466 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 2023466 /var/tmp/spdk-raid.sock 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 2023466 ']' 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:09.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:09.026 02:33:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.026 [2024-07-11 02:33:59.419249] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:28:09.026 [2024-07-11 02:33:59.419320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2023466 ] 00:28:09.026 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:09.026 Zero copy mechanism will not be used. 
00:28:09.285 [2024-07-11 02:33:59.556693] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.285 [2024-07-11 02:33:59.606508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.285 [2024-07-11 02:33:59.669603] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:09.285 [2024-07-11 02:33:59.669629] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:10.221 02:34:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:10.221 02:34:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:28:10.221 02:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:10.221 02:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:10.221 BaseBdev1_malloc 00:28:10.221 02:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:10.481 [2024-07-11 02:34:00.763128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:10.481 [2024-07-11 02:34:00.763175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:10.481 [2024-07-11 02:34:00.763197] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x154fee0 00:28:10.481 [2024-07-11 02:34:00.763209] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:10.481 [2024-07-11 02:34:00.764850] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:10.481 [2024-07-11 02:34:00.764882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:10.481 BaseBdev1 00:28:10.481 02:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:10.481 02:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:10.739 BaseBdev2_malloc 00:28:10.739 02:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:10.998 [2024-07-11 02:34:01.289303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:10.998 [2024-07-11 02:34:01.289353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:10.998 [2024-07-11 02:34:01.289372] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1551870 00:28:10.998 [2024-07-11 02:34:01.289384] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:10.998 [2024-07-11 02:34:01.290771] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:10.998 [2024-07-11 02:34:01.290799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:10.998 BaseBdev2 00:28:10.998 02:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:10.998 02:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:11.257 BaseBdev3_malloc 00:28:11.257 02:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:28:11.515 [2024-07-11 02:34:01.815236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:28:11.515 [2024-07-11 02:34:01.815284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:11.515 [2024-07-11 02:34:01.815304] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1548b20 00:28:11.515 [2024-07-11 02:34:01.815316] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:11.515 [2024-07-11 02:34:01.816706] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:11.515 [2024-07-11 02:34:01.816736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:11.515 BaseBdev3 00:28:11.515 02:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:11.515 02:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:28:11.774 BaseBdev4_malloc 00:28:11.774 02:34:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:28:12.032 [2024-07-11 02:34:02.329128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:28:12.032 [2024-07-11 02:34:02.329172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:12.032 [2024-07-11 02:34:02.329193] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x154c8d0 00:28:12.032 [2024-07-11 02:34:02.329205] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:12.032 [2024-07-11 02:34:02.330511] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:12.032 [2024-07-11 02:34:02.330539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:12.032 BaseBdev4 00:28:12.032 02:34:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:28:12.291 spare_malloc 00:28:12.291 02:34:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:12.549 spare_delay 00:28:12.549 02:34:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:12.808 [2024-07-11 02:34:03.079497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:12.809 [2024-07-11 02:34:03.079542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:12.809 [2024-07-11 02:34:03.079565] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x139e8a0 00:28:12.809 [2024-07-11 02:34:03.079577] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:12.809 [2024-07-11 02:34:03.081034] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:12.809 [2024-07-11 02:34:03.081064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:12.809 spare 00:28:12.809 02:34:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:28:13.067 [2024-07-11 02:34:03.340210] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:13.067 [2024-07-11 02:34:03.341403] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:13.067 [2024-07-11 02:34:03.341454] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:13.067 [2024-07-11 02:34:03.341501] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:13.067 [2024-07-11 02:34:03.341578] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x139f300 00:28:13.067 [2024-07-11 02:34:03.341588] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:13.067 [2024-07-11 02:34:03.341795] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1540dd0 00:28:13.067 [2024-07-11 02:34:03.341938] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x139f300 00:28:13.067 [2024-07-11 02:34:03.341948] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x139f300 00:28:13.067 [2024-07-11 02:34:03.342056] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:13.067 02:34:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:13.067 02:34:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:13.067 02:34:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:13.067 02:34:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:13.067 02:34:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:13.067 02:34:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:13.067 02:34:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:13.067 02:34:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:13.067 02:34:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:13.067 02:34:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:13.067 02:34:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:13.067 02:34:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:13.326 02:34:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:13.326 "name": "raid_bdev1", 00:28:13.326 "uuid": "ab7142da-9aa5-478f-bd96-27d8a62f59e9", 00:28:13.326 "strip_size_kb": 0, 00:28:13.326 
"state": "online", 00:28:13.326 "raid_level": "raid1", 00:28:13.326 "superblock": false, 00:28:13.326 "num_base_bdevs": 4, 00:28:13.326 "num_base_bdevs_discovered": 4, 00:28:13.326 "num_base_bdevs_operational": 4, 00:28:13.326 "base_bdevs_list": [ 00:28:13.326 { 00:28:13.326 "name": "BaseBdev1", 00:28:13.326 "uuid": "b26d98be-3218-51a3-b888-77707a89190f", 00:28:13.326 "is_configured": true, 00:28:13.326 "data_offset": 0, 00:28:13.326 "data_size": 65536 00:28:13.326 }, 00:28:13.326 { 00:28:13.326 "name": "BaseBdev2", 00:28:13.326 "uuid": "d0aa7876-b890-5d56-8a24-057841d3df5b", 00:28:13.326 "is_configured": true, 00:28:13.326 "data_offset": 0, 00:28:13.326 "data_size": 65536 00:28:13.326 }, 00:28:13.326 { 00:28:13.326 "name": "BaseBdev3", 00:28:13.326 "uuid": "1239ec31-ac18-5cda-b525-2d3e9dba750a", 00:28:13.326 "is_configured": true, 00:28:13.326 "data_offset": 0, 00:28:13.326 "data_size": 65536 00:28:13.326 }, 00:28:13.326 { 00:28:13.326 "name": "BaseBdev4", 00:28:13.326 "uuid": "453964d8-5bc0-5652-9eda-16de0adbf100", 00:28:13.326 "is_configured": true, 00:28:13.326 "data_offset": 0, 00:28:13.326 "data_size": 65536 00:28:13.326 } 00:28:13.326 ] 00:28:13.326 }' 00:28:13.326 02:34:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:13.326 02:34:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:13.893 02:34:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:13.893 02:34:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:28:14.152 [2024-07-11 02:34:04.463481] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:14.152 02:34:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:28:14.152 02:34:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:14.152 02:34:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:14.410 02:34:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:28:14.410 02:34:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:28:14.410 02:34:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:28:14.410 02:34:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:28:14.410 02:34:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:28:14.410 02:34:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:14.410 02:34:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:28:14.410 02:34:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:14.410 02:34:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:14.410 02:34:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:14.410 02:34:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:28:14.410 02:34:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:14.410 02:34:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:14.410 
02:34:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:14.979 [2024-07-11 02:34:05.153048] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x154e800 00:28:14.979 /dev/nbd0 00:28:14.979 02:34:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:14.979 02:34:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:14.979 02:34:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:14.979 02:34:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:28:14.979 02:34:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:14.979 02:34:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:14.979 02:34:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:14.979 02:34:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:28:14.979 02:34:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:14.979 02:34:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:14.979 02:34:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:14.979 1+0 records in 00:28:14.979 1+0 records out 00:28:14.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284132 s, 14.4 MB/s 00:28:14.979 02:34:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:14.979 02:34:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:28:14.979 02:34:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:14.979 02:34:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:14.979 02:34:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:28:14.979 02:34:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:14.979 02:34:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:14.979 02:34:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:28:14.979 02:34:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:28:14.979 02:34:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:28:23.116 65536+0 records in 00:28:23.116 65536+0 records out 00:28:23.116 33554432 bytes (34 MB, 32 MiB) copied, 7.89804 s, 4.2 MB/s 00:28:23.116 02:34:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:23.116 02:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:23.116 02:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:23.116 02:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:23.116 02:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:28:23.116 02:34:13 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:23.116 02:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:23.116 [2024-07-11 02:34:13.385516] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:23.116 02:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:23.116 02:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:23.116 02:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:23.116 02:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:23.116 02:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:23.116 02:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:23.116 02:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:23.116 02:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:23.116 02:34:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:23.373 [2024-07-11 02:34:13.618204] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:23.373 02:34:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:23.373 02:34:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:23.373 02:34:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:23.374 02:34:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:23.374 02:34:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:23.374 02:34:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:23.374 02:34:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:23.374 02:34:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:23.374 02:34:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:23.374 02:34:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:23.374 02:34:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:23.374 02:34:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:23.632 02:34:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:23.632 "name": "raid_bdev1", 00:28:23.632 "uuid": "ab7142da-9aa5-478f-bd96-27d8a62f59e9", 00:28:23.632 "strip_size_kb": 0, 00:28:23.632 "state": "online", 00:28:23.632 "raid_level": "raid1", 00:28:23.632 "superblock": false, 00:28:23.632 "num_base_bdevs": 4, 00:28:23.632 "num_base_bdevs_discovered": 3, 00:28:23.632 "num_base_bdevs_operational": 3, 00:28:23.632 "base_bdevs_list": [ 00:28:23.632 { 00:28:23.632 "name": null, 00:28:23.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:23.632 "is_configured": false, 00:28:23.632 "data_offset": 0, 00:28:23.632 "data_size": 
65536 00:28:23.632 }, 00:28:23.632 { 00:28:23.632 "name": "BaseBdev2", 00:28:23.632 "uuid": "d0aa7876-b890-5d56-8a24-057841d3df5b", 00:28:23.632 "is_configured": true, 00:28:23.632 "data_offset": 0, 00:28:23.632 "data_size": 65536 00:28:23.632 }, 00:28:23.632 { 00:28:23.632 "name": "BaseBdev3", 00:28:23.632 "uuid": "1239ec31-ac18-5cda-b525-2d3e9dba750a", 00:28:23.632 "is_configured": true, 00:28:23.632 "data_offset": 0, 00:28:23.632 "data_size": 65536 00:28:23.632 }, 00:28:23.632 { 00:28:23.632 "name": "BaseBdev4", 00:28:23.632 "uuid": "453964d8-5bc0-5652-9eda-16de0adbf100", 00:28:23.632 "is_configured": true, 00:28:23.632 "data_offset": 0, 00:28:23.632 "data_size": 65536 00:28:23.632 } 00:28:23.632 ] 00:28:23.632 }' 00:28:23.632 02:34:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:23.632 02:34:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.200 02:34:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:24.459 [2024-07-11 02:34:14.717120] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:24.459 [2024-07-11 02:34:14.721039] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1542e40 00:28:24.459 [2024-07-11 02:34:14.723340] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:24.459 02:34:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:28:25.397 02:34:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:25.397 02:34:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:25.397 02:34:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:25.397 02:34:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:25.397 02:34:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:25.397 02:34:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:25.397 02:34:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:25.656 02:34:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:25.656 "name": "raid_bdev1", 00:28:25.656 "uuid": "ab7142da-9aa5-478f-bd96-27d8a62f59e9", 00:28:25.656 "strip_size_kb": 0, 00:28:25.656 "state": "online", 00:28:25.656 "raid_level": "raid1", 00:28:25.656 "superblock": false, 00:28:25.656 "num_base_bdevs": 4, 00:28:25.656 "num_base_bdevs_discovered": 4, 00:28:25.656 "num_base_bdevs_operational": 4, 00:28:25.656 "process": { 00:28:25.656 "type": "rebuild", 00:28:25.656 "target": "spare", 00:28:25.656 "progress": { 00:28:25.656 "blocks": 24576, 00:28:25.656 "percent": 37 00:28:25.656 } 00:28:25.656 }, 00:28:25.656 "base_bdevs_list": [ 00:28:25.656 { 00:28:25.656 "name": "spare", 00:28:25.656 "uuid": "785b3605-c5cf-5e8e-87e8-9c802ede9c08", 00:28:25.656 "is_configured": true, 00:28:25.656 "data_offset": 0, 00:28:25.656 "data_size": 65536 00:28:25.656 }, 00:28:25.656 { 00:28:25.656 "name": "BaseBdev2", 00:28:25.656 "uuid": "d0aa7876-b890-5d56-8a24-057841d3df5b", 00:28:25.656 "is_configured": true, 00:28:25.656 "data_offset": 0, 
00:28:25.656 "data_size": 65536 00:28:25.656 }, 00:28:25.656 { 00:28:25.656 "name": "BaseBdev3", 00:28:25.656 "uuid": "1239ec31-ac18-5cda-b525-2d3e9dba750a", 00:28:25.656 "is_configured": true, 00:28:25.656 "data_offset": 0, 00:28:25.656 "data_size": 65536 00:28:25.656 }, 00:28:25.656 { 00:28:25.656 "name": "BaseBdev4", 00:28:25.656 "uuid": "453964d8-5bc0-5652-9eda-16de0adbf100", 00:28:25.656 "is_configured": true, 00:28:25.656 "data_offset": 0, 00:28:25.656 "data_size": 65536 00:28:25.656 } 00:28:25.656 ] 00:28:25.656 }' 00:28:25.656 02:34:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:25.656 02:34:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:25.656 02:34:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:25.915 02:34:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:25.915 02:34:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:25.915 [2024-07-11 02:34:16.309272] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:25.915 [2024-07-11 02:34:16.336274] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:25.915 [2024-07-11 02:34:16.336319] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:25.915 [2024-07-11 02:34:16.336336] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:25.915 [2024-07-11 02:34:16.336345] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:26.174 02:34:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:26.174 02:34:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:26.175 02:34:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:26.175 02:34:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:26.175 02:34:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:26.175 02:34:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:26.175 02:34:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:26.175 02:34:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:26.175 02:34:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:26.175 02:34:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:26.175 02:34:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:26.175 02:34:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:26.434 02:34:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:26.434 "name": "raid_bdev1", 00:28:26.434 "uuid": "ab7142da-9aa5-478f-bd96-27d8a62f59e9", 00:28:26.434 "strip_size_kb": 0, 00:28:26.434 "state": "online", 00:28:26.434 "raid_level": "raid1", 00:28:26.434 "superblock": false, 00:28:26.434 "num_base_bdevs": 
4, 00:28:26.434 "num_base_bdevs_discovered": 3, 00:28:26.434 "num_base_bdevs_operational": 3, 00:28:26.434 "base_bdevs_list": [ 00:28:26.434 { 00:28:26.434 "name": null, 00:28:26.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:26.434 "is_configured": false, 00:28:26.434 "data_offset": 0, 00:28:26.434 "data_size": 65536 00:28:26.434 }, 00:28:26.434 { 00:28:26.434 "name": "BaseBdev2", 00:28:26.434 "uuid": "d0aa7876-b890-5d56-8a24-057841d3df5b", 00:28:26.434 "is_configured": true, 00:28:26.434 "data_offset": 0, 00:28:26.434 "data_size": 65536 00:28:26.434 }, 00:28:26.434 { 00:28:26.434 "name": "BaseBdev3", 00:28:26.434 "uuid": "1239ec31-ac18-5cda-b525-2d3e9dba750a", 00:28:26.434 "is_configured": true, 00:28:26.434 "data_offset": 0, 00:28:26.434 "data_size": 65536 00:28:26.434 }, 00:28:26.434 { 00:28:26.434 "name": "BaseBdev4", 00:28:26.434 "uuid": "453964d8-5bc0-5652-9eda-16de0adbf100", 00:28:26.434 "is_configured": true, 00:28:26.435 "data_offset": 0, 00:28:26.435 "data_size": 65536 00:28:26.435 } 00:28:26.435 ] 00:28:26.435 }' 00:28:26.435 02:34:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:26.435 02:34:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.003 02:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:27.003 02:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:27.003 02:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:27.003 02:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:27.003 02:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:27.003 02:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:27.003 02:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:27.262 02:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:27.262 "name": "raid_bdev1", 00:28:27.262 "uuid": "ab7142da-9aa5-478f-bd96-27d8a62f59e9", 00:28:27.262 "strip_size_kb": 0, 00:28:27.262 "state": "online", 00:28:27.262 "raid_level": "raid1", 00:28:27.262 "superblock": false, 00:28:27.262 "num_base_bdevs": 4, 00:28:27.262 "num_base_bdevs_discovered": 3, 00:28:27.262 "num_base_bdevs_operational": 3, 00:28:27.262 "base_bdevs_list": [ 00:28:27.262 { 00:28:27.262 "name": null, 00:28:27.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:27.262 "is_configured": false, 00:28:27.262 "data_offset": 0, 00:28:27.262 "data_size": 65536 00:28:27.262 }, 00:28:27.262 { 00:28:27.262 "name": "BaseBdev2", 00:28:27.262 "uuid": "d0aa7876-b890-5d56-8a24-057841d3df5b", 00:28:27.262 "is_configured": true, 00:28:27.262 "data_offset": 0, 00:28:27.262 "data_size": 65536 00:28:27.262 }, 00:28:27.262 { 00:28:27.262 "name": "BaseBdev3", 00:28:27.262 "uuid": "1239ec31-ac18-5cda-b525-2d3e9dba750a", 00:28:27.262 "is_configured": true, 00:28:27.262 "data_offset": 0, 00:28:27.262 "data_size": 65536 00:28:27.262 }, 00:28:27.262 { 00:28:27.262 "name": "BaseBdev4", 00:28:27.262 "uuid": "453964d8-5bc0-5652-9eda-16de0adbf100", 00:28:27.262 "is_configured": true, 00:28:27.262 "data_offset": 0, 00:28:27.262 "data_size": 65536 00:28:27.262 } 00:28:27.262 ] 00:28:27.262 }' 00:28:27.262 02:34:17 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:27.262 02:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:27.262 02:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:27.262 02:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:27.262 02:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:27.521 [2024-07-11 02:34:17.828905] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:27.521 [2024-07-11 02:34:17.833409] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x154e3f0 00:28:27.521 [2024-07-11 02:34:17.834933] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:27.521 02:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:28.459 02:34:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:28.459 02:34:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:28.459 02:34:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:28.459 02:34:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:28.459 02:34:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:28.459 02:34:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:28.459 02:34:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:28.717 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:28.717 "name": "raid_bdev1", 00:28:28.717 "uuid": "ab7142da-9aa5-478f-bd96-27d8a62f59e9", 00:28:28.718 "strip_size_kb": 0, 00:28:28.718 "state": "online", 00:28:28.718 "raid_level": "raid1", 00:28:28.718 "superblock": false, 00:28:28.718 "num_base_bdevs": 4, 00:28:28.718 "num_base_bdevs_discovered": 4, 00:28:28.718 "num_base_bdevs_operational": 4, 00:28:28.718 "process": { 00:28:28.718 "type": "rebuild", 00:28:28.718 "target": "spare", 00:28:28.718 "progress": { 00:28:28.718 "blocks": 22528, 00:28:28.718 "percent": 34 00:28:28.718 } 00:28:28.718 }, 00:28:28.718 "base_bdevs_list": [ 00:28:28.718 { 00:28:28.718 "name": "spare", 00:28:28.718 "uuid": "785b3605-c5cf-5e8e-87e8-9c802ede9c08", 00:28:28.718 "is_configured": true, 00:28:28.718 "data_offset": 0, 00:28:28.718 "data_size": 65536 00:28:28.718 }, 00:28:28.718 { 00:28:28.718 "name": "BaseBdev2", 00:28:28.718 "uuid": "d0aa7876-b890-5d56-8a24-057841d3df5b", 00:28:28.718 "is_configured": true, 00:28:28.718 "data_offset": 0, 00:28:28.718 "data_size": 65536 00:28:28.718 }, 00:28:28.718 { 00:28:28.718 "name": "BaseBdev3", 00:28:28.718 "uuid": "1239ec31-ac18-5cda-b525-2d3e9dba750a", 00:28:28.718 "is_configured": true, 00:28:28.718 "data_offset": 0, 00:28:28.718 "data_size": 65536 00:28:28.718 }, 00:28:28.718 { 00:28:28.718 "name": "BaseBdev4", 00:28:28.718 "uuid": "453964d8-5bc0-5652-9eda-16de0adbf100", 00:28:28.718 "is_configured": true, 00:28:28.718 "data_offset": 0, 00:28:28.718 "data_size": 65536 00:28:28.718 } 
00:28:28.718 ] 00:28:28.718 }' 00:28:28.718 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:28.718 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:28.718 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:28.977 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:28.977 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:28:28.977 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:28:28.977 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:28:28.977 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:28:28.977 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:28:28.977 [2024-07-11 02:34:19.370372] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:29.236 [2024-07-11 02:34:19.447417] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x154e3f0 00:28:29.236 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:28:29.236 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:28:29.236 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:29.236 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:29.236 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:29.236 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:29.236 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:29.236 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:29.236 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:29.495 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:29.495 "name": "raid_bdev1", 00:28:29.495 "uuid": "ab7142da-9aa5-478f-bd96-27d8a62f59e9", 00:28:29.495 "strip_size_kb": 0, 00:28:29.495 "state": "online", 00:28:29.495 "raid_level": "raid1", 00:28:29.495 "superblock": false, 00:28:29.495 "num_base_bdevs": 4, 00:28:29.495 "num_base_bdevs_discovered": 3, 00:28:29.495 "num_base_bdevs_operational": 3, 00:28:29.495 "process": { 00:28:29.495 "type": "rebuild", 00:28:29.495 "target": "spare", 00:28:29.495 "progress": { 00:28:29.495 "blocks": 34816, 00:28:29.495 "percent": 53 00:28:29.495 } 00:28:29.495 }, 00:28:29.495 "base_bdevs_list": [ 00:28:29.495 { 00:28:29.495 "name": "spare", 00:28:29.495 "uuid": "785b3605-c5cf-5e8e-87e8-9c802ede9c08", 00:28:29.495 "is_configured": true, 00:28:29.495 "data_offset": 0, 00:28:29.495 "data_size": 65536 00:28:29.495 }, 00:28:29.495 { 00:28:29.495 "name": null, 00:28:29.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:29.495 "is_configured": false, 00:28:29.495 "data_offset": 0, 00:28:29.495 "data_size": 65536 00:28:29.495 }, 
00:28:29.495 { 00:28:29.495 "name": "BaseBdev3", 00:28:29.495 "uuid": "1239ec31-ac18-5cda-b525-2d3e9dba750a", 00:28:29.495 "is_configured": true, 00:28:29.495 "data_offset": 0, 00:28:29.495 "data_size": 65536 00:28:29.495 }, 00:28:29.495 { 00:28:29.495 "name": "BaseBdev4", 00:28:29.495 "uuid": "453964d8-5bc0-5652-9eda-16de0adbf100", 00:28:29.495 "is_configured": true, 00:28:29.495 "data_offset": 0, 00:28:29.495 "data_size": 65536 00:28:29.495 } 00:28:29.495 ] 00:28:29.495 }' 00:28:29.495 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:29.495 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:29.495 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:29.495 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:29.495 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=912 00:28:29.495 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:29.495 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:29.495 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:29.495 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:29.495 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:29.495 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:29.495 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:29.495 02:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:29.754 02:34:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:29.754 "name": "raid_bdev1", 00:28:29.754 "uuid": "ab7142da-9aa5-478f-bd96-27d8a62f59e9", 00:28:29.754 "strip_size_kb": 0, 00:28:29.754 "state": "online", 00:28:29.754 "raid_level": "raid1", 00:28:29.754 "superblock": false, 00:28:29.754 "num_base_bdevs": 4, 00:28:29.754 "num_base_bdevs_discovered": 3, 00:28:29.754 "num_base_bdevs_operational": 3, 00:28:29.754 "process": { 00:28:29.754 "type": "rebuild", 00:28:29.754 "target": "spare", 00:28:29.754 "progress": { 00:28:29.754 "blocks": 43008, 00:28:29.754 "percent": 65 00:28:29.754 } 00:28:29.754 }, 00:28:29.754 "base_bdevs_list": [ 00:28:29.754 { 00:28:29.754 "name": "spare", 00:28:29.754 "uuid": "785b3605-c5cf-5e8e-87e8-9c802ede9c08", 00:28:29.754 "is_configured": true, 00:28:29.754 "data_offset": 0, 00:28:29.754 "data_size": 65536 00:28:29.754 }, 00:28:29.754 { 00:28:29.754 "name": null, 00:28:29.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:29.754 "is_configured": false, 00:28:29.754 "data_offset": 0, 00:28:29.754 "data_size": 65536 00:28:29.754 }, 00:28:29.754 { 00:28:29.754 "name": "BaseBdev3", 00:28:29.754 "uuid": "1239ec31-ac18-5cda-b525-2d3e9dba750a", 00:28:29.754 "is_configured": true, 00:28:29.754 "data_offset": 0, 00:28:29.754 "data_size": 65536 00:28:29.754 }, 00:28:29.754 { 00:28:29.754 "name": "BaseBdev4", 00:28:29.754 "uuid": "453964d8-5bc0-5652-9eda-16de0adbf100", 00:28:29.754 "is_configured": true, 00:28:29.754 "data_offset": 0, 00:28:29.755 "data_size": 
65536 00:28:29.755 } 00:28:29.755 ] 00:28:29.755 }' 00:28:29.755 02:34:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:29.755 02:34:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:29.755 02:34:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:29.755 02:34:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:29.755 02:34:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:30.691 [2024-07-11 02:34:21.059867] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:30.691 [2024-07-11 02:34:21.059940] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:30.691 [2024-07-11 02:34:21.059979] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:30.950 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:30.950 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:30.950 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:30.950 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:30.950 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:30.950 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:30.950 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:30.950 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:30.950 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:30.950 "name": "raid_bdev1", 00:28:30.950 "uuid": "ab7142da-9aa5-478f-bd96-27d8a62f59e9", 00:28:30.950 "strip_size_kb": 0, 00:28:30.950 "state": "online", 00:28:30.950 "raid_level": "raid1", 00:28:30.950 "superblock": false, 00:28:30.950 "num_base_bdevs": 4, 00:28:30.950 "num_base_bdevs_discovered": 3, 00:28:30.950 "num_base_bdevs_operational": 3, 00:28:30.950 "base_bdevs_list": [ 00:28:30.950 { 00:28:30.950 "name": "spare", 00:28:30.950 "uuid": "785b3605-c5cf-5e8e-87e8-9c802ede9c08", 00:28:30.950 "is_configured": true, 00:28:30.950 "data_offset": 0, 00:28:30.950 "data_size": 65536 00:28:30.950 }, 00:28:30.950 { 00:28:30.950 "name": null, 00:28:30.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:30.950 "is_configured": false, 00:28:30.950 "data_offset": 0, 00:28:30.950 "data_size": 65536 00:28:30.950 }, 00:28:30.950 { 00:28:30.950 "name": "BaseBdev3", 00:28:30.950 "uuid": "1239ec31-ac18-5cda-b525-2d3e9dba750a", 00:28:30.950 "is_configured": true, 00:28:30.950 "data_offset": 0, 00:28:30.950 "data_size": 65536 00:28:30.950 }, 00:28:30.950 { 00:28:30.950 "name": "BaseBdev4", 00:28:30.950 "uuid": "453964d8-5bc0-5652-9eda-16de0adbf100", 00:28:30.950 "is_configured": true, 00:28:30.950 "data_offset": 0, 00:28:30.950 "data_size": 65536 00:28:30.950 } 00:28:30.950 ] 00:28:30.950 }' 00:28:30.950 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:30.950 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == 
\r\e\b\u\i\l\d ]] 00:28:30.950 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:31.209 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:31.209 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:28:31.209 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:31.209 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:31.209 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:31.209 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:31.209 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:31.209 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.209 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:31.470 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:31.470 "name": "raid_bdev1", 00:28:31.470 "uuid": "ab7142da-9aa5-478f-bd96-27d8a62f59e9", 00:28:31.470 "strip_size_kb": 0, 00:28:31.470 "state": "online", 00:28:31.470 "raid_level": "raid1", 00:28:31.470 "superblock": false, 00:28:31.470 "num_base_bdevs": 4, 00:28:31.470 "num_base_bdevs_discovered": 3, 00:28:31.470 "num_base_bdevs_operational": 3, 00:28:31.470 "base_bdevs_list": [ 00:28:31.470 { 00:28:31.470 "name": "spare", 00:28:31.470 "uuid": "785b3605-c5cf-5e8e-87e8-9c802ede9c08", 00:28:31.470 "is_configured": true, 00:28:31.470 "data_offset": 0, 00:28:31.470 "data_size": 65536 00:28:31.470 }, 00:28:31.470 { 00:28:31.470 "name": null, 00:28:31.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:31.470 "is_configured": false, 00:28:31.471 "data_offset": 0, 00:28:31.471 "data_size": 65536 00:28:31.471 }, 00:28:31.471 { 00:28:31.471 "name": "BaseBdev3", 00:28:31.471 "uuid": "1239ec31-ac18-5cda-b525-2d3e9dba750a", 00:28:31.471 "is_configured": true, 00:28:31.471 "data_offset": 0, 00:28:31.471 "data_size": 65536 00:28:31.471 }, 00:28:31.471 { 00:28:31.471 "name": "BaseBdev4", 00:28:31.471 "uuid": "453964d8-5bc0-5652-9eda-16de0adbf100", 00:28:31.471 "is_configured": true, 00:28:31.471 "data_offset": 0, 00:28:31.471 "data_size": 65536 00:28:31.471 } 00:28:31.471 ] 00:28:31.471 }' 00:28:31.471 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:31.471 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:31.471 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:31.471 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:31.471 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:31.471 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:31.471 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:31.471 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:31.471 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- 
# local strip_size=0 00:28:31.471 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:31.471 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:31.471 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:31.471 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:31.471 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:31.471 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:31.471 02:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.767 02:34:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:31.767 "name": "raid_bdev1", 00:28:31.767 "uuid": "ab7142da-9aa5-478f-bd96-27d8a62f59e9", 00:28:31.767 "strip_size_kb": 0, 00:28:31.767 "state": "online", 00:28:31.767 "raid_level": "raid1", 00:28:31.767 "superblock": false, 00:28:31.767 "num_base_bdevs": 4, 00:28:31.767 "num_base_bdevs_discovered": 3, 00:28:31.767 "num_base_bdevs_operational": 3, 00:28:31.767 "base_bdevs_list": [ 00:28:31.767 { 00:28:31.767 "name": "spare", 00:28:31.767 "uuid": "785b3605-c5cf-5e8e-87e8-9c802ede9c08", 00:28:31.767 "is_configured": true, 00:28:31.767 "data_offset": 0, 00:28:31.767 "data_size": 65536 00:28:31.767 }, 00:28:31.767 { 00:28:31.767 "name": null, 00:28:31.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:31.767 "is_configured": false, 00:28:31.767 "data_offset": 0, 00:28:31.767 "data_size": 65536 00:28:31.767 }, 00:28:31.767 { 00:28:31.767 "name": "BaseBdev3", 00:28:31.767 "uuid": "1239ec31-ac18-5cda-b525-2d3e9dba750a", 00:28:31.767 "is_configured": true, 00:28:31.767 "data_offset": 0, 00:28:31.767 "data_size": 65536 00:28:31.767 }, 00:28:31.767 { 00:28:31.767 "name": "BaseBdev4", 00:28:31.767 "uuid": "453964d8-5bc0-5652-9eda-16de0adbf100", 00:28:31.767 "is_configured": true, 00:28:31.767 "data_offset": 0, 00:28:31.767 "data_size": 65536 00:28:31.767 } 00:28:31.767 ] 00:28:31.767 }' 00:28:31.767 02:34:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:31.767 02:34:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.359 02:34:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:32.619 [2024-07-11 02:34:22.877165] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:32.619 [2024-07-11 02:34:22.877194] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:32.619 [2024-07-11 02:34:22.877252] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:32.619 [2024-07-11 02:34:22.877326] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:32.619 [2024-07-11 02:34:22.877338] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x139f300 name raid_bdev1, state offline 00:28:32.619 02:34:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:32.619 02:34:22 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:28:32.877 02:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:28:32.877 02:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:28:32.877 02:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:28:32.877 02:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:28:32.877 02:34:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:32.877 02:34:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:28:32.877 02:34:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:32.877 02:34:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:32.877 02:34:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:32.877 02:34:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:28:32.877 02:34:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:32.877 02:34:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:32.877 02:34:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:28:33.137 /dev/nbd0 00:28:33.137 02:34:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:33.137 02:34:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:33.137 02:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:33.137 02:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:28:33.137 02:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:33.137 02:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:33.137 02:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:33.137 02:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:28:33.137 02:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:33.137 02:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:33.137 02:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:33.137 1+0 records in 00:28:33.137 1+0 records out 00:28:33.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264714 s, 15.5 MB/s 00:28:33.137 02:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:33.137 02:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:28:33.137 02:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:33.137 02:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:33.137 02:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:28:33.137 
02:34:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:33.137 02:34:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:33.137 02:34:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:28:33.396 /dev/nbd1 00:28:33.396 02:34:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:33.396 02:34:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:33.396 02:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:28:33.396 02:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:28:33.396 02:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:33.396 02:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:33.396 02:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:28:33.396 02:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:28:33.396 02:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:33.396 02:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:33.396 02:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:33.396 1+0 records in 00:28:33.396 1+0 records out 00:28:33.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307221 s, 13.3 MB/s 00:28:33.396 02:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:33.396 02:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:28:33.396 02:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:33.396 02:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:33.396 02:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:28:33.396 02:34:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:33.396 02:34:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:33.396 02:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:28:33.396 02:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:28:33.396 02:34:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:33.396 02:34:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:33.396 02:34:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:33.396 02:34:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:28:33.396 02:34:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:33.396 02:34:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:33.653 02:34:24 
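
The cmp -i 0 /dev/nbd0 /dev/nbd1 step traced above is the actual correctness check of raid_rebuild_test: with raid1 and no superblock (data_offset 0), a finished rebuild must leave the spare byte-identical to a surviving base bdev, and both are exported over NBD so a plain userspace compare can prove it. A sketch of running the same check by hand against a live app, using only RPCs that appear in this trace (the rpc.py path is relative to an SPDK tree, the socket is the one from the log):

    # Sketch only: export two bdevs over NBD and byte-compare them.
    rpc="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc nbd_start_disk BaseBdev1 /dev/nbd0
    $rpc nbd_start_disk spare /dev/nbd1
    cmp -i 0 /dev/nbd0 /dev/nbd1 && echo "spare matches the mirror"
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1

In a superblock variant the -i byte offset would have to account for the raid's data_offset instead of starting at 0.
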
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:33.653 02:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:33.653 02:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:33.653 02:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:33.653 02:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:33.653 02:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:33.653 02:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:33.653 02:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:33.653 02:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:33.653 02:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:33.911 02:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:33.911 02:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:33.911 02:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:33.911 02:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:33.911 02:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:33.911 02:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:33.911 02:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:33.911 02:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:33.911 02:34:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:28:33.911 02:34:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 2023466 00:28:33.911 02:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 2023466 ']' 00:28:33.911 02:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 2023466 00:28:33.911 02:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:28:33.911 02:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:33.911 02:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2023466 00:28:34.169 02:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:34.169 02:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:34.169 02:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2023466' 00:28:34.169 killing process with pid 2023466 00:28:34.169 02:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # kill 2023466 00:28:34.169 Received shutdown signal, test time was about 60.000000 seconds 00:28:34.169 00:28:34.169 Latency(us) 00:28:34.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:34.169 =================================================================================================================== 00:28:34.169 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:34.169 [2024-07-11 02:34:24.355723] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:28:34.169 02:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # wait 2023466 00:28:34.169 [2024-07-11 02:34:24.403549] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:28:34.428 00:28:34.428 real 0m25.263s 00:28:34.428 user 0m33.833s 00:28:34.428 sys 0m5.567s 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:34.428 ************************************ 00:28:34.428 END TEST raid_rebuild_test 00:28:34.428 ************************************ 00:28:34.428 02:34:24 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:28:34.428 02:34:24 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:28:34.428 02:34:24 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:28:34.428 02:34:24 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:34.428 02:34:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:34.428 ************************************ 00:28:34.428 START TEST raid_rebuild_test_sb 00:28:34.428 ************************************ 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 true false true 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 
'BaseBdev4') 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=2026890 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 2026890 /var/tmp/spdk-raid.sock 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2026890 ']' 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:34.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:34.428 02:34:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:34.428 [2024-07-11 02:34:24.780732] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:28:34.428 [2024-07-11 02:34:24.780816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2026890 ] 00:28:34.428 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:34.428 Zero copy mechanism will not be used. 
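
The raid_rebuild_test_sb run above starts under a fresh bdevperf instance. Reading the traced flags against standard bdevperf usage (worth double-checking in the SPDK tree in use): -r names the RPC socket that the test's rpc.py calls target, -z holds the app idle until it is configured over that socket (hence the "Waiting for process to start up..." message), -t 60 caps the run at 60 seconds, -w randrw -M 50 requests a 50/50 random read/write mix, -o 3M is the 3 MiB I/O size that triggers the zero-copy notice above, -q 2 sets the queue depth, and -L bdev_raid enables the *DEBUG* lines that follow; -T raid_bdev1 and -U are left unglossed here. The fixture the test then assembles over RPC, all of it visible in the following trace, can be sketched as:

    # Hedged sketch of the RPC fixture built in the trace below: four
    # 32 MiB / 512 B-block malloc bdevs, each wrapped in a passthru bdev,
    # then combined into a raid1 with an on-disk superblock (-s).
    sock=/var/tmp/spdk-raid.sock
    rpc="scripts/rpc.py -s $sock"
    for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
        $rpc bdev_malloc_create 32 512 -b "${b}_malloc"
        $rpc bdev_passthru_create -b "${b}_malloc" -p "$b"
    done
    $rpc bdev_raid_create -s -r raid1 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1

The sizes that follow from this are consistent throughout the run: 32 MiB at 512 B per block is 65536 blocks, the superblock reserves 2048 of them (the data_offset 2048 in the later JSON), leaving the "blockcnt 63488" reported at configure time, which is exactly the 63488 records / 32505856 bytes the later dd writes through /dev/nbd0.
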
00:28:34.686 [2024-07-11 02:34:24.917575] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.687 [2024-07-11 02:34:24.970824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.687 [2024-07-11 02:34:25.037486] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:34.687 [2024-07-11 02:34:25.037529] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:35.620 02:34:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:35.620 02:34:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:28:35.620 02:34:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:35.620 02:34:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:35.620 BaseBdev1_malloc 00:28:35.620 02:34:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:35.878 [2024-07-11 02:34:26.192616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:35.878 [2024-07-11 02:34:26.192664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:35.878 [2024-07-11 02:34:26.192689] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc18ee0 00:28:35.878 [2024-07-11 02:34:26.192701] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:35.878 [2024-07-11 02:34:26.194355] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:35.878 [2024-07-11 02:34:26.194386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:35.878 BaseBdev1 00:28:35.878 02:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:35.878 02:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:36.136 BaseBdev2_malloc 00:28:36.136 02:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:36.395 [2024-07-11 02:34:26.703903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:36.395 [2024-07-11 02:34:26.703946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:36.395 [2024-07-11 02:34:26.703967] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc1a870 00:28:36.395 [2024-07-11 02:34:26.703979] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:36.395 [2024-07-11 02:34:26.705378] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:36.395 [2024-07-11 02:34:26.705407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:36.395 BaseBdev2 00:28:36.395 02:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:36.395 02:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:36.653 BaseBdev3_malloc 00:28:36.653 02:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:28:36.912 [2024-07-11 02:34:27.206943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:28:36.912 [2024-07-11 02:34:27.206989] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:36.912 [2024-07-11 02:34:27.207012] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc11b20 00:28:36.912 [2024-07-11 02:34:27.207025] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:36.912 [2024-07-11 02:34:27.208545] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:36.912 [2024-07-11 02:34:27.208574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:36.912 BaseBdev3 00:28:36.912 02:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:36.912 02:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:28:37.170 BaseBdev4_malloc 00:28:37.170 02:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:28:37.429 [2024-07-11 02:34:27.689902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:28:37.429 [2024-07-11 02:34:27.689946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:37.429 [2024-07-11 02:34:27.689968] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc158d0 00:28:37.429 [2024-07-11 02:34:27.689981] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:37.429 [2024-07-11 02:34:27.691516] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:37.429 [2024-07-11 02:34:27.691544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:37.429 BaseBdev4 00:28:37.429 02:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:28:37.688 spare_malloc 00:28:37.688 02:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:37.947 spare_delay 00:28:37.947 02:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:38.205 [2024-07-11 02:34:28.441562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:38.205 [2024-07-11 02:34:28.441606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:38.205 [2024-07-11 02:34:28.441629] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa678a0 00:28:38.205 [2024-07-11 02:34:28.441641] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:38.205 [2024-07-11 02:34:28.443086] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:38.205 [2024-07-11 02:34:28.443114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:38.205 spare 00:28:38.205 02:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:28:38.464 [2024-07-11 02:34:28.698264] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:38.464 [2024-07-11 02:34:28.699445] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:38.464 [2024-07-11 02:34:28.699496] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:38.464 [2024-07-11 02:34:28.699542] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:38.464 [2024-07-11 02:34:28.699725] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xa68300 00:28:38.464 [2024-07-11 02:34:28.699736] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:38.464 [2024-07-11 02:34:28.699929] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc082d0 00:28:38.464 [2024-07-11 02:34:28.700075] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xa68300 00:28:38.464 [2024-07-11 02:34:28.700085] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xa68300 00:28:38.464 [2024-07-11 02:34:28.700178] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:38.464 02:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:38.465 02:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:38.465 02:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:38.465 02:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:38.465 02:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:38.465 02:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:38.465 02:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:38.465 02:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:38.465 02:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:38.465 02:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:38.465 02:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.465 02:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:38.724 02:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:38.724 "name": "raid_bdev1", 00:28:38.724 "uuid": 
"8b7753d5-03aa-4871-8b97-d11df120db6f", 00:28:38.724 "strip_size_kb": 0, 00:28:38.724 "state": "online", 00:28:38.724 "raid_level": "raid1", 00:28:38.724 "superblock": true, 00:28:38.724 "num_base_bdevs": 4, 00:28:38.724 "num_base_bdevs_discovered": 4, 00:28:38.724 "num_base_bdevs_operational": 4, 00:28:38.724 "base_bdevs_list": [ 00:28:38.724 { 00:28:38.724 "name": "BaseBdev1", 00:28:38.724 "uuid": "4f5b9999-1889-5f0d-9179-8ff4942b4943", 00:28:38.724 "is_configured": true, 00:28:38.724 "data_offset": 2048, 00:28:38.724 "data_size": 63488 00:28:38.724 }, 00:28:38.724 { 00:28:38.724 "name": "BaseBdev2", 00:28:38.724 "uuid": "c18e2872-e9f1-5255-b321-770b5cf08b21", 00:28:38.724 "is_configured": true, 00:28:38.724 "data_offset": 2048, 00:28:38.724 "data_size": 63488 00:28:38.724 }, 00:28:38.724 { 00:28:38.724 "name": "BaseBdev3", 00:28:38.724 "uuid": "364c3dee-d45b-58f7-962d-d89db6ba8157", 00:28:38.724 "is_configured": true, 00:28:38.724 "data_offset": 2048, 00:28:38.724 "data_size": 63488 00:28:38.724 }, 00:28:38.724 { 00:28:38.724 "name": "BaseBdev4", 00:28:38.724 "uuid": "74e42a99-0296-5ce6-a67f-bb3ba8645774", 00:28:38.724 "is_configured": true, 00:28:38.724 "data_offset": 2048, 00:28:38.724 "data_size": 63488 00:28:38.724 } 00:28:38.724 ] 00:28:38.724 }' 00:28:38.724 02:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:38.724 02:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:39.291 02:34:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:39.291 02:34:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:28:39.549 [2024-07-11 02:34:29.733274] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:39.549 02:34:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:28:39.549 02:34:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:39.549 02:34:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:39.808 02:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:28:39.808 02:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:28:39.808 02:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:28:39.808 02:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:28:39.808 02:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:28:39.808 02:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:39.808 02:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:28:39.808 02:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:39.808 02:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:39.808 02:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:39.808 02:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:28:39.808 02:34:30 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:39.808 02:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:39.808 02:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:40.065 [2024-07-11 02:34:30.234371] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc079e0 00:28:40.065 /dev/nbd0 00:28:40.065 02:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:40.065 02:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:40.065 02:34:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:40.065 02:34:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:28:40.065 02:34:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:40.065 02:34:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:40.065 02:34:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:40.065 02:34:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:28:40.065 02:34:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:40.065 02:34:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:40.065 02:34:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:40.065 1+0 records in 00:28:40.065 1+0 records out 00:28:40.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234456 s, 17.5 MB/s 00:28:40.065 02:34:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:40.065 02:34:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:28:40.065 02:34:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:40.065 02:34:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:40.065 02:34:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:28:40.065 02:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:40.065 02:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:40.065 02:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:28:40.065 02:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:28:40.065 02:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:28:48.187 63488+0 records in 00:28:48.187 63488+0 records out 00:28:48.187 32505856 bytes (33 MB, 31 MiB) copied, 7.79765 s, 4.2 MB/s 00:28:48.187 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:48.187 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:48.187 02:34:38 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:48.187 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:48.187 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:28:48.187 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:48.187 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:48.187 [2024-07-11 02:34:38.363432] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:48.187 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:48.187 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:48.187 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:48.187 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:48.187 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:48.187 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:48.187 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:48.187 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:48.187 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:48.187 [2024-07-11 02:34:38.604190] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:48.448 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:48.448 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:48.448 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:48.448 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:48.448 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:48.448 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:48.448 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:48.448 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:48.448 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:48.448 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:48.448 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:48.448 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:48.707 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:48.707 "name": "raid_bdev1", 00:28:48.707 "uuid": "8b7753d5-03aa-4871-8b97-d11df120db6f", 00:28:48.707 "strip_size_kb": 0, 00:28:48.707 "state": "online", 00:28:48.707 "raid_level": "raid1", 00:28:48.707 "superblock": true, 00:28:48.707 
"num_base_bdevs": 4, 00:28:48.707 "num_base_bdevs_discovered": 3, 00:28:48.707 "num_base_bdevs_operational": 3, 00:28:48.707 "base_bdevs_list": [ 00:28:48.707 { 00:28:48.707 "name": null, 00:28:48.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:48.707 "is_configured": false, 00:28:48.707 "data_offset": 2048, 00:28:48.707 "data_size": 63488 00:28:48.707 }, 00:28:48.707 { 00:28:48.707 "name": "BaseBdev2", 00:28:48.707 "uuid": "c18e2872-e9f1-5255-b321-770b5cf08b21", 00:28:48.707 "is_configured": true, 00:28:48.707 "data_offset": 2048, 00:28:48.707 "data_size": 63488 00:28:48.707 }, 00:28:48.707 { 00:28:48.707 "name": "BaseBdev3", 00:28:48.707 "uuid": "364c3dee-d45b-58f7-962d-d89db6ba8157", 00:28:48.707 "is_configured": true, 00:28:48.707 "data_offset": 2048, 00:28:48.707 "data_size": 63488 00:28:48.707 }, 00:28:48.707 { 00:28:48.707 "name": "BaseBdev4", 00:28:48.707 "uuid": "74e42a99-0296-5ce6-a67f-bb3ba8645774", 00:28:48.707 "is_configured": true, 00:28:48.707 "data_offset": 2048, 00:28:48.707 "data_size": 63488 00:28:48.707 } 00:28:48.707 ] 00:28:48.707 }' 00:28:48.707 02:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:48.707 02:34:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:49.275 02:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:49.276 [2024-07-11 02:34:39.683166] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:49.276 [2024-07-11 02:34:39.687053] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa68930 00:28:49.276 [2024-07-11 02:34:39.689326] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:49.534 02:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:28:50.472 02:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:50.472 02:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:50.472 02:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:50.472 02:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:50.472 02:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:50.472 02:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:50.472 02:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:50.730 02:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:50.730 "name": "raid_bdev1", 00:28:50.730 "uuid": "8b7753d5-03aa-4871-8b97-d11df120db6f", 00:28:50.730 "strip_size_kb": 0, 00:28:50.730 "state": "online", 00:28:50.730 "raid_level": "raid1", 00:28:50.730 "superblock": true, 00:28:50.730 "num_base_bdevs": 4, 00:28:50.730 "num_base_bdevs_discovered": 4, 00:28:50.730 "num_base_bdevs_operational": 4, 00:28:50.730 "process": { 00:28:50.730 "type": "rebuild", 00:28:50.730 "target": "spare", 00:28:50.730 "progress": { 00:28:50.730 "blocks": 24576, 00:28:50.730 "percent": 38 00:28:50.730 } 00:28:50.730 }, 00:28:50.730 "base_bdevs_list": [ 
00:28:50.730 { 00:28:50.730 "name": "spare", 00:28:50.730 "uuid": "67ebe090-b1a0-5142-9741-019b3eef9262", 00:28:50.730 "is_configured": true, 00:28:50.730 "data_offset": 2048, 00:28:50.730 "data_size": 63488 00:28:50.730 }, 00:28:50.730 { 00:28:50.730 "name": "BaseBdev2", 00:28:50.730 "uuid": "c18e2872-e9f1-5255-b321-770b5cf08b21", 00:28:50.730 "is_configured": true, 00:28:50.730 "data_offset": 2048, 00:28:50.730 "data_size": 63488 00:28:50.730 }, 00:28:50.730 { 00:28:50.730 "name": "BaseBdev3", 00:28:50.730 "uuid": "364c3dee-d45b-58f7-962d-d89db6ba8157", 00:28:50.730 "is_configured": true, 00:28:50.730 "data_offset": 2048, 00:28:50.730 "data_size": 63488 00:28:50.730 }, 00:28:50.730 { 00:28:50.730 "name": "BaseBdev4", 00:28:50.730 "uuid": "74e42a99-0296-5ce6-a67f-bb3ba8645774", 00:28:50.730 "is_configured": true, 00:28:50.730 "data_offset": 2048, 00:28:50.730 "data_size": 63488 00:28:50.730 } 00:28:50.730 ] 00:28:50.730 }' 00:28:50.730 02:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:50.730 02:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:50.730 02:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:50.730 02:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:50.730 02:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:50.989 [2024-07-11 02:34:41.286356] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:50.989 [2024-07-11 02:34:41.301660] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:50.989 [2024-07-11 02:34:41.301705] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:50.989 [2024-07-11 02:34:41.301728] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:50.989 [2024-07-11 02:34:41.301737] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:50.989 02:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:50.989 02:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:50.989 02:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:50.989 02:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:50.989 02:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:50.989 02:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:50.989 02:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:50.989 02:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:50.989 02:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:50.989 02:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:50.989 02:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:50.989 02:34:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:51.248 02:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:51.248 "name": "raid_bdev1", 00:28:51.248 "uuid": "8b7753d5-03aa-4871-8b97-d11df120db6f", 00:28:51.248 "strip_size_kb": 0, 00:28:51.248 "state": "online", 00:28:51.248 "raid_level": "raid1", 00:28:51.248 "superblock": true, 00:28:51.248 "num_base_bdevs": 4, 00:28:51.248 "num_base_bdevs_discovered": 3, 00:28:51.248 "num_base_bdevs_operational": 3, 00:28:51.248 "base_bdevs_list": [ 00:28:51.248 { 00:28:51.248 "name": null, 00:28:51.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:51.248 "is_configured": false, 00:28:51.248 "data_offset": 2048, 00:28:51.248 "data_size": 63488 00:28:51.248 }, 00:28:51.248 { 00:28:51.248 "name": "BaseBdev2", 00:28:51.248 "uuid": "c18e2872-e9f1-5255-b321-770b5cf08b21", 00:28:51.248 "is_configured": true, 00:28:51.248 "data_offset": 2048, 00:28:51.248 "data_size": 63488 00:28:51.248 }, 00:28:51.248 { 00:28:51.248 "name": "BaseBdev3", 00:28:51.248 "uuid": "364c3dee-d45b-58f7-962d-d89db6ba8157", 00:28:51.249 "is_configured": true, 00:28:51.249 "data_offset": 2048, 00:28:51.249 "data_size": 63488 00:28:51.249 }, 00:28:51.249 { 00:28:51.249 "name": "BaseBdev4", 00:28:51.249 "uuid": "74e42a99-0296-5ce6-a67f-bb3ba8645774", 00:28:51.249 "is_configured": true, 00:28:51.249 "data_offset": 2048, 00:28:51.249 "data_size": 63488 00:28:51.249 } 00:28:51.249 ] 00:28:51.249 }' 00:28:51.249 02:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:51.249 02:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:51.817 02:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:51.817 02:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:51.817 02:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:51.817 02:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:51.817 02:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:51.817 02:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:51.817 02:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:52.076 02:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:52.076 "name": "raid_bdev1", 00:28:52.076 "uuid": "8b7753d5-03aa-4871-8b97-d11df120db6f", 00:28:52.076 "strip_size_kb": 0, 00:28:52.076 "state": "online", 00:28:52.076 "raid_level": "raid1", 00:28:52.076 "superblock": true, 00:28:52.076 "num_base_bdevs": 4, 00:28:52.076 "num_base_bdevs_discovered": 3, 00:28:52.076 "num_base_bdevs_operational": 3, 00:28:52.076 "base_bdevs_list": [ 00:28:52.076 { 00:28:52.076 "name": null, 00:28:52.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:52.076 "is_configured": false, 00:28:52.076 "data_offset": 2048, 00:28:52.076 "data_size": 63488 00:28:52.076 }, 00:28:52.076 { 00:28:52.076 "name": "BaseBdev2", 00:28:52.076 "uuid": "c18e2872-e9f1-5255-b321-770b5cf08b21", 00:28:52.076 "is_configured": true, 00:28:52.076 "data_offset": 2048, 00:28:52.076 "data_size": 63488 00:28:52.076 }, 
00:28:52.076 { 00:28:52.076 "name": "BaseBdev3", 00:28:52.076 "uuid": "364c3dee-d45b-58f7-962d-d89db6ba8157", 00:28:52.076 "is_configured": true, 00:28:52.076 "data_offset": 2048, 00:28:52.076 "data_size": 63488 00:28:52.076 }, 00:28:52.076 { 00:28:52.076 "name": "BaseBdev4", 00:28:52.076 "uuid": "74e42a99-0296-5ce6-a67f-bb3ba8645774", 00:28:52.076 "is_configured": true, 00:28:52.076 "data_offset": 2048, 00:28:52.076 "data_size": 63488 00:28:52.076 } 00:28:52.076 ] 00:28:52.076 }' 00:28:52.076 02:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:52.076 02:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:52.076 02:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:52.076 02:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:52.076 02:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:52.335 [2024-07-11 02:34:42.637635] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:52.335 [2024-07-11 02:34:42.641582] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa65b90 00:28:52.335 [2024-07-11 02:34:42.643066] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:52.335 02:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:53.271 02:34:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:53.271 02:34:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:53.271 02:34:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:53.271 02:34:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:53.271 02:34:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:53.271 02:34:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:53.271 02:34:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:53.529 02:34:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:53.529 "name": "raid_bdev1", 00:28:53.529 "uuid": "8b7753d5-03aa-4871-8b97-d11df120db6f", 00:28:53.529 "strip_size_kb": 0, 00:28:53.529 "state": "online", 00:28:53.529 "raid_level": "raid1", 00:28:53.529 "superblock": true, 00:28:53.529 "num_base_bdevs": 4, 00:28:53.529 "num_base_bdevs_discovered": 4, 00:28:53.529 "num_base_bdevs_operational": 4, 00:28:53.529 "process": { 00:28:53.529 "type": "rebuild", 00:28:53.529 "target": "spare", 00:28:53.529 "progress": { 00:28:53.529 "blocks": 24576, 00:28:53.529 "percent": 38 00:28:53.529 } 00:28:53.529 }, 00:28:53.529 "base_bdevs_list": [ 00:28:53.529 { 00:28:53.529 "name": "spare", 00:28:53.529 "uuid": "67ebe090-b1a0-5142-9741-019b3eef9262", 00:28:53.529 "is_configured": true, 00:28:53.529 "data_offset": 2048, 00:28:53.529 "data_size": 63488 00:28:53.529 }, 00:28:53.529 { 00:28:53.529 "name": "BaseBdev2", 00:28:53.529 "uuid": "c18e2872-e9f1-5255-b321-770b5cf08b21", 00:28:53.529 
"is_configured": true, 00:28:53.529 "data_offset": 2048, 00:28:53.529 "data_size": 63488 00:28:53.529 }, 00:28:53.529 { 00:28:53.529 "name": "BaseBdev3", 00:28:53.529 "uuid": "364c3dee-d45b-58f7-962d-d89db6ba8157", 00:28:53.529 "is_configured": true, 00:28:53.529 "data_offset": 2048, 00:28:53.529 "data_size": 63488 00:28:53.529 }, 00:28:53.529 { 00:28:53.529 "name": "BaseBdev4", 00:28:53.529 "uuid": "74e42a99-0296-5ce6-a67f-bb3ba8645774", 00:28:53.529 "is_configured": true, 00:28:53.529 "data_offset": 2048, 00:28:53.529 "data_size": 63488 00:28:53.529 } 00:28:53.529 ] 00:28:53.529 }' 00:28:53.529 02:34:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:53.787 02:34:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:53.787 02:34:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:53.787 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:53.787 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:28:53.787 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:28:53.787 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:28:53.787 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:28:53.787 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:28:53.787 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:28:53.787 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:28:54.045 [2024-07-11 02:34:44.240828] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:54.045 [2024-07-11 02:34:44.355776] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0xa65b90 00:28:54.045 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:28:54.045 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:28:54.045 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:54.045 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:54.045 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:54.045 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:54.045 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:54.045 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:54.045 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:54.304 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:54.304 "name": "raid_bdev1", 00:28:54.304 "uuid": "8b7753d5-03aa-4871-8b97-d11df120db6f", 00:28:54.304 "strip_size_kb": 0, 00:28:54.304 "state": "online", 00:28:54.304 "raid_level": "raid1", 
00:28:54.304 "superblock": true, 00:28:54.304 "num_base_bdevs": 4, 00:28:54.304 "num_base_bdevs_discovered": 3, 00:28:54.304 "num_base_bdevs_operational": 3, 00:28:54.304 "process": { 00:28:54.304 "type": "rebuild", 00:28:54.304 "target": "spare", 00:28:54.304 "progress": { 00:28:54.304 "blocks": 36864, 00:28:54.304 "percent": 58 00:28:54.304 } 00:28:54.304 }, 00:28:54.304 "base_bdevs_list": [ 00:28:54.304 { 00:28:54.304 "name": "spare", 00:28:54.304 "uuid": "67ebe090-b1a0-5142-9741-019b3eef9262", 00:28:54.304 "is_configured": true, 00:28:54.304 "data_offset": 2048, 00:28:54.304 "data_size": 63488 00:28:54.304 }, 00:28:54.304 { 00:28:54.304 "name": null, 00:28:54.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:54.304 "is_configured": false, 00:28:54.304 "data_offset": 2048, 00:28:54.304 "data_size": 63488 00:28:54.304 }, 00:28:54.304 { 00:28:54.304 "name": "BaseBdev3", 00:28:54.304 "uuid": "364c3dee-d45b-58f7-962d-d89db6ba8157", 00:28:54.304 "is_configured": true, 00:28:54.304 "data_offset": 2048, 00:28:54.304 "data_size": 63488 00:28:54.304 }, 00:28:54.304 { 00:28:54.304 "name": "BaseBdev4", 00:28:54.304 "uuid": "74e42a99-0296-5ce6-a67f-bb3ba8645774", 00:28:54.304 "is_configured": true, 00:28:54.304 "data_offset": 2048, 00:28:54.304 "data_size": 63488 00:28:54.304 } 00:28:54.304 ] 00:28:54.304 }' 00:28:54.304 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:54.304 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:54.304 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:54.562 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:54.562 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=937 00:28:54.562 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:54.562 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:54.562 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:54.562 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:54.562 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:54.562 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:54.562 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:54.562 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:54.562 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:54.562 "name": "raid_bdev1", 00:28:54.562 "uuid": "8b7753d5-03aa-4871-8b97-d11df120db6f", 00:28:54.562 "strip_size_kb": 0, 00:28:54.562 "state": "online", 00:28:54.562 "raid_level": "raid1", 00:28:54.562 "superblock": true, 00:28:54.562 "num_base_bdevs": 4, 00:28:54.562 "num_base_bdevs_discovered": 3, 00:28:54.562 "num_base_bdevs_operational": 3, 00:28:54.562 "process": { 00:28:54.562 "type": "rebuild", 00:28:54.562 "target": "spare", 00:28:54.562 "progress": { 00:28:54.562 "blocks": 43008, 00:28:54.562 "percent": 67 00:28:54.562 } 00:28:54.562 }, 00:28:54.562 
"base_bdevs_list": [ 00:28:54.562 { 00:28:54.562 "name": "spare", 00:28:54.562 "uuid": "67ebe090-b1a0-5142-9741-019b3eef9262", 00:28:54.562 "is_configured": true, 00:28:54.562 "data_offset": 2048, 00:28:54.562 "data_size": 63488 00:28:54.562 }, 00:28:54.562 { 00:28:54.562 "name": null, 00:28:54.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:54.562 "is_configured": false, 00:28:54.562 "data_offset": 2048, 00:28:54.562 "data_size": 63488 00:28:54.562 }, 00:28:54.562 { 00:28:54.562 "name": "BaseBdev3", 00:28:54.562 "uuid": "364c3dee-d45b-58f7-962d-d89db6ba8157", 00:28:54.562 "is_configured": true, 00:28:54.562 "data_offset": 2048, 00:28:54.562 "data_size": 63488 00:28:54.562 }, 00:28:54.562 { 00:28:54.562 "name": "BaseBdev4", 00:28:54.562 "uuid": "74e42a99-0296-5ce6-a67f-bb3ba8645774", 00:28:54.562 "is_configured": true, 00:28:54.562 "data_offset": 2048, 00:28:54.562 "data_size": 63488 00:28:54.562 } 00:28:54.562 ] 00:28:54.562 }' 00:28:54.562 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:54.562 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:54.562 02:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:54.821 02:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:54.821 02:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:55.757 [2024-07-11 02:34:45.867361] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:55.757 [2024-07-11 02:34:45.867418] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:55.757 [2024-07-11 02:34:45.867511] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:55.757 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:55.757 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:55.757 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:55.757 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:55.757 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:55.757 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:55.757 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:55.757 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:56.016 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:56.016 "name": "raid_bdev1", 00:28:56.016 "uuid": "8b7753d5-03aa-4871-8b97-d11df120db6f", 00:28:56.016 "strip_size_kb": 0, 00:28:56.016 "state": "online", 00:28:56.016 "raid_level": "raid1", 00:28:56.016 "superblock": true, 00:28:56.016 "num_base_bdevs": 4, 00:28:56.016 "num_base_bdevs_discovered": 3, 00:28:56.016 "num_base_bdevs_operational": 3, 00:28:56.016 "base_bdevs_list": [ 00:28:56.016 { 00:28:56.016 "name": "spare", 00:28:56.016 "uuid": "67ebe090-b1a0-5142-9741-019b3eef9262", 00:28:56.016 "is_configured": true, 00:28:56.016 "data_offset": 2048, 
00:28:56.016 "data_size": 63488 00:28:56.016 }, 00:28:56.016 { 00:28:56.016 "name": null, 00:28:56.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:56.016 "is_configured": false, 00:28:56.016 "data_offset": 2048, 00:28:56.016 "data_size": 63488 00:28:56.016 }, 00:28:56.016 { 00:28:56.016 "name": "BaseBdev3", 00:28:56.016 "uuid": "364c3dee-d45b-58f7-962d-d89db6ba8157", 00:28:56.016 "is_configured": true, 00:28:56.016 "data_offset": 2048, 00:28:56.016 "data_size": 63488 00:28:56.016 }, 00:28:56.016 { 00:28:56.016 "name": "BaseBdev4", 00:28:56.016 "uuid": "74e42a99-0296-5ce6-a67f-bb3ba8645774", 00:28:56.016 "is_configured": true, 00:28:56.016 "data_offset": 2048, 00:28:56.016 "data_size": 63488 00:28:56.016 } 00:28:56.016 ] 00:28:56.016 }' 00:28:56.016 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:56.016 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:56.016 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:56.016 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:56.016 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:28:56.016 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:56.016 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:56.016 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:56.016 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:56.016 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:56.016 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:56.016 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:56.275 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:56.275 "name": "raid_bdev1", 00:28:56.275 "uuid": "8b7753d5-03aa-4871-8b97-d11df120db6f", 00:28:56.275 "strip_size_kb": 0, 00:28:56.275 "state": "online", 00:28:56.275 "raid_level": "raid1", 00:28:56.275 "superblock": true, 00:28:56.275 "num_base_bdevs": 4, 00:28:56.275 "num_base_bdevs_discovered": 3, 00:28:56.275 "num_base_bdevs_operational": 3, 00:28:56.275 "base_bdevs_list": [ 00:28:56.275 { 00:28:56.275 "name": "spare", 00:28:56.275 "uuid": "67ebe090-b1a0-5142-9741-019b3eef9262", 00:28:56.275 "is_configured": true, 00:28:56.275 "data_offset": 2048, 00:28:56.275 "data_size": 63488 00:28:56.275 }, 00:28:56.275 { 00:28:56.275 "name": null, 00:28:56.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:56.275 "is_configured": false, 00:28:56.275 "data_offset": 2048, 00:28:56.275 "data_size": 63488 00:28:56.275 }, 00:28:56.275 { 00:28:56.275 "name": "BaseBdev3", 00:28:56.275 "uuid": "364c3dee-d45b-58f7-962d-d89db6ba8157", 00:28:56.275 "is_configured": true, 00:28:56.275 "data_offset": 2048, 00:28:56.275 "data_size": 63488 00:28:56.275 }, 00:28:56.275 { 00:28:56.275 "name": "BaseBdev4", 00:28:56.275 "uuid": "74e42a99-0296-5ce6-a67f-bb3ba8645774", 00:28:56.275 "is_configured": true, 00:28:56.275 "data_offset": 2048, 00:28:56.275 "data_size": 63488 
00:28:56.275 } 00:28:56.275 ] 00:28:56.275 }' 00:28:56.275 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:56.275 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:56.275 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:56.535 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:56.535 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:56.535 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:56.535 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:56.535 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:56.535 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:56.535 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:56.535 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:56.535 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:56.535 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:56.535 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:56.535 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:56.535 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:56.795 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:56.795 "name": "raid_bdev1", 00:28:56.795 "uuid": "8b7753d5-03aa-4871-8b97-d11df120db6f", 00:28:56.795 "strip_size_kb": 0, 00:28:56.795 "state": "online", 00:28:56.795 "raid_level": "raid1", 00:28:56.795 "superblock": true, 00:28:56.795 "num_base_bdevs": 4, 00:28:56.795 "num_base_bdevs_discovered": 3, 00:28:56.795 "num_base_bdevs_operational": 3, 00:28:56.795 "base_bdevs_list": [ 00:28:56.795 { 00:28:56.795 "name": "spare", 00:28:56.795 "uuid": "67ebe090-b1a0-5142-9741-019b3eef9262", 00:28:56.795 "is_configured": true, 00:28:56.795 "data_offset": 2048, 00:28:56.795 "data_size": 63488 00:28:56.795 }, 00:28:56.795 { 00:28:56.795 "name": null, 00:28:56.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:56.795 "is_configured": false, 00:28:56.795 "data_offset": 2048, 00:28:56.795 "data_size": 63488 00:28:56.795 }, 00:28:56.795 { 00:28:56.795 "name": "BaseBdev3", 00:28:56.795 "uuid": "364c3dee-d45b-58f7-962d-d89db6ba8157", 00:28:56.795 "is_configured": true, 00:28:56.795 "data_offset": 2048, 00:28:56.795 "data_size": 63488 00:28:56.795 }, 00:28:56.795 { 00:28:56.795 "name": "BaseBdev4", 00:28:56.795 "uuid": "74e42a99-0296-5ce6-a67f-bb3ba8645774", 00:28:56.795 "is_configured": true, 00:28:56.795 "data_offset": 2048, 00:28:56.795 "data_size": 63488 00:28:56.795 } 00:28:56.795 ] 00:28:56.795 }' 00:28:56.795 02:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:56.795 02:34:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:57.362 02:34:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:57.620 [2024-07-11 02:34:47.792975] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:57.620 [2024-07-11 02:34:47.793000] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:57.620 [2024-07-11 02:34:47.793053] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:57.620 [2024-07-11 02:34:47.793124] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:57.620 [2024-07-11 02:34:47.793136] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa68300 name raid_bdev1, state offline 00:28:57.620 02:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:57.620 02:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:28:57.880 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:28:57.880 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:28:57.880 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:28:57.880 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:28:57.880 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:57.880 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:28:57.880 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:57.880 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:57.880 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:57.880 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:28:57.880 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:57.880 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:57.880 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:28:57.880 /dev/nbd0 00:28:58.139 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:58.139 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:58.139 02:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:58.139 02:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:28:58.139 02:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:58.139 02:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:58.139 02:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:58.139 02:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:28:58.139 02:34:48 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:58.139 02:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:58.139 02:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:58.139 1+0 records in 00:28:58.139 1+0 records out 00:28:58.139 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263067 s, 15.6 MB/s 00:28:58.139 02:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:58.139 02:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:28:58.139 02:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:58.140 02:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:58.140 02:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:28:58.140 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:58.140 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:58.140 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:28:58.424 /dev/nbd1 00:28:58.424 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:58.424 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:58.424 02:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:28:58.424 02:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:28:58.424 02:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:58.424 02:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:58.424 02:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:28:58.425 02:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:28:58.425 02:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:58.425 02:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:58.425 02:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:58.425 1+0 records in 00:28:58.425 1+0 records out 00:28:58.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333569 s, 12.3 MB/s 00:28:58.425 02:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:58.425 02:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:28:58.425 02:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:58.425 02:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:58.425 02:34:48 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:28:58.425 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:58.425 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:58.425 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:28:58.425 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:28:58.425 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:58.425 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:58.425 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:58.425 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:28:58.425 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:58.425 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:58.728 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:58.728 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:58.728 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:58.728 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:58.728 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:58.728 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:58.728 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:58.728 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:58.728 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:58.728 02:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:59.305 02:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:59.305 02:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:59.305 02:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:59.305 02:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:59.305 02:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:59.305 02:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:59.305 02:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:59.305 02:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:59.305 02:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:28:59.305 02:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:59.564 02:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:59.823 [2024-07-11 02:34:50.009881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:59.823 [2024-07-11 02:34:50.009979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:59.823 [2024-07-11 02:34:50.010033] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc07d30 00:28:59.823 [2024-07-11 02:34:50.010058] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:59.823 [2024-07-11 02:34:50.012013] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:59.823 [2024-07-11 02:34:50.012051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:59.823 [2024-07-11 02:34:50.012165] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:59.823 [2024-07-11 02:34:50.012208] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:59.823 [2024-07-11 02:34:50.012314] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:59.823 [2024-07-11 02:34:50.012386] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:59.823 spare 00:28:59.823 02:34:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:59.823 02:34:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:59.824 02:34:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:59.824 02:34:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:59.824 02:34:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:59.824 02:34:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:59.824 02:34:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:59.824 02:34:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:59.824 02:34:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:59.824 02:34:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:59.824 02:34:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:59.824 02:34:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:59.824 [2024-07-11 02:34:50.112708] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xc07080 00:28:59.824 [2024-07-11 02:34:50.112728] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:59.824 [2024-07-11 02:34:50.112943] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc07b00 00:28:59.824 [2024-07-11 02:34:50.113108] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc07080 00:28:59.824 [2024-07-11 02:34:50.113119] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xc07080 00:28:59.824 [2024-07-11 02:34:50.113229] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
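The bdev_raid.sh@182-@190 trace lines recurring throughout this test are its background-process check: pull every raid bdev over RPC, filter out raid_bdev1 with jq, then compare the reported process type and target against what the test expects at that moment. A minimal sketch of that helper, reconstructed from the xtrace in this log (the rpc_py definition is assumed from the commands shown above; this is not the verbatim bdev_raid.sh):

verify_raid_bdev_process() {
	local raid_bdev_name=$1 process_type=$2 target=$3
	local rpc_py="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
	local raid_bdev_info

	# Fetch all raid bdevs and keep only the one under test.
	raid_bdev_info=$($rpc_py bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$raid_bdev_name\")")

	# Both fields fall back to "none" when no rebuild process is running.
	[[ $(jq -r '.process.type // "none"' <<< "$raid_bdev_info") == "$process_type" ]] &&
		[[ $(jq -r '.process.target // "none"' <<< "$raid_bdev_info") == "$target" ]]
}

Called as verify_raid_bdev_process raid_bdev1 rebuild spare while a rebuild is in flight, and as verify_raid_bdev_process raid_bdev1 none none once it has drained; those are exactly the two states this trace keeps cycling between.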
00:29:00.083 02:34:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:00.083 "name": "raid_bdev1", 00:29:00.083 "uuid": "8b7753d5-03aa-4871-8b97-d11df120db6f", 00:29:00.083 "strip_size_kb": 0, 00:29:00.083 "state": "online", 00:29:00.083 "raid_level": "raid1", 00:29:00.083 "superblock": true, 00:29:00.083 "num_base_bdevs": 4, 00:29:00.083 "num_base_bdevs_discovered": 3, 00:29:00.083 "num_base_bdevs_operational": 3, 00:29:00.083 "base_bdevs_list": [ 00:29:00.083 { 00:29:00.083 "name": "spare", 00:29:00.083 "uuid": "67ebe090-b1a0-5142-9741-019b3eef9262", 00:29:00.083 "is_configured": true, 00:29:00.083 "data_offset": 2048, 00:29:00.083 "data_size": 63488 00:29:00.083 }, 00:29:00.083 { 00:29:00.083 "name": null, 00:29:00.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:00.083 "is_configured": false, 00:29:00.083 "data_offset": 2048, 00:29:00.083 "data_size": 63488 00:29:00.083 }, 00:29:00.083 { 00:29:00.083 "name": "BaseBdev3", 00:29:00.083 "uuid": "364c3dee-d45b-58f7-962d-d89db6ba8157", 00:29:00.083 "is_configured": true, 00:29:00.083 "data_offset": 2048, 00:29:00.083 "data_size": 63488 00:29:00.083 }, 00:29:00.083 { 00:29:00.083 "name": "BaseBdev4", 00:29:00.083 "uuid": "74e42a99-0296-5ce6-a67f-bb3ba8645774", 00:29:00.083 "is_configured": true, 00:29:00.083 "data_offset": 2048, 00:29:00.083 "data_size": 63488 00:29:00.083 } 00:29:00.083 ] 00:29:00.083 }' 00:29:00.083 02:34:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:00.083 02:34:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:00.652 02:34:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:00.652 02:34:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:00.652 02:34:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:00.652 02:34:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:00.652 02:34:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:00.652 02:34:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:00.652 02:34:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:00.652 02:34:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:00.652 "name": "raid_bdev1", 00:29:00.652 "uuid": "8b7753d5-03aa-4871-8b97-d11df120db6f", 00:29:00.652 "strip_size_kb": 0, 00:29:00.652 "state": "online", 00:29:00.652 "raid_level": "raid1", 00:29:00.652 "superblock": true, 00:29:00.652 "num_base_bdevs": 4, 00:29:00.652 "num_base_bdevs_discovered": 3, 00:29:00.652 "num_base_bdevs_operational": 3, 00:29:00.652 "base_bdevs_list": [ 00:29:00.652 { 00:29:00.652 "name": "spare", 00:29:00.652 "uuid": "67ebe090-b1a0-5142-9741-019b3eef9262", 00:29:00.652 "is_configured": true, 00:29:00.652 "data_offset": 2048, 00:29:00.652 "data_size": 63488 00:29:00.652 }, 00:29:00.652 { 00:29:00.652 "name": null, 00:29:00.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:00.652 "is_configured": false, 00:29:00.652 "data_offset": 2048, 00:29:00.652 "data_size": 63488 00:29:00.652 }, 00:29:00.652 { 00:29:00.652 "name": "BaseBdev3", 00:29:00.652 "uuid": "364c3dee-d45b-58f7-962d-d89db6ba8157", 
00:29:00.652 "is_configured": true, 00:29:00.652 "data_offset": 2048, 00:29:00.652 "data_size": 63488 00:29:00.652 }, 00:29:00.652 { 00:29:00.652 "name": "BaseBdev4", 00:29:00.652 "uuid": "74e42a99-0296-5ce6-a67f-bb3ba8645774", 00:29:00.652 "is_configured": true, 00:29:00.652 "data_offset": 2048, 00:29:00.652 "data_size": 63488 00:29:00.652 } 00:29:00.652 ] 00:29:00.652 }' 00:29:00.652 02:34:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:00.911 02:34:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:00.911 02:34:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:00.911 02:34:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:00.911 02:34:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:00.911 02:34:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:01.170 02:34:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:29:01.170 02:34:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:01.429 [2024-07-11 02:34:51.646319] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:01.429 02:34:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:01.429 02:34:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:01.429 02:34:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:01.429 02:34:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:01.429 02:34:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:01.429 02:34:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:01.429 02:34:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:01.429 02:34:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:01.429 02:34:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:01.429 02:34:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:01.430 02:34:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:01.430 02:34:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:01.689 02:34:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:01.689 "name": "raid_bdev1", 00:29:01.689 "uuid": "8b7753d5-03aa-4871-8b97-d11df120db6f", 00:29:01.689 "strip_size_kb": 0, 00:29:01.689 "state": "online", 00:29:01.689 "raid_level": "raid1", 00:29:01.689 "superblock": true, 00:29:01.689 "num_base_bdevs": 4, 00:29:01.689 "num_base_bdevs_discovered": 2, 00:29:01.689 "num_base_bdevs_operational": 2, 00:29:01.689 "base_bdevs_list": [ 00:29:01.689 { 00:29:01.689 "name": null, 00:29:01.690 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:29:01.690 "is_configured": false, 00:29:01.690 "data_offset": 2048, 00:29:01.690 "data_size": 63488 00:29:01.690 }, 00:29:01.690 { 00:29:01.690 "name": null, 00:29:01.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:01.690 "is_configured": false, 00:29:01.690 "data_offset": 2048, 00:29:01.690 "data_size": 63488 00:29:01.690 }, 00:29:01.690 { 00:29:01.690 "name": "BaseBdev3", 00:29:01.690 "uuid": "364c3dee-d45b-58f7-962d-d89db6ba8157", 00:29:01.690 "is_configured": true, 00:29:01.690 "data_offset": 2048, 00:29:01.690 "data_size": 63488 00:29:01.690 }, 00:29:01.690 { 00:29:01.690 "name": "BaseBdev4", 00:29:01.690 "uuid": "74e42a99-0296-5ce6-a67f-bb3ba8645774", 00:29:01.690 "is_configured": true, 00:29:01.690 "data_offset": 2048, 00:29:01.690 "data_size": 63488 00:29:01.690 } 00:29:01.690 ] 00:29:01.690 }' 00:29:01.690 02:34:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:01.690 02:34:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:02.258 02:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:02.517 [2024-07-11 02:34:52.701149] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:02.517 [2024-07-11 02:34:52.701288] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:29:02.517 [2024-07-11 02:34:52.701304] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:02.517 [2024-07-11 02:34:52.701333] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:02.517 [2024-07-11 02:34:52.705125] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa66a00 00:29:02.517 [2024-07-11 02:34:52.707371] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:02.517 02:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:29:03.453 02:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:03.453 02:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:03.453 02:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:03.453 02:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:03.453 02:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:03.453 02:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:03.453 02:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:03.711 02:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:03.711 "name": "raid_bdev1", 00:29:03.711 "uuid": "8b7753d5-03aa-4871-8b97-d11df120db6f", 00:29:03.711 "strip_size_kb": 0, 00:29:03.711 "state": "online", 00:29:03.711 "raid_level": "raid1", 00:29:03.711 "superblock": true, 00:29:03.711 "num_base_bdevs": 4, 00:29:03.711 "num_base_bdevs_discovered": 3, 00:29:03.711 "num_base_bdevs_operational": 3, 00:29:03.711 "process": 
{ 00:29:03.711 "type": "rebuild", 00:29:03.711 "target": "spare", 00:29:03.711 "progress": { 00:29:03.711 "blocks": 24576, 00:29:03.711 "percent": 38 00:29:03.711 } 00:29:03.711 }, 00:29:03.711 "base_bdevs_list": [ 00:29:03.711 { 00:29:03.711 "name": "spare", 00:29:03.711 "uuid": "67ebe090-b1a0-5142-9741-019b3eef9262", 00:29:03.711 "is_configured": true, 00:29:03.711 "data_offset": 2048, 00:29:03.711 "data_size": 63488 00:29:03.711 }, 00:29:03.711 { 00:29:03.711 "name": null, 00:29:03.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:03.711 "is_configured": false, 00:29:03.711 "data_offset": 2048, 00:29:03.711 "data_size": 63488 00:29:03.711 }, 00:29:03.711 { 00:29:03.711 "name": "BaseBdev3", 00:29:03.711 "uuid": "364c3dee-d45b-58f7-962d-d89db6ba8157", 00:29:03.711 "is_configured": true, 00:29:03.711 "data_offset": 2048, 00:29:03.711 "data_size": 63488 00:29:03.711 }, 00:29:03.711 { 00:29:03.711 "name": "BaseBdev4", 00:29:03.711 "uuid": "74e42a99-0296-5ce6-a67f-bb3ba8645774", 00:29:03.711 "is_configured": true, 00:29:03.711 "data_offset": 2048, 00:29:03.711 "data_size": 63488 00:29:03.711 } 00:29:03.711 ] 00:29:03.711 }' 00:29:03.711 02:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:03.711 02:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:03.711 02:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:03.711 02:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:03.711 02:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:03.969 [2024-07-11 02:34:54.318539] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:03.969 [2024-07-11 02:34:54.319865] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:03.969 [2024-07-11 02:34:54.319906] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:03.969 [2024-07-11 02:34:54.319922] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:03.969 [2024-07-11 02:34:54.319930] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:03.969 02:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:03.969 02:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:03.969 02:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:03.969 02:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:03.969 02:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:03.969 02:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:03.969 02:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:03.969 02:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:03.969 02:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:03.969 02:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:03.969 
02:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:03.969 02:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:04.228 02:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:04.228 "name": "raid_bdev1", 00:29:04.228 "uuid": "8b7753d5-03aa-4871-8b97-d11df120db6f", 00:29:04.228 "strip_size_kb": 0, 00:29:04.228 "state": "online", 00:29:04.228 "raid_level": "raid1", 00:29:04.228 "superblock": true, 00:29:04.228 "num_base_bdevs": 4, 00:29:04.228 "num_base_bdevs_discovered": 2, 00:29:04.228 "num_base_bdevs_operational": 2, 00:29:04.228 "base_bdevs_list": [ 00:29:04.228 { 00:29:04.228 "name": null, 00:29:04.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:04.228 "is_configured": false, 00:29:04.228 "data_offset": 2048, 00:29:04.228 "data_size": 63488 00:29:04.228 }, 00:29:04.228 { 00:29:04.228 "name": null, 00:29:04.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:04.228 "is_configured": false, 00:29:04.228 "data_offset": 2048, 00:29:04.228 "data_size": 63488 00:29:04.228 }, 00:29:04.228 { 00:29:04.228 "name": "BaseBdev3", 00:29:04.228 "uuid": "364c3dee-d45b-58f7-962d-d89db6ba8157", 00:29:04.228 "is_configured": true, 00:29:04.228 "data_offset": 2048, 00:29:04.228 "data_size": 63488 00:29:04.228 }, 00:29:04.228 { 00:29:04.228 "name": "BaseBdev4", 00:29:04.228 "uuid": "74e42a99-0296-5ce6-a67f-bb3ba8645774", 00:29:04.228 "is_configured": true, 00:29:04.228 "data_offset": 2048, 00:29:04.228 "data_size": 63488 00:29:04.228 } 00:29:04.228 ] 00:29:04.228 }' 00:29:04.228 02:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:04.228 02:34:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:04.795 02:34:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:05.054 [2024-07-11 02:34:55.422681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:05.054 [2024-07-11 02:34:55.422731] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:05.054 [2024-07-11 02:34:55.422753] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc074f0 00:29:05.054 [2024-07-11 02:34:55.422771] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:05.054 [2024-07-11 02:34:55.423123] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:05.054 [2024-07-11 02:34:55.423142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:05.054 [2024-07-11 02:34:55.423219] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:05.054 [2024-07-11 02:34:55.423232] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:29:05.054 [2024-07-11 02:34:55.423248] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
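At this point the examine path re-adds spare because the superblock it carries (seq_number 5) is one generation behind the raid bdev's (6), and a fresh rebuild is kicked off. Between the sleep 1 at @762 and the @189/@190 checks that follow, the test is effectively waiting for that rebuild to appear and run to completion. A sketch of an equivalent wait loop over the same RPC (the 60-second bound is illustrative only; the script derives its own bound from SECONDS, as at @705 earlier):

rpc_py="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
deadline=$((SECONDS + 60))  # illustrative; @705 computes its own timeout

# Poll raid_bdev1 until no background process is reported, i.e. the
# rebuild triggered by the re-add has finished.
while (( SECONDS < deadline )); do
	ptype=$($rpc_py bdev_raid_get_bdevs all |
		jq -r '.[] | select(.name == "raid_bdev1").process.type // "none"')
	[[ $ptype == none ]] && break
	sleep 1
done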
00:29:05.054 [2024-07-11 02:34:55.423267] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:05.054 [2024-07-11 02:34:55.427113] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc07b00 00:29:05.054 [2024-07-11 02:34:55.428596] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:05.054 spare 00:29:05.054 02:34:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:29:06.431 02:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:06.431 02:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:06.431 02:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:06.431 02:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:06.431 02:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:06.431 02:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:06.431 02:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:06.431 02:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:06.431 "name": "raid_bdev1", 00:29:06.431 "uuid": "8b7753d5-03aa-4871-8b97-d11df120db6f", 00:29:06.431 "strip_size_kb": 0, 00:29:06.431 "state": "online", 00:29:06.431 "raid_level": "raid1", 00:29:06.431 "superblock": true, 00:29:06.431 "num_base_bdevs": 4, 00:29:06.431 "num_base_bdevs_discovered": 3, 00:29:06.431 "num_base_bdevs_operational": 3, 00:29:06.431 "process": { 00:29:06.431 "type": "rebuild", 00:29:06.431 "target": "spare", 00:29:06.431 "progress": { 00:29:06.431 "blocks": 22528, 00:29:06.431 "percent": 35 00:29:06.431 } 00:29:06.431 }, 00:29:06.431 "base_bdevs_list": [ 00:29:06.431 { 00:29:06.431 "name": "spare", 00:29:06.431 "uuid": "67ebe090-b1a0-5142-9741-019b3eef9262", 00:29:06.431 "is_configured": true, 00:29:06.431 "data_offset": 2048, 00:29:06.431 "data_size": 63488 00:29:06.431 }, 00:29:06.431 { 00:29:06.431 "name": null, 00:29:06.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:06.431 "is_configured": false, 00:29:06.431 "data_offset": 2048, 00:29:06.431 "data_size": 63488 00:29:06.431 }, 00:29:06.431 { 00:29:06.431 "name": "BaseBdev3", 00:29:06.431 "uuid": "364c3dee-d45b-58f7-962d-d89db6ba8157", 00:29:06.431 "is_configured": true, 00:29:06.431 "data_offset": 2048, 00:29:06.431 "data_size": 63488 00:29:06.431 }, 00:29:06.431 { 00:29:06.431 "name": "BaseBdev4", 00:29:06.431 "uuid": "74e42a99-0296-5ce6-a67f-bb3ba8645774", 00:29:06.431 "is_configured": true, 00:29:06.431 "data_offset": 2048, 00:29:06.431 "data_size": 63488 00:29:06.431 } 00:29:06.431 ] 00:29:06.431 }' 00:29:06.431 02:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:06.431 02:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:06.432 02:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:06.432 02:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:06.432 02:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:06.690 [2024-07-11 02:34:56.980072] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:06.690 [2024-07-11 02:34:57.041213] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:06.690 [2024-07-11 02:34:57.041257] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:06.690 [2024-07-11 02:34:57.041273] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:06.690 [2024-07-11 02:34:57.041281] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:06.690 02:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:06.690 02:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:06.690 02:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:06.691 02:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:06.691 02:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:06.691 02:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:06.691 02:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:06.691 02:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:06.691 02:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:06.691 02:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:06.691 02:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:06.691 02:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:06.949 02:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:06.949 "name": "raid_bdev1", 00:29:06.949 "uuid": "8b7753d5-03aa-4871-8b97-d11df120db6f", 00:29:06.949 "strip_size_kb": 0, 00:29:06.949 "state": "online", 00:29:06.949 "raid_level": "raid1", 00:29:06.949 "superblock": true, 00:29:06.949 "num_base_bdevs": 4, 00:29:06.949 "num_base_bdevs_discovered": 2, 00:29:06.949 "num_base_bdevs_operational": 2, 00:29:06.949 "base_bdevs_list": [ 00:29:06.949 { 00:29:06.949 "name": null, 00:29:06.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:06.949 "is_configured": false, 00:29:06.949 "data_offset": 2048, 00:29:06.949 "data_size": 63488 00:29:06.949 }, 00:29:06.950 { 00:29:06.950 "name": null, 00:29:06.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:06.950 "is_configured": false, 00:29:06.950 "data_offset": 2048, 00:29:06.950 "data_size": 63488 00:29:06.950 }, 00:29:06.950 { 00:29:06.950 "name": "BaseBdev3", 00:29:06.950 "uuid": "364c3dee-d45b-58f7-962d-d89db6ba8157", 00:29:06.950 "is_configured": true, 00:29:06.950 "data_offset": 2048, 00:29:06.950 "data_size": 63488 00:29:06.950 }, 00:29:06.950 { 00:29:06.950 "name": "BaseBdev4", 00:29:06.950 "uuid": "74e42a99-0296-5ce6-a67f-bb3ba8645774", 00:29:06.950 "is_configured": true, 00:29:06.950 "data_offset": 2048, 00:29:06.950 "data_size": 63488 
00:29:06.950 } 00:29:06.950 ] 00:29:06.950 }' 00:29:06.950 02:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:06.950 02:34:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:07.517 02:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:07.517 02:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:07.517 02:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:07.517 02:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:07.517 02:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:07.517 02:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:07.517 02:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:07.776 02:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:07.776 "name": "raid_bdev1", 00:29:07.776 "uuid": "8b7753d5-03aa-4871-8b97-d11df120db6f", 00:29:07.776 "strip_size_kb": 0, 00:29:07.776 "state": "online", 00:29:07.776 "raid_level": "raid1", 00:29:07.776 "superblock": true, 00:29:07.776 "num_base_bdevs": 4, 00:29:07.776 "num_base_bdevs_discovered": 2, 00:29:07.776 "num_base_bdevs_operational": 2, 00:29:07.776 "base_bdevs_list": [ 00:29:07.776 { 00:29:07.776 "name": null, 00:29:07.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:07.776 "is_configured": false, 00:29:07.776 "data_offset": 2048, 00:29:07.776 "data_size": 63488 00:29:07.776 }, 00:29:07.776 { 00:29:07.776 "name": null, 00:29:07.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:07.776 "is_configured": false, 00:29:07.776 "data_offset": 2048, 00:29:07.776 "data_size": 63488 00:29:07.776 }, 00:29:07.776 { 00:29:07.776 "name": "BaseBdev3", 00:29:07.776 "uuid": "364c3dee-d45b-58f7-962d-d89db6ba8157", 00:29:07.776 "is_configured": true, 00:29:07.776 "data_offset": 2048, 00:29:07.776 "data_size": 63488 00:29:07.776 }, 00:29:07.776 { 00:29:07.776 "name": "BaseBdev4", 00:29:07.776 "uuid": "74e42a99-0296-5ce6-a67f-bb3ba8645774", 00:29:07.776 "is_configured": true, 00:29:07.776 "data_offset": 2048, 00:29:07.776 "data_size": 63488 00:29:07.776 } 00:29:07.776 ] 00:29:07.776 }' 00:29:07.776 02:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:08.034 02:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:08.034 02:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:08.034 02:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:08.034 02:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:29:08.293 02:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:08.293 [2024-07-11 02:34:58.617944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:08.293 
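BaseBdev1 is being recreated here on top of BaseBdev1_malloc, but the superblock it brings back is stale: its seq_number is 1 against the array's 6, and the raid superblock no longer lists this bdev's uuid. Re-adding it therefore has to fail, which is what the NOT wrapper at @776 below asserts. Written out directly, the same assertion looks like this sketch (same RPC as the trace; the expected error comes from the JSON-RPC response shown below):

rpc_py="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Must be rejected with JSON-RPC error -22 (Invalid argument): the raid
# superblock does not contain BaseBdev1's uuid any more.
if $rpc_py bdev_raid_add_base_bdev raid_bdev1 BaseBdev1; then
	echo "ERROR: stale BaseBdev1 was unexpectedly re-added to raid_bdev1" >&2
	exit 1
fi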
[2024-07-11 02:34:58.617990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:08.293 [2024-07-11 02:34:58.618011] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc19220 00:29:08.293 [2024-07-11 02:34:58.618023] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:08.293 [2024-07-11 02:34:58.618343] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:08.293 [2024-07-11 02:34:58.618362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:08.293 [2024-07-11 02:34:58.618421] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:29:08.293 [2024-07-11 02:34:58.618433] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:29:08.293 [2024-07-11 02:34:58.618444] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:08.293 BaseBdev1 00:29:08.293 02:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:29:09.668 02:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:09.668 02:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:09.668 02:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:09.668 02:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:09.668 02:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:09.668 02:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:09.668 02:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:09.668 02:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:09.668 02:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:09.668 02:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:09.668 02:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:09.668 02:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:09.668 02:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:09.668 "name": "raid_bdev1", 00:29:09.668 "uuid": "8b7753d5-03aa-4871-8b97-d11df120db6f", 00:29:09.668 "strip_size_kb": 0, 00:29:09.668 "state": "online", 00:29:09.668 "raid_level": "raid1", 00:29:09.668 "superblock": true, 00:29:09.668 "num_base_bdevs": 4, 00:29:09.668 "num_base_bdevs_discovered": 2, 00:29:09.668 "num_base_bdevs_operational": 2, 00:29:09.668 "base_bdevs_list": [ 00:29:09.668 { 00:29:09.668 "name": null, 00:29:09.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:09.668 "is_configured": false, 00:29:09.668 "data_offset": 2048, 00:29:09.669 "data_size": 63488 00:29:09.669 }, 00:29:09.669 { 00:29:09.669 "name": null, 00:29:09.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:09.669 "is_configured": false, 00:29:09.669 "data_offset": 2048, 00:29:09.669 "data_size": 63488 00:29:09.669 }, 00:29:09.669 { 00:29:09.669 
"name": "BaseBdev3", 00:29:09.669 "uuid": "364c3dee-d45b-58f7-962d-d89db6ba8157", 00:29:09.669 "is_configured": true, 00:29:09.669 "data_offset": 2048, 00:29:09.669 "data_size": 63488 00:29:09.669 }, 00:29:09.669 { 00:29:09.669 "name": "BaseBdev4", 00:29:09.669 "uuid": "74e42a99-0296-5ce6-a67f-bb3ba8645774", 00:29:09.669 "is_configured": true, 00:29:09.669 "data_offset": 2048, 00:29:09.669 "data_size": 63488 00:29:09.669 } 00:29:09.669 ] 00:29:09.669 }' 00:29:09.669 02:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:09.669 02:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:10.236 02:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:10.236 02:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:10.236 02:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:10.236 02:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:10.236 02:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:10.236 02:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:10.236 02:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:10.495 02:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:10.495 "name": "raid_bdev1", 00:29:10.495 "uuid": "8b7753d5-03aa-4871-8b97-d11df120db6f", 00:29:10.495 "strip_size_kb": 0, 00:29:10.495 "state": "online", 00:29:10.495 "raid_level": "raid1", 00:29:10.495 "superblock": true, 00:29:10.495 "num_base_bdevs": 4, 00:29:10.495 "num_base_bdevs_discovered": 2, 00:29:10.495 "num_base_bdevs_operational": 2, 00:29:10.495 "base_bdevs_list": [ 00:29:10.495 { 00:29:10.495 "name": null, 00:29:10.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:10.495 "is_configured": false, 00:29:10.495 "data_offset": 2048, 00:29:10.495 "data_size": 63488 00:29:10.495 }, 00:29:10.495 { 00:29:10.495 "name": null, 00:29:10.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:10.495 "is_configured": false, 00:29:10.495 "data_offset": 2048, 00:29:10.495 "data_size": 63488 00:29:10.495 }, 00:29:10.495 { 00:29:10.495 "name": "BaseBdev3", 00:29:10.496 "uuid": "364c3dee-d45b-58f7-962d-d89db6ba8157", 00:29:10.496 "is_configured": true, 00:29:10.496 "data_offset": 2048, 00:29:10.496 "data_size": 63488 00:29:10.496 }, 00:29:10.496 { 00:29:10.496 "name": "BaseBdev4", 00:29:10.496 "uuid": "74e42a99-0296-5ce6-a67f-bb3ba8645774", 00:29:10.496 "is_configured": true, 00:29:10.496 "data_offset": 2048, 00:29:10.496 "data_size": 63488 00:29:10.496 } 00:29:10.496 ] 00:29:10.496 }' 00:29:10.496 02:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:10.496 02:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:10.496 02:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:10.496 02:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:10.496 02:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:10.496 02:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:29:10.496 02:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:10.496 02:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:10.496 02:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:10.496 02:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:10.496 02:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:10.496 02:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:10.496 02:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:10.496 02:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:10.496 02:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:29:10.496 02:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:10.754 [2024-07-11 02:35:00.984302] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:10.754 [2024-07-11 02:35:00.984419] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:29:10.754 [2024-07-11 02:35:00.984435] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:10.754 request: 00:29:10.754 { 00:29:10.754 "base_bdev": "BaseBdev1", 00:29:10.754 "raid_bdev": "raid_bdev1", 00:29:10.754 "method": "bdev_raid_add_base_bdev", 00:29:10.754 "req_id": 1 00:29:10.754 } 00:29:10.754 Got JSON-RPC error response 00:29:10.754 response: 00:29:10.754 { 00:29:10.754 "code": -22, 00:29:10.754 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:29:10.754 } 00:29:10.754 02:35:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:29:10.754 02:35:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:10.754 02:35:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:10.754 02:35:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:10.754 02:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:29:11.695 02:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:11.695 02:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:11.695 02:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:11.695 02:35:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:11.695 02:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:11.695 02:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:11.695 02:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:11.695 02:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:11.695 02:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:11.695 02:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:11.695 02:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:11.695 02:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:11.954 02:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:11.954 "name": "raid_bdev1", 00:29:11.954 "uuid": "8b7753d5-03aa-4871-8b97-d11df120db6f", 00:29:11.954 "strip_size_kb": 0, 00:29:11.954 "state": "online", 00:29:11.954 "raid_level": "raid1", 00:29:11.954 "superblock": true, 00:29:11.954 "num_base_bdevs": 4, 00:29:11.954 "num_base_bdevs_discovered": 2, 00:29:11.954 "num_base_bdevs_operational": 2, 00:29:11.954 "base_bdevs_list": [ 00:29:11.954 { 00:29:11.954 "name": null, 00:29:11.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:11.954 "is_configured": false, 00:29:11.954 "data_offset": 2048, 00:29:11.954 "data_size": 63488 00:29:11.954 }, 00:29:11.954 { 00:29:11.954 "name": null, 00:29:11.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:11.954 "is_configured": false, 00:29:11.954 "data_offset": 2048, 00:29:11.954 "data_size": 63488 00:29:11.954 }, 00:29:11.954 { 00:29:11.954 "name": "BaseBdev3", 00:29:11.954 "uuid": "364c3dee-d45b-58f7-962d-d89db6ba8157", 00:29:11.954 "is_configured": true, 00:29:11.954 "data_offset": 2048, 00:29:11.954 "data_size": 63488 00:29:11.954 }, 00:29:11.954 { 00:29:11.954 "name": "BaseBdev4", 00:29:11.954 "uuid": "74e42a99-0296-5ce6-a67f-bb3ba8645774", 00:29:11.954 "is_configured": true, 00:29:11.954 "data_offset": 2048, 00:29:11.954 "data_size": 63488 00:29:11.954 } 00:29:11.954 ] 00:29:11.954 }' 00:29:11.954 02:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:11.954 02:35:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:12.523 02:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:12.523 02:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:12.523 02:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:12.523 02:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:12.523 02:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:12.523 02:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:12.523 02:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:12.782 
02:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:12.782 "name": "raid_bdev1", 00:29:12.782 "uuid": "8b7753d5-03aa-4871-8b97-d11df120db6f", 00:29:12.782 "strip_size_kb": 0, 00:29:12.782 "state": "online", 00:29:12.782 "raid_level": "raid1", 00:29:12.782 "superblock": true, 00:29:12.782 "num_base_bdevs": 4, 00:29:12.782 "num_base_bdevs_discovered": 2, 00:29:12.782 "num_base_bdevs_operational": 2, 00:29:12.782 "base_bdevs_list": [ 00:29:12.782 { 00:29:12.782 "name": null, 00:29:12.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:12.782 "is_configured": false, 00:29:12.782 "data_offset": 2048, 00:29:12.782 "data_size": 63488 00:29:12.782 }, 00:29:12.782 { 00:29:12.782 "name": null, 00:29:12.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:12.782 "is_configured": false, 00:29:12.782 "data_offset": 2048, 00:29:12.782 "data_size": 63488 00:29:12.782 }, 00:29:12.782 { 00:29:12.782 "name": "BaseBdev3", 00:29:12.782 "uuid": "364c3dee-d45b-58f7-962d-d89db6ba8157", 00:29:12.782 "is_configured": true, 00:29:12.782 "data_offset": 2048, 00:29:12.782 "data_size": 63488 00:29:12.782 }, 00:29:12.782 { 00:29:12.782 "name": "BaseBdev4", 00:29:12.782 "uuid": "74e42a99-0296-5ce6-a67f-bb3ba8645774", 00:29:12.782 "is_configured": true, 00:29:12.782 "data_offset": 2048, 00:29:12.782 "data_size": 63488 00:29:12.782 } 00:29:12.782 ] 00:29:12.782 }' 00:29:12.782 02:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:12.782 02:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:12.782 02:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:12.782 02:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:12.782 02:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 2026890 00:29:12.782 02:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2026890 ']' 00:29:12.782 02:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 2026890 00:29:12.782 02:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:29:12.782 02:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:12.782 02:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2026890 00:29:13.041 02:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:13.041 02:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:13.041 02:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2026890' 00:29:13.041 killing process with pid 2026890 00:29:13.041 02:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 2026890 00:29:13.041 Received shutdown signal, test time was about 60.000000 seconds 00:29:13.041 00:29:13.041 Latency(us) 00:29:13.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.041 =================================================================================================================== 00:29:13.041 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:13.041 [2024-07-11 02:35:03.211230] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:13.041 [2024-07-11 
02:35:03.211317] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:13.041 [2024-07-11 02:35:03.211371] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:13.041 [2024-07-11 02:35:03.211383] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc07080 name raid_bdev1, state offline 00:29:13.041 02:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 2026890 00:29:13.041 [2024-07-11 02:35:03.256095] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:13.041 02:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:29:13.041 00:29:13.041 real 0m38.730s 00:29:13.041 user 0m55.617s 00:29:13.041 sys 0m7.354s 00:29:13.041 02:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:13.041 02:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:13.041 ************************************ 00:29:13.041 END TEST raid_rebuild_test_sb 00:29:13.041 ************************************ 00:29:13.300 02:35:03 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:29:13.300 02:35:03 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:29:13.300 02:35:03 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:29:13.300 02:35:03 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:13.300 02:35:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:13.300 ************************************ 00:29:13.300 START TEST raid_rebuild_test_io 00:29:13.300 ************************************ 00:29:13.300 02:35:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 false true true 00:29:13.300 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:29:13.300 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:29:13.300 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:29:13.300 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:29:13.300 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:29:13.300 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:29:13.300 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:13.300 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:29:13.300 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:13.300 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:13.301 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:29:13.301 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:13.301 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:13.301 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:29:13.301 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:13.301 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:13.301 02:35:03 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:29:13.301 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:13.301 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:13.301 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:13.301 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:29:13.301 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:29:13.301 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:29:13.301 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:29:13.301 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:29:13.301 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:29:13.301 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:29:13.301 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:29:13.301 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:29:13.301 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=2032323 00:29:13.301 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 2032323 /var/tmp/spdk-raid.sock 00:29:13.301 02:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:13.301 02:35:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@829 -- # '[' -z 2032323 ']' 00:29:13.301 02:35:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:13.301 02:35:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:13.301 02:35:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:13.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:13.301 02:35:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:13.301 02:35:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:13.301 [2024-07-11 02:35:03.596577] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:29:13.301 [2024-07-11 02:35:03.596648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2032323 ] 00:29:13.301 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:13.301 Zero copy mechanism will not be used. 
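The waitforlisten step above blocks until the bdevperf app just launched with -r /var/tmp/spdk-raid.sock is actually accepting JSON-RPC connections on that socket. A minimal sketch of the same poll-until-ready pattern (wait_for_rpc_sock is a hypothetical name; the real helper in autotest_common.sh layers extra retry and teardown logic on top of this) might look like:

    # Sketch only: block until the SPDK app behind $sock answers RPCs, or give up.
    wait_for_rpc_sock() {
        local pid=$1 sock=${2:-/var/tmp/spdk-raid.sock} retries=100
        while ((retries-- > 0)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died while starting up
            # rpc_get_methods succeeds once the JSON-RPC server is listening
            scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }

Called as wait_for_rpc_sock "$raid_pid" /var/tmp/spdk-raid.sock before any bdev_* RPCs are issued against the socket.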
00:29:13.559 [2024-07-11 02:35:03.737246] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.559 [2024-07-11 02:35:03.789246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.559 [2024-07-11 02:35:03.850745] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:13.559 [2024-07-11 02:35:03.850787] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:14.126 02:35:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:14.126 02:35:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # return 0 00:29:14.126 02:35:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:14.126 02:35:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:14.385 BaseBdev1_malloc 00:29:14.385 02:35:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:14.643 [2024-07-11 02:35:04.942439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:14.643 [2024-07-11 02:35:04.942487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:14.643 [2024-07-11 02:35:04.942509] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf27ee0 00:29:14.643 [2024-07-11 02:35:04.942522] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:14.643 [2024-07-11 02:35:04.944063] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:14.643 [2024-07-11 02:35:04.944093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:14.643 BaseBdev1 00:29:14.643 02:35:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:14.643 02:35:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:14.901 BaseBdev2_malloc 00:29:14.901 02:35:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:15.160 [2024-07-11 02:35:05.380690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:15.160 [2024-07-11 02:35:05.380734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:15.160 [2024-07-11 02:35:05.380754] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf29870 00:29:15.160 [2024-07-11 02:35:05.380776] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:15.160 [2024-07-11 02:35:05.382110] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:15.160 [2024-07-11 02:35:05.382138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:15.160 BaseBdev2 00:29:15.160 02:35:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:15.160 02:35:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:15.160 BaseBdev3_malloc 00:29:15.419 02:35:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:15.419 [2024-07-11 02:35:05.730154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:15.419 [2024-07-11 02:35:05.730197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:15.419 [2024-07-11 02:35:05.730217] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf20b20 00:29:15.419 [2024-07-11 02:35:05.730229] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:15.419 [2024-07-11 02:35:05.731549] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:15.419 [2024-07-11 02:35:05.731576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:15.419 BaseBdev3 00:29:15.419 02:35:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:15.419 02:35:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:15.677 BaseBdev4_malloc 00:29:15.678 02:35:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:15.678 [2024-07-11 02:35:06.079437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:15.678 [2024-07-11 02:35:06.079478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:15.678 [2024-07-11 02:35:06.079499] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf248d0 00:29:15.678 [2024-07-11 02:35:06.079511] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:15.678 [2024-07-11 02:35:06.080842] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:15.678 [2024-07-11 02:35:06.080870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:15.678 BaseBdev4 00:29:15.937 02:35:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:15.937 spare_malloc 00:29:15.937 02:35:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:16.196 spare_delay 00:29:16.196 02:35:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:16.455 [2024-07-11 02:35:06.765633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:16.455 [2024-07-11 02:35:06.765677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:16.455 [2024-07-11 02:35:06.765698] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd768a0 00:29:16.455 [2024-07-11 02:35:06.765711] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:16.455 [2024-07-11 02:35:06.767076] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:16.455 [2024-07-11 02:35:06.767105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:16.455 spare 00:29:16.455 02:35:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:29:16.714 [2024-07-11 02:35:06.942123] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:16.714 [2024-07-11 02:35:06.943232] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:16.714 [2024-07-11 02:35:06.943281] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:16.714 [2024-07-11 02:35:06.943327] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:16.714 [2024-07-11 02:35:06.943402] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xd77300 00:29:16.714 [2024-07-11 02:35:06.943412] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:29:16.714 [2024-07-11 02:35:06.943592] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf18dd0 00:29:16.714 [2024-07-11 02:35:06.943729] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd77300 00:29:16.714 [2024-07-11 02:35:06.943739] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd77300 00:29:16.714 [2024-07-11 02:35:06.943851] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:16.714 02:35:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:16.714 02:35:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:16.715 02:35:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:16.715 02:35:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:16.715 02:35:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:16.715 02:35:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:16.715 02:35:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:16.715 02:35:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:16.715 02:35:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:16.715 02:35:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:16.715 02:35:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:16.715 02:35:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:16.974 02:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:16.974 "name": "raid_bdev1", 00:29:16.974 "uuid": 
"d61106a7-9caa-4d3f-8c2e-604c8055e8d0", 00:29:16.974 "strip_size_kb": 0, 00:29:16.974 "state": "online", 00:29:16.974 "raid_level": "raid1", 00:29:16.974 "superblock": false, 00:29:16.974 "num_base_bdevs": 4, 00:29:16.974 "num_base_bdevs_discovered": 4, 00:29:16.974 "num_base_bdevs_operational": 4, 00:29:16.974 "base_bdevs_list": [ 00:29:16.974 { 00:29:16.974 "name": "BaseBdev1", 00:29:16.974 "uuid": "e4bb40e9-91a7-50bd-8a27-564e5d93039b", 00:29:16.974 "is_configured": true, 00:29:16.974 "data_offset": 0, 00:29:16.974 "data_size": 65536 00:29:16.974 }, 00:29:16.974 { 00:29:16.974 "name": "BaseBdev2", 00:29:16.974 "uuid": "d4a89dde-228a-56ab-a745-66ce0034324c", 00:29:16.974 "is_configured": true, 00:29:16.974 "data_offset": 0, 00:29:16.974 "data_size": 65536 00:29:16.974 }, 00:29:16.974 { 00:29:16.974 "name": "BaseBdev3", 00:29:16.974 "uuid": "0a5f1fd2-51de-5cb9-b0df-dd5af1230179", 00:29:16.974 "is_configured": true, 00:29:16.974 "data_offset": 0, 00:29:16.974 "data_size": 65536 00:29:16.974 }, 00:29:16.974 { 00:29:16.974 "name": "BaseBdev4", 00:29:16.974 "uuid": "48663af7-d55c-52e6-8350-4215ff2df9bb", 00:29:16.974 "is_configured": true, 00:29:16.974 "data_offset": 0, 00:29:16.974 "data_size": 65536 00:29:16.974 } 00:29:16.974 ] 00:29:16.974 }' 00:29:16.974 02:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:16.974 02:35:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:17.541 02:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:17.541 02:35:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:29:17.800 [2024-07-11 02:35:08.041358] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:17.800 02:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:29:17.800 02:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:17.800 02:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:18.059 02:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:29:18.059 02:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:29:18.059 02:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:18.059 02:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:29:18.059 [2024-07-11 02:35:08.424078] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf174a0 00:29:18.059 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:18.059 Zero copy mechanism will not be used. 00:29:18.059 Running I/O for 60 seconds... 
00:29:18.319 [2024-07-11 02:35:08.550309] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:18.319 [2024-07-11 02:35:08.558480] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0xf174a0 00:29:18.319 02:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:18.319 02:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:18.319 02:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:18.319 02:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:18.319 02:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:18.319 02:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:18.319 02:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:18.319 02:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:18.319 02:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:18.319 02:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:18.319 02:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:18.319 02:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:18.578 02:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:18.578 "name": "raid_bdev1", 00:29:18.578 "uuid": "d61106a7-9caa-4d3f-8c2e-604c8055e8d0", 00:29:18.578 "strip_size_kb": 0, 00:29:18.578 "state": "online", 00:29:18.578 "raid_level": "raid1", 00:29:18.578 "superblock": false, 00:29:18.578 "num_base_bdevs": 4, 00:29:18.578 "num_base_bdevs_discovered": 3, 00:29:18.578 "num_base_bdevs_operational": 3, 00:29:18.578 "base_bdevs_list": [ 00:29:18.578 { 00:29:18.578 "name": null, 00:29:18.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:18.578 "is_configured": false, 00:29:18.578 "data_offset": 0, 00:29:18.578 "data_size": 65536 00:29:18.578 }, 00:29:18.578 { 00:29:18.578 "name": "BaseBdev2", 00:29:18.578 "uuid": "d4a89dde-228a-56ab-a745-66ce0034324c", 00:29:18.578 "is_configured": true, 00:29:18.578 "data_offset": 0, 00:29:18.578 "data_size": 65536 00:29:18.578 }, 00:29:18.578 { 00:29:18.578 "name": "BaseBdev3", 00:29:18.578 "uuid": "0a5f1fd2-51de-5cb9-b0df-dd5af1230179", 00:29:18.578 "is_configured": true, 00:29:18.578 "data_offset": 0, 00:29:18.578 "data_size": 65536 00:29:18.578 }, 00:29:18.578 { 00:29:18.578 "name": "BaseBdev4", 00:29:18.578 "uuid": "48663af7-d55c-52e6-8350-4215ff2df9bb", 00:29:18.578 "is_configured": true, 00:29:18.578 "data_offset": 0, 00:29:18.578 "data_size": 65536 00:29:18.578 } 00:29:18.578 ] 00:29:18.578 }' 00:29:18.578 02:35:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:18.578 02:35:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:19.147 02:35:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:19.406 [2024-07-11 02:35:09.718018] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:19.406 02:35:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:29:19.406 [2024-07-11 02:35:09.800547] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf176d0 00:29:19.406 [2024-07-11 02:35:09.802940] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:19.665 [2024-07-11 02:35:09.934547] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:19.665 [2024-07-11 02:35:09.934853] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:19.665 [2024-07-11 02:35:10.056312] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:19.665 [2024-07-11 02:35:10.056622] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:20.234 [2024-07-11 02:35:10.405535] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:20.234 [2024-07-11 02:35:10.406085] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:20.234 [2024-07-11 02:35:10.611227] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:20.234 [2024-07-11 02:35:10.611880] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:20.493 02:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:20.493 02:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:20.493 02:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:20.493 02:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:20.493 02:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:20.493 02:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:20.493 02:35:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:20.771 02:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:20.771 "name": "raid_bdev1", 00:29:20.771 "uuid": "d61106a7-9caa-4d3f-8c2e-604c8055e8d0", 00:29:20.771 "strip_size_kb": 0, 00:29:20.771 "state": "online", 00:29:20.771 "raid_level": "raid1", 00:29:20.771 "superblock": false, 00:29:20.771 "num_base_bdevs": 4, 00:29:20.771 "num_base_bdevs_discovered": 4, 00:29:20.771 "num_base_bdevs_operational": 4, 00:29:20.771 "process": { 00:29:20.771 "type": "rebuild", 00:29:20.771 "target": "spare", 00:29:20.771 "progress": { 00:29:20.771 "blocks": 14336, 00:29:20.771 "percent": 21 00:29:20.771 } 00:29:20.771 }, 00:29:20.771 "base_bdevs_list": [ 00:29:20.771 { 00:29:20.771 "name": "spare", 00:29:20.771 "uuid": "c13810b2-0534-5a9b-ae66-2cdfca8aa087", 00:29:20.771 "is_configured": true, 00:29:20.771 "data_offset": 0, 00:29:20.771 "data_size": 65536 00:29:20.771 }, 00:29:20.771 { 00:29:20.771 "name": 
"BaseBdev2", 00:29:20.771 "uuid": "d4a89dde-228a-56ab-a745-66ce0034324c", 00:29:20.771 "is_configured": true, 00:29:20.771 "data_offset": 0, 00:29:20.771 "data_size": 65536 00:29:20.771 }, 00:29:20.771 { 00:29:20.771 "name": "BaseBdev3", 00:29:20.771 "uuid": "0a5f1fd2-51de-5cb9-b0df-dd5af1230179", 00:29:20.771 "is_configured": true, 00:29:20.771 "data_offset": 0, 00:29:20.771 "data_size": 65536 00:29:20.771 }, 00:29:20.771 { 00:29:20.771 "name": "BaseBdev4", 00:29:20.771 "uuid": "48663af7-d55c-52e6-8350-4215ff2df9bb", 00:29:20.771 "is_configured": true, 00:29:20.771 "data_offset": 0, 00:29:20.771 "data_size": 65536 00:29:20.771 } 00:29:20.771 ] 00:29:20.771 }' 00:29:20.771 02:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:20.771 02:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:20.771 02:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:20.771 02:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:20.771 02:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:21.089 [2024-07-11 02:35:11.355722] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:21.089 [2024-07-11 02:35:11.441551] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:29:21.351 [2024-07-11 02:35:11.560897] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:21.351 [2024-07-11 02:35:11.565471] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:21.351 [2024-07-11 02:35:11.565509] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:21.351 [2024-07-11 02:35:11.565520] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:21.351 [2024-07-11 02:35:11.598331] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0xf174a0 00:29:21.351 02:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:21.351 02:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:21.351 02:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:21.351 02:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:21.351 02:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:21.351 02:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:21.351 02:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:21.351 02:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:21.351 02:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:21.351 02:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:21.352 02:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:29:21.352 02:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:21.611 02:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:21.611 "name": "raid_bdev1", 00:29:21.611 "uuid": "d61106a7-9caa-4d3f-8c2e-604c8055e8d0", 00:29:21.611 "strip_size_kb": 0, 00:29:21.611 "state": "online", 00:29:21.611 "raid_level": "raid1", 00:29:21.611 "superblock": false, 00:29:21.611 "num_base_bdevs": 4, 00:29:21.611 "num_base_bdevs_discovered": 3, 00:29:21.611 "num_base_bdevs_operational": 3, 00:29:21.611 "base_bdevs_list": [ 00:29:21.611 { 00:29:21.611 "name": null, 00:29:21.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:21.611 "is_configured": false, 00:29:21.611 "data_offset": 0, 00:29:21.611 "data_size": 65536 00:29:21.611 }, 00:29:21.611 { 00:29:21.611 "name": "BaseBdev2", 00:29:21.611 "uuid": "d4a89dde-228a-56ab-a745-66ce0034324c", 00:29:21.611 "is_configured": true, 00:29:21.611 "data_offset": 0, 00:29:21.611 "data_size": 65536 00:29:21.611 }, 00:29:21.611 { 00:29:21.611 "name": "BaseBdev3", 00:29:21.611 "uuid": "0a5f1fd2-51de-5cb9-b0df-dd5af1230179", 00:29:21.611 "is_configured": true, 00:29:21.611 "data_offset": 0, 00:29:21.611 "data_size": 65536 00:29:21.611 }, 00:29:21.611 { 00:29:21.611 "name": "BaseBdev4", 00:29:21.611 "uuid": "48663af7-d55c-52e6-8350-4215ff2df9bb", 00:29:21.611 "is_configured": true, 00:29:21.611 "data_offset": 0, 00:29:21.611 "data_size": 65536 00:29:21.611 } 00:29:21.611 ] 00:29:21.611 }' 00:29:21.611 02:35:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:21.611 02:35:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:22.179 02:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:22.179 02:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:22.179 02:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:22.179 02:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:22.179 02:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:22.179 02:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:22.179 02:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:22.438 02:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:22.438 "name": "raid_bdev1", 00:29:22.438 "uuid": "d61106a7-9caa-4d3f-8c2e-604c8055e8d0", 00:29:22.438 "strip_size_kb": 0, 00:29:22.438 "state": "online", 00:29:22.438 "raid_level": "raid1", 00:29:22.438 "superblock": false, 00:29:22.438 "num_base_bdevs": 4, 00:29:22.438 "num_base_bdevs_discovered": 3, 00:29:22.438 "num_base_bdevs_operational": 3, 00:29:22.438 "base_bdevs_list": [ 00:29:22.438 { 00:29:22.439 "name": null, 00:29:22.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:22.439 "is_configured": false, 00:29:22.439 "data_offset": 0, 00:29:22.439 "data_size": 65536 00:29:22.439 }, 00:29:22.439 { 00:29:22.439 "name": "BaseBdev2", 00:29:22.439 "uuid": "d4a89dde-228a-56ab-a745-66ce0034324c", 00:29:22.439 "is_configured": true, 00:29:22.439 "data_offset": 0, 00:29:22.439 "data_size": 65536 00:29:22.439 }, 
00:29:22.439 { 00:29:22.439 "name": "BaseBdev3", 00:29:22.439 "uuid": "0a5f1fd2-51de-5cb9-b0df-dd5af1230179", 00:29:22.439 "is_configured": true, 00:29:22.439 "data_offset": 0, 00:29:22.439 "data_size": 65536 00:29:22.439 }, 00:29:22.439 { 00:29:22.439 "name": "BaseBdev4", 00:29:22.439 "uuid": "48663af7-d55c-52e6-8350-4215ff2df9bb", 00:29:22.439 "is_configured": true, 00:29:22.439 "data_offset": 0, 00:29:22.439 "data_size": 65536 00:29:22.439 } 00:29:22.439 ] 00:29:22.439 }' 00:29:22.439 02:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:22.439 02:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:22.439 02:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:22.439 02:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:22.439 02:35:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:22.698 [2024-07-11 02:35:13.045881] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:22.698 02:35:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:22.698 [2024-07-11 02:35:13.120401] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf1a4d0 00:29:22.698 [2024-07-11 02:35:13.121895] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:22.958 [2024-07-11 02:35:13.243328] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:22.958 [2024-07-11 02:35:13.243794] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:23.217 [2024-07-11 02:35:13.446754] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:23.217 [2024-07-11 02:35:13.446929] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:23.475 [2024-07-11 02:35:13.680735] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:23.475 [2024-07-11 02:35:13.681218] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:23.475 [2024-07-11 02:35:13.895389] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:23.733 02:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:23.733 02:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:23.733 02:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:23.734 02:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:23.734 02:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:23.734 02:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:23.734 02:35:14 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:23.992 02:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:23.992 "name": "raid_bdev1", 00:29:23.992 "uuid": "d61106a7-9caa-4d3f-8c2e-604c8055e8d0", 00:29:23.992 "strip_size_kb": 0, 00:29:23.992 "state": "online", 00:29:23.992 "raid_level": "raid1", 00:29:23.992 "superblock": false, 00:29:23.992 "num_base_bdevs": 4, 00:29:23.992 "num_base_bdevs_discovered": 4, 00:29:23.992 "num_base_bdevs_operational": 4, 00:29:23.992 "process": { 00:29:23.992 "type": "rebuild", 00:29:23.992 "target": "spare", 00:29:23.992 "progress": { 00:29:23.992 "blocks": 14336, 00:29:23.992 "percent": 21 00:29:23.992 } 00:29:23.992 }, 00:29:23.992 "base_bdevs_list": [ 00:29:23.992 { 00:29:23.992 "name": "spare", 00:29:23.992 "uuid": "c13810b2-0534-5a9b-ae66-2cdfca8aa087", 00:29:23.992 "is_configured": true, 00:29:23.992 "data_offset": 0, 00:29:23.992 "data_size": 65536 00:29:23.992 }, 00:29:23.992 { 00:29:23.992 "name": "BaseBdev2", 00:29:23.992 "uuid": "d4a89dde-228a-56ab-a745-66ce0034324c", 00:29:23.992 "is_configured": true, 00:29:23.992 "data_offset": 0, 00:29:23.992 "data_size": 65536 00:29:23.992 }, 00:29:23.992 { 00:29:23.992 "name": "BaseBdev3", 00:29:23.992 "uuid": "0a5f1fd2-51de-5cb9-b0df-dd5af1230179", 00:29:23.992 "is_configured": true, 00:29:23.992 "data_offset": 0, 00:29:23.992 "data_size": 65536 00:29:23.992 }, 00:29:23.992 { 00:29:23.992 "name": "BaseBdev4", 00:29:23.992 "uuid": "48663af7-d55c-52e6-8350-4215ff2df9bb", 00:29:23.992 "is_configured": true, 00:29:23.992 "data_offset": 0, 00:29:23.992 "data_size": 65536 00:29:23.992 } 00:29:23.992 ] 00:29:23.992 }' 00:29:23.992 02:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:23.992 [2024-07-11 02:35:14.389998] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:23.992 02:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:23.992 02:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:24.251 02:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:24.251 02:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:29:24.251 02:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:29:24.251 02:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:29:24.251 02:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:29:24.251 02:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:29:24.251 [2024-07-11 02:35:14.637322] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:24.510 [2024-07-11 02:35:14.679097] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:24.510 [2024-07-11 02:35:14.769879] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:29:24.510 [2024-07-11 02:35:14.878135] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0xf174a0 00:29:24.510 [2024-07-11 
02:35:14.878164] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0xf1a4d0 00:29:24.510 [2024-07-11 02:35:14.880215] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:29:24.510 02:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:29:24.510 02:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:29:24.510 02:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:24.510 02:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:24.510 02:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:24.510 02:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:24.510 02:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:24.510 02:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:24.510 02:35:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:24.769 02:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:24.769 "name": "raid_bdev1", 00:29:24.769 "uuid": "d61106a7-9caa-4d3f-8c2e-604c8055e8d0", 00:29:24.769 "strip_size_kb": 0, 00:29:24.769 "state": "online", 00:29:24.769 "raid_level": "raid1", 00:29:24.769 "superblock": false, 00:29:24.769 "num_base_bdevs": 4, 00:29:24.769 "num_base_bdevs_discovered": 3, 00:29:24.769 "num_base_bdevs_operational": 3, 00:29:24.769 "process": { 00:29:24.769 "type": "rebuild", 00:29:24.769 "target": "spare", 00:29:24.769 "progress": { 00:29:24.769 "blocks": 26624, 00:29:24.769 "percent": 40 00:29:24.769 } 00:29:24.769 }, 00:29:24.769 "base_bdevs_list": [ 00:29:24.769 { 00:29:24.769 "name": "spare", 00:29:24.769 "uuid": "c13810b2-0534-5a9b-ae66-2cdfca8aa087", 00:29:24.769 "is_configured": true, 00:29:24.769 "data_offset": 0, 00:29:24.769 "data_size": 65536 00:29:24.769 }, 00:29:24.769 { 00:29:24.769 "name": null, 00:29:24.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:24.769 "is_configured": false, 00:29:24.769 "data_offset": 0, 00:29:24.769 "data_size": 65536 00:29:24.769 }, 00:29:24.769 { 00:29:24.769 "name": "BaseBdev3", 00:29:24.769 "uuid": "0a5f1fd2-51de-5cb9-b0df-dd5af1230179", 00:29:24.769 "is_configured": true, 00:29:24.769 "data_offset": 0, 00:29:24.769 "data_size": 65536 00:29:24.769 }, 00:29:24.769 { 00:29:24.769 "name": "BaseBdev4", 00:29:24.769 "uuid": "48663af7-d55c-52e6-8350-4215ff2df9bb", 00:29:24.769 "is_configured": true, 00:29:24.769 "data_offset": 0, 00:29:24.769 "data_size": 65536 00:29:24.769 } 00:29:24.769 ] 00:29:24.769 }' 00:29:24.769 02:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:25.028 02:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:25.028 02:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:25.028 [2024-07-11 02:35:15.234475] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:29:25.028 [2024-07-11 02:35:15.234934] bdev_raid.c: 
839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:29:25.028 02:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:25.028 02:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=968 00:29:25.028 02:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:25.028 02:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:25.028 02:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:25.028 02:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:25.028 02:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:25.028 02:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:25.028 02:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:25.028 02:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:25.288 02:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:25.288 "name": "raid_bdev1", 00:29:25.288 "uuid": "d61106a7-9caa-4d3f-8c2e-604c8055e8d0", 00:29:25.288 "strip_size_kb": 0, 00:29:25.288 "state": "online", 00:29:25.288 "raid_level": "raid1", 00:29:25.288 "superblock": false, 00:29:25.288 "num_base_bdevs": 4, 00:29:25.288 "num_base_bdevs_discovered": 3, 00:29:25.288 "num_base_bdevs_operational": 3, 00:29:25.288 "process": { 00:29:25.288 "type": "rebuild", 00:29:25.288 "target": "spare", 00:29:25.288 "progress": { 00:29:25.288 "blocks": 30720, 00:29:25.288 "percent": 46 00:29:25.288 } 00:29:25.288 }, 00:29:25.288 "base_bdevs_list": [ 00:29:25.288 { 00:29:25.288 "name": "spare", 00:29:25.288 "uuid": "c13810b2-0534-5a9b-ae66-2cdfca8aa087", 00:29:25.288 "is_configured": true, 00:29:25.288 "data_offset": 0, 00:29:25.288 "data_size": 65536 00:29:25.288 }, 00:29:25.288 { 00:29:25.288 "name": null, 00:29:25.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:25.288 "is_configured": false, 00:29:25.288 "data_offset": 0, 00:29:25.288 "data_size": 65536 00:29:25.288 }, 00:29:25.288 { 00:29:25.288 "name": "BaseBdev3", 00:29:25.288 "uuid": "0a5f1fd2-51de-5cb9-b0df-dd5af1230179", 00:29:25.288 "is_configured": true, 00:29:25.288 "data_offset": 0, 00:29:25.288 "data_size": 65536 00:29:25.288 }, 00:29:25.288 { 00:29:25.288 "name": "BaseBdev4", 00:29:25.288 "uuid": "48663af7-d55c-52e6-8350-4215ff2df9bb", 00:29:25.288 "is_configured": true, 00:29:25.288 "data_offset": 0, 00:29:25.288 "data_size": 65536 00:29:25.288 } 00:29:25.288 ] 00:29:25.288 }' 00:29:25.288 02:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:25.288 02:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:25.288 02:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:25.288 [2024-07-11 02:35:15.577917] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:29:25.288 02:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 
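From here the test sits in the bdev_raid.sh@705-@710 loop: cap the wait with (( SECONDS < timeout )), re-read the RAID info once a second, and watch process.progress climb (14336, 26624, 30720 blocks so far out of the 65536-block array) until the rebuild completes and the process object disappears. Stripped of the surrounding verify plumbing, that wait amounts to the following (deadline value illustrative; the script computes its own timeout, 968 here):

    # Sketch: poll once a second until the spare rebuild completes or the deadline hits.
    rpc='scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    timeout=$((SECONDS + 120))   # illustrative; not the script's own value
    ptype=rebuild
    while (( SECONDS < timeout )); do
        ptype=$($rpc bdev_raid_get_bdevs all |
            jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
        [[ $ptype == none ]] && break   # process object gone => rebuild finished
        sleep 1
    done
    [[ $ptype == none ]] || echo 'rebuild did not finish in time' >&2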
00:29:25.288 02:35:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:25.288 [2024-07-11 02:35:15.698713] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:29:26.226 02:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:26.226 02:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:26.226 02:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:26.226 02:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:26.226 02:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:26.226 02:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:26.226 02:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:26.226 02:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:26.485 [2024-07-11 02:35:16.738649] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:29:26.485 02:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:26.485 "name": "raid_bdev1", 00:29:26.485 "uuid": "d61106a7-9caa-4d3f-8c2e-604c8055e8d0", 00:29:26.485 "strip_size_kb": 0, 00:29:26.485 "state": "online", 00:29:26.485 "raid_level": "raid1", 00:29:26.485 "superblock": false, 00:29:26.485 "num_base_bdevs": 4, 00:29:26.485 "num_base_bdevs_discovered": 3, 00:29:26.485 "num_base_bdevs_operational": 3, 00:29:26.485 "process": { 00:29:26.485 "type": "rebuild", 00:29:26.485 "target": "spare", 00:29:26.485 "progress": { 00:29:26.485 "blocks": 53248, 00:29:26.485 "percent": 81 00:29:26.485 } 00:29:26.485 }, 00:29:26.485 "base_bdevs_list": [ 00:29:26.485 { 00:29:26.485 "name": "spare", 00:29:26.485 "uuid": "c13810b2-0534-5a9b-ae66-2cdfca8aa087", 00:29:26.485 "is_configured": true, 00:29:26.485 "data_offset": 0, 00:29:26.485 "data_size": 65536 00:29:26.485 }, 00:29:26.485 { 00:29:26.485 "name": null, 00:29:26.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:26.485 "is_configured": false, 00:29:26.485 "data_offset": 0, 00:29:26.485 "data_size": 65536 00:29:26.485 }, 00:29:26.485 { 00:29:26.485 "name": "BaseBdev3", 00:29:26.485 "uuid": "0a5f1fd2-51de-5cb9-b0df-dd5af1230179", 00:29:26.485 "is_configured": true, 00:29:26.485 "data_offset": 0, 00:29:26.485 "data_size": 65536 00:29:26.485 }, 00:29:26.485 { 00:29:26.485 "name": "BaseBdev4", 00:29:26.485 "uuid": "48663af7-d55c-52e6-8350-4215ff2df9bb", 00:29:26.485 "is_configured": true, 00:29:26.485 "data_offset": 0, 00:29:26.485 "data_size": 65536 00:29:26.485 } 00:29:26.485 ] 00:29:26.485 }' 00:29:26.485 02:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:26.485 02:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:26.485 02:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:26.744 02:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:26.744 02:35:16 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@710 -- # sleep 1 00:29:27.003 [2024-07-11 02:35:17.423692] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:27.262 [2024-07-11 02:35:17.531942] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:27.262 [2024-07-11 02:35:17.533944] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:27.831 02:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:27.831 02:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:27.831 02:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:27.831 02:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:27.831 02:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:27.831 02:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:27.831 02:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:27.831 02:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:27.831 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:27.831 "name": "raid_bdev1", 00:29:27.831 "uuid": "d61106a7-9caa-4d3f-8c2e-604c8055e8d0", 00:29:27.831 "strip_size_kb": 0, 00:29:27.831 "state": "online", 00:29:27.831 "raid_level": "raid1", 00:29:27.831 "superblock": false, 00:29:27.831 "num_base_bdevs": 4, 00:29:27.831 "num_base_bdevs_discovered": 3, 00:29:27.831 "num_base_bdevs_operational": 3, 00:29:27.831 "base_bdevs_list": [ 00:29:27.831 { 00:29:27.831 "name": "spare", 00:29:27.831 "uuid": "c13810b2-0534-5a9b-ae66-2cdfca8aa087", 00:29:27.831 "is_configured": true, 00:29:27.831 "data_offset": 0, 00:29:27.831 "data_size": 65536 00:29:27.832 }, 00:29:27.832 { 00:29:27.832 "name": null, 00:29:27.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:27.832 "is_configured": false, 00:29:27.832 "data_offset": 0, 00:29:27.832 "data_size": 65536 00:29:27.832 }, 00:29:27.832 { 00:29:27.832 "name": "BaseBdev3", 00:29:27.832 "uuid": "0a5f1fd2-51de-5cb9-b0df-dd5af1230179", 00:29:27.832 "is_configured": true, 00:29:27.832 "data_offset": 0, 00:29:27.832 "data_size": 65536 00:29:27.832 }, 00:29:27.832 { 00:29:27.832 "name": "BaseBdev4", 00:29:27.832 "uuid": "48663af7-d55c-52e6-8350-4215ff2df9bb", 00:29:27.832 "is_configured": true, 00:29:27.832 "data_offset": 0, 00:29:27.832 "data_size": 65536 00:29:27.832 } 00:29:27.832 ] 00:29:27.832 }' 00:29:27.832 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:27.832 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:27.832 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:28.091 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:28.091 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:29:28.091 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:28.091 02:35:18 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:28.091 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:28.091 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:28.091 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:28.091 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:28.091 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:28.351 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:28.351 "name": "raid_bdev1", 00:29:28.351 "uuid": "d61106a7-9caa-4d3f-8c2e-604c8055e8d0", 00:29:28.351 "strip_size_kb": 0, 00:29:28.351 "state": "online", 00:29:28.351 "raid_level": "raid1", 00:29:28.351 "superblock": false, 00:29:28.351 "num_base_bdevs": 4, 00:29:28.351 "num_base_bdevs_discovered": 3, 00:29:28.351 "num_base_bdevs_operational": 3, 00:29:28.351 "base_bdevs_list": [ 00:29:28.351 { 00:29:28.351 "name": "spare", 00:29:28.351 "uuid": "c13810b2-0534-5a9b-ae66-2cdfca8aa087", 00:29:28.351 "is_configured": true, 00:29:28.351 "data_offset": 0, 00:29:28.351 "data_size": 65536 00:29:28.351 }, 00:29:28.351 { 00:29:28.351 "name": null, 00:29:28.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:28.351 "is_configured": false, 00:29:28.351 "data_offset": 0, 00:29:28.351 "data_size": 65536 00:29:28.351 }, 00:29:28.351 { 00:29:28.351 "name": "BaseBdev3", 00:29:28.351 "uuid": "0a5f1fd2-51de-5cb9-b0df-dd5af1230179", 00:29:28.351 "is_configured": true, 00:29:28.351 "data_offset": 0, 00:29:28.351 "data_size": 65536 00:29:28.351 }, 00:29:28.351 { 00:29:28.351 "name": "BaseBdev4", 00:29:28.351 "uuid": "48663af7-d55c-52e6-8350-4215ff2df9bb", 00:29:28.351 "is_configured": true, 00:29:28.351 "data_offset": 0, 00:29:28.351 "data_size": 65536 00:29:28.351 } 00:29:28.351 ] 00:29:28.351 }' 00:29:28.351 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:28.351 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:28.351 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:28.351 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:28.351 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:28.351 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:28.351 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:28.351 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:28.351 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:28.351 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:28.351 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:28.351 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:28.351 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:29:28.351 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:28.351 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:28.351 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:28.610 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:28.610 "name": "raid_bdev1", 00:29:28.610 "uuid": "d61106a7-9caa-4d3f-8c2e-604c8055e8d0", 00:29:28.610 "strip_size_kb": 0, 00:29:28.610 "state": "online", 00:29:28.610 "raid_level": "raid1", 00:29:28.610 "superblock": false, 00:29:28.610 "num_base_bdevs": 4, 00:29:28.610 "num_base_bdevs_discovered": 3, 00:29:28.610 "num_base_bdevs_operational": 3, 00:29:28.610 "base_bdevs_list": [ 00:29:28.610 { 00:29:28.610 "name": "spare", 00:29:28.610 "uuid": "c13810b2-0534-5a9b-ae66-2cdfca8aa087", 00:29:28.610 "is_configured": true, 00:29:28.610 "data_offset": 0, 00:29:28.610 "data_size": 65536 00:29:28.610 }, 00:29:28.610 { 00:29:28.610 "name": null, 00:29:28.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:28.610 "is_configured": false, 00:29:28.610 "data_offset": 0, 00:29:28.610 "data_size": 65536 00:29:28.610 }, 00:29:28.611 { 00:29:28.611 "name": "BaseBdev3", 00:29:28.611 "uuid": "0a5f1fd2-51de-5cb9-b0df-dd5af1230179", 00:29:28.611 "is_configured": true, 00:29:28.611 "data_offset": 0, 00:29:28.611 "data_size": 65536 00:29:28.611 }, 00:29:28.611 { 00:29:28.611 "name": "BaseBdev4", 00:29:28.611 "uuid": "48663af7-d55c-52e6-8350-4215ff2df9bb", 00:29:28.611 "is_configured": true, 00:29:28.611 "data_offset": 0, 00:29:28.611 "data_size": 65536 00:29:28.611 } 00:29:28.611 ] 00:29:28.611 }' 00:29:28.611 02:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:28.611 02:35:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:29.179 02:35:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:29.439 [2024-07-11 02:35:19.716898] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:29.439 [2024-07-11 02:35:19.716932] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:29.439 00:29:29.439 Latency(us) 00:29:29.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.439 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:29:29.439 raid_bdev1 : 11.29 92.25 276.76 0.00 0.00 14025.50 295.62 112152.04 00:29:29.439 =================================================================================================================== 00:29:29.439 Total : 92.25 276.76 0.00 0.00 14025.50 295.62 112152.04 00:29:29.439 [2024-07-11 02:35:19.752924] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:29.439 [2024-07-11 02:35:19.752953] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:29.439 [2024-07-11 02:35:19.753051] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:29.439 [2024-07-11 02:35:19.753064] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd77300 name raid_bdev1, state offline 00:29:29.439 0 
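As a sanity check on the bdevperf summary above: the job line reports an I/O size of 3145728 bytes (3 MiB), so 92.25 IOPS works out to 92.25 x 3 MiB ≈ 276.76 MiB/s, matching the MiB/s column, and the mean completion latency of 14025.50 us is about 14.0 ms per 3 MiB I/O at queue depth 2 over the 11.29 s runtime. The pass that follows then verifies the rebuilt data byte-for-byte: because RAID1 mirrors, the spare must now be identical to every surviving base bdev. A condensed sketch of that pass, using the RPC calls visible in the trace (the real harness goes through the nbd_common.sh helpers, which additionally wait on /proc/partitions for each device to appear):

    # Export the spare and each surviving base bdev as NBD block devices
    # and compare them from offset 0; any difference fails the test.
    $rpc_py nbd_start_disk spare /dev/nbd0
    for bdev in "${base_bdevs[@]:1}"; do
        [ -z "$bdev" ] && continue          # removed slot: nothing to compare
        $rpc_py nbd_start_disk "$bdev" /dev/nbd1
        cmp -i 0 /dev/nbd0 /dev/nbd1
        $rpc_py nbd_stop_disk /dev/nbd1
    done
    $rpc_py nbd_stop_disk /dev/nbd0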
00:29:29.439 02:35:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:29.439 02:35:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:29:29.698 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:29:29.698 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:29:29.698 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:29:29.698 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:29:29.698 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:29.698 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:29:29.698 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:29.698 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:29.698 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:29.698 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:29:29.698 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:29.698 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:29.698 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:29:29.957 /dev/nbd0 00:29:29.957 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:29.957 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:29.957 02:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:29.957 02:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:29:29.957 02:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:29.957 02:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:29.957 02:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:29.957 02:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:29:29.957 02:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:29.957 02:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:29.957 02:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:29.957 1+0 records in 00:29:29.957 1+0 records out 00:29:29.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280986 s, 14.6 MB/s 00:29:29.957 02:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:29.957 02:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:29:29.957 02:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:29.957 02:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:29.957 02:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:29:29.957 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:29.957 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:29.957 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:29.957 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:29:29.957 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # continue 00:29:29.957 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:29.957 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:29:29.957 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:29:29.957 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:29.958 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:29:29.958 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:29.958 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:29:29.958 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:29.958 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:29:29.958 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:29.958 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:29.958 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:29:30.217 /dev/nbd1 00:29:30.217 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:30.217 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:30.217 02:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:29:30.217 02:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:29:30.217 02:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:30.217 02:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:30.217 02:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:29:30.217 02:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:29:30.217 02:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:30.217 02:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:30.217 02:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:30.217 1+0 records in 00:29:30.217 1+0 records out 00:29:30.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261274 s, 
15.7 MB/s 00:29:30.217 02:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:30.217 02:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:29:30.217 02:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:30.217 02:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:30.217 02:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:29:30.217 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:30.217 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:30.217 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:29:30.476 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:29:30.476 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:30.476 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:29:30.476 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:30.476 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:29:30.476 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:30.476 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:30.476 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:30.476 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:30.476 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:30.476 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:30.477 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:30.477 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:30.736 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:29:30.736 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:30.736 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:30.736 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:29:30.736 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:29:30.736 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:30.736 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:29:30.736 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:30.736 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:29:30.736 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:30.736 02:35:20 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:29:30.736 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:30.736 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:30.736 02:35:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:29:30.736 /dev/nbd1 00:29:30.995 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:30.995 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:30.995 02:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:29:30.995 02:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:29:30.995 02:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:30.995 02:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:30.995 02:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:29:30.995 02:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:29:30.995 02:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:30.995 02:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:30.995 02:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:30.995 1+0 records in 00:29:30.995 1+0 records out 00:29:30.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295073 s, 13.9 MB/s 00:29:30.995 02:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:30.995 02:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:29:30.995 02:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:30.995 02:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:30.995 02:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:29:30.995 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:30.995 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:30.995 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:29:30.995 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:29:30.995 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:30.995 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:29:30.995 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:30.995 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:29:30.995 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:30.995 02:35:21 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:31.255 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:31.255 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:31.255 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:31.255 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:31.255 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:31.255 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:31.255 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:29:31.255 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:31.255 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:31.255 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:31.255 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:31.255 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:31.255 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:29:31.255 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:31.255 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:31.514 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:31.514 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:31.514 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:31.514 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:31.514 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:31.514 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:31.514 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:29:31.515 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:31.515 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:29:31.515 02:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 2032323 00:29:31.515 02:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@948 -- # '[' -z 2032323 ']' 00:29:31.515 02:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # kill -0 2032323 00:29:31.515 02:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # uname 00:29:31.515 02:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:31.515 02:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2032323 00:29:31.515 02:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:31.515 02:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- 
# '[' reactor_0 = sudo ']' 00:29:31.515 02:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2032323' 00:29:31.515 killing process with pid 2032323 00:29:31.515 02:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # kill 2032323 00:29:31.515 Received shutdown signal, test time was about 13.330580 seconds 00:29:31.515 00:29:31.515 Latency(us) 00:29:31.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:31.515 =================================================================================================================== 00:29:31.515 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:31.515 [2024-07-11 02:35:21.789546] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:31.515 02:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # wait 2032323 00:29:31.515 [2024-07-11 02:35:21.832383] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:29:31.775 00:29:31.775 real 0m18.522s 00:29:31.775 user 0m28.522s 00:29:31.775 sys 0m3.362s 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:31.775 ************************************ 00:29:31.775 END TEST raid_rebuild_test_io 00:29:31.775 ************************************ 00:29:31.775 02:35:22 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:29:31.775 02:35:22 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:29:31.775 02:35:22 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:29:31.775 02:35:22 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:31.775 02:35:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:31.775 ************************************ 00:29:31.775 START TEST raid_rebuild_test_sb_io 00:29:31.775 ************************************ 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 true true true 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 
00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=2034915 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 2034915 /var/tmp/spdk-raid.sock 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@829 -- # '[' -z 2034915 ']' 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:31.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:31.775 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:32.034 [2024-07-11 02:35:22.200113] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
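For this run the wrapper is invoked as raid_rebuild_test raid1 4 true true true: RAID1 over four base bdevs with superblock, background I/O, and verification all enabled. Two consequences are visible later in the trace (read off the log itself rather than bdevperf documentation): superblock=true appends -s to the create argument, so the array is assembled with bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1, and each 32 MiB / 512-byte-block malloc base bdev (65536 blocks) gives up 2048 blocks to the superblock, hence data_offset 2048 and data_size 63488 in the JSON dumps below, where the non-superblock run above showed data_offset 0 and data_size 65536. And because bdevperf runs with -o 3M -q 2 -t 60, the startup output below warns that 3145728-byte I/Os exceed the 65536-byte zero-copy threshold and the workload is capped at 'Running I/O for 60 seconds...'.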
00:29:32.034 [2024-07-11 02:35:22.200166] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2034915 ] 00:29:32.034 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:32.034 Zero copy mechanism will not be used. 00:29:32.034 [2024-07-11 02:35:22.322321] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.034 [2024-07-11 02:35:22.374243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.034 [2024-07-11 02:35:22.436145] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:32.034 [2024-07-11 02:35:22.436186] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:32.294 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:32.294 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # return 0 00:29:32.294 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:32.294 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:32.294 BaseBdev1_malloc 00:29:32.294 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:32.553 [2024-07-11 02:35:22.831115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:32.553 [2024-07-11 02:35:22.831161] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:32.553 [2024-07-11 02:35:22.831184] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27dcee0 00:29:32.553 [2024-07-11 02:35:22.831198] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:32.553 [2024-07-11 02:35:22.832774] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:32.553 [2024-07-11 02:35:22.832808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:32.553 BaseBdev1 00:29:32.553 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:32.553 02:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:32.812 BaseBdev2_malloc 00:29:32.812 02:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:33.071 [2024-07-11 02:35:23.256996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:33.071 [2024-07-11 02:35:23.257044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:33.071 [2024-07-11 02:35:23.257063] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27de870 00:29:33.071 [2024-07-11 02:35:23.257076] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:33.071 [2024-07-11 
02:35:23.258431] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:33.071 [2024-07-11 02:35:23.258460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:33.071 BaseBdev2 00:29:33.071 02:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:33.071 02:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:33.071 BaseBdev3_malloc 00:29:33.071 02:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:33.345 [2024-07-11 02:35:23.602434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:33.345 [2024-07-11 02:35:23.602478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:33.345 [2024-07-11 02:35:23.602497] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27d5b20 00:29:33.345 [2024-07-11 02:35:23.602509] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:33.345 [2024-07-11 02:35:23.603859] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:33.345 [2024-07-11 02:35:23.603886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:33.345 BaseBdev3 00:29:33.345 02:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:33.345 02:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:33.606 BaseBdev4_malloc 00:29:33.606 02:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:33.606 [2024-07-11 02:35:23.939973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:33.606 [2024-07-11 02:35:23.940018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:33.606 [2024-07-11 02:35:23.940039] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27d98d0 00:29:33.606 [2024-07-11 02:35:23.940051] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:33.606 [2024-07-11 02:35:23.941384] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:33.606 [2024-07-11 02:35:23.941411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:33.606 BaseBdev4 00:29:33.606 02:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:33.864 spare_malloc 00:29:33.864 02:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:34.123 spare_delay 00:29:34.123 02:35:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:34.382 [2024-07-11 02:35:24.610230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:34.382 [2024-07-11 02:35:24.610274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:34.382 [2024-07-11 02:35:24.610296] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x262b8a0 00:29:34.382 [2024-07-11 02:35:24.610309] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:34.382 [2024-07-11 02:35:24.611708] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:34.382 [2024-07-11 02:35:24.611736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:34.382 spare 00:29:34.382 02:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:29:34.382 [2024-07-11 02:35:24.790732] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:34.382 [2024-07-11 02:35:24.791953] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:34.382 [2024-07-11 02:35:24.792006] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:34.382 [2024-07-11 02:35:24.792052] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:34.382 [2024-07-11 02:35:24.792242] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x262c300 00:29:34.382 [2024-07-11 02:35:24.792253] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:34.382 [2024-07-11 02:35:24.792433] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x27cc2d0 00:29:34.382 [2024-07-11 02:35:24.792576] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x262c300 00:29:34.382 [2024-07-11 02:35:24.792586] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x262c300 00:29:34.382 [2024-07-11 02:35:24.792675] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:34.642 02:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:34.642 02:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:34.642 02:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:34.642 02:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:34.642 02:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:34.642 02:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:34.642 02:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:34.642 02:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:34.642 02:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:34.642 02:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local 
tmp 00:29:34.642 02:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:34.642 02:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:34.642 02:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:34.642 "name": "raid_bdev1", 00:29:34.642 "uuid": "c712fa69-1454-4396-b91f-47055bd11565", 00:29:34.642 "strip_size_kb": 0, 00:29:34.642 "state": "online", 00:29:34.642 "raid_level": "raid1", 00:29:34.642 "superblock": true, 00:29:34.642 "num_base_bdevs": 4, 00:29:34.642 "num_base_bdevs_discovered": 4, 00:29:34.642 "num_base_bdevs_operational": 4, 00:29:34.642 "base_bdevs_list": [ 00:29:34.642 { 00:29:34.642 "name": "BaseBdev1", 00:29:34.642 "uuid": "7be8ae2b-4c08-587d-86f6-4c6d34271430", 00:29:34.642 "is_configured": true, 00:29:34.642 "data_offset": 2048, 00:29:34.642 "data_size": 63488 00:29:34.642 }, 00:29:34.642 { 00:29:34.642 "name": "BaseBdev2", 00:29:34.642 "uuid": "0064926d-1f66-5f47-9635-7f77fb9795a4", 00:29:34.642 "is_configured": true, 00:29:34.642 "data_offset": 2048, 00:29:34.642 "data_size": 63488 00:29:34.642 }, 00:29:34.642 { 00:29:34.642 "name": "BaseBdev3", 00:29:34.642 "uuid": "f84a1e5f-6ba6-5acd-b2c2-f08e2ebe0d69", 00:29:34.642 "is_configured": true, 00:29:34.642 "data_offset": 2048, 00:29:34.642 "data_size": 63488 00:29:34.642 }, 00:29:34.642 { 00:29:34.642 "name": "BaseBdev4", 00:29:34.642 "uuid": "3de5ffde-d4b0-54c5-9ead-c05f0ff130dd", 00:29:34.642 "is_configured": true, 00:29:34.642 "data_offset": 2048, 00:29:34.642 "data_size": 63488 00:29:34.642 } 00:29:34.642 ] 00:29:34.642 }' 00:29:34.642 02:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:34.642 02:35:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:35.210 02:35:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:35.210 02:35:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:29:35.469 [2024-07-11 02:35:25.753575] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:35.469 02:35:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:29:35.469 02:35:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:35.469 02:35:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:35.728 02:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:29:35.728 02:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:29:35.728 02:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:35.728 02:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:29:35.728 [2024-07-11 02:35:26.136356] bdev_raid.c: 251:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x27cc140 00:29:35.728 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:35.728 Zero copy mechanism will not be used. 00:29:35.728 Running I/O for 60 seconds... 00:29:35.987 [2024-07-11 02:35:26.181111] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:35.987 [2024-07-11 02:35:26.197630] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x27cc140 00:29:35.987 02:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:35.987 02:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:35.987 02:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:35.987 02:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:35.987 02:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:35.987 02:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:35.987 02:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:35.987 02:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:35.988 02:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:35.988 02:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:35.988 02:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:35.988 02:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:36.247 02:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:36.247 "name": "raid_bdev1", 00:29:36.247 "uuid": "c712fa69-1454-4396-b91f-47055bd11565", 00:29:36.247 "strip_size_kb": 0, 00:29:36.247 "state": "online", 00:29:36.247 "raid_level": "raid1", 00:29:36.247 "superblock": true, 00:29:36.247 "num_base_bdevs": 4, 00:29:36.247 "num_base_bdevs_discovered": 3, 00:29:36.247 "num_base_bdevs_operational": 3, 00:29:36.247 "base_bdevs_list": [ 00:29:36.247 { 00:29:36.247 "name": null, 00:29:36.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:36.247 "is_configured": false, 00:29:36.247 "data_offset": 2048, 00:29:36.247 "data_size": 63488 00:29:36.247 }, 00:29:36.247 { 00:29:36.247 "name": "BaseBdev2", 00:29:36.247 "uuid": "0064926d-1f66-5f47-9635-7f77fb9795a4", 00:29:36.247 "is_configured": true, 00:29:36.247 "data_offset": 2048, 00:29:36.247 "data_size": 63488 00:29:36.247 }, 00:29:36.247 { 00:29:36.247 "name": "BaseBdev3", 00:29:36.247 "uuid": "f84a1e5f-6ba6-5acd-b2c2-f08e2ebe0d69", 00:29:36.247 "is_configured": true, 00:29:36.247 "data_offset": 2048, 00:29:36.247 "data_size": 63488 00:29:36.247 }, 00:29:36.247 { 00:29:36.247 "name": "BaseBdev4", 00:29:36.247 "uuid": "3de5ffde-d4b0-54c5-9ead-c05f0ff130dd", 00:29:36.247 "is_configured": true, 00:29:36.247 "data_offset": 2048, 00:29:36.247 "data_size": 63488 00:29:36.247 } 00:29:36.247 ] 00:29:36.247 }' 00:29:36.247 02:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:36.247 02:35:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:36.815 
02:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:37.074 [2024-07-11 02:35:27.336343] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:37.074 02:35:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:29:37.074 [2024-07-11 02:35:27.402174] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x262a8a0 00:29:37.074 [2024-07-11 02:35:27.404540] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:37.333 [2024-07-11 02:35:27.514772] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:37.334 [2024-07-11 02:35:27.516049] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:37.593 [2024-07-11 02:35:27.758597] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:37.593 [2024-07-11 02:35:27.758902] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:37.852 [2024-07-11 02:35:28.068049] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:37.852 [2024-07-11 02:35:28.069242] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:38.111 [2024-07-11 02:35:28.292033] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:38.111 [2024-07-11 02:35:28.292207] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:38.111 02:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:38.111 02:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:38.111 02:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:38.111 02:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:38.111 02:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:38.111 02:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:38.111 02:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:38.371 [2024-07-11 02:35:28.536291] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:38.371 02:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:38.371 "name": "raid_bdev1", 00:29:38.371 "uuid": "c712fa69-1454-4396-b91f-47055bd11565", 00:29:38.371 "strip_size_kb": 0, 00:29:38.371 "state": "online", 00:29:38.371 "raid_level": "raid1", 00:29:38.371 "superblock": true, 00:29:38.371 "num_base_bdevs": 4, 00:29:38.371 "num_base_bdevs_discovered": 4, 00:29:38.371 "num_base_bdevs_operational": 4, 00:29:38.371 "process": { 00:29:38.371 "type": "rebuild", 
00:29:38.371 "target": "spare", 00:29:38.371 "progress": { 00:29:38.371 "blocks": 14336, 00:29:38.371 "percent": 22 00:29:38.371 } 00:29:38.371 }, 00:29:38.371 "base_bdevs_list": [ 00:29:38.371 { 00:29:38.371 "name": "spare", 00:29:38.371 "uuid": "9a1d6c48-a80b-57cf-8b5f-840d723ccd48", 00:29:38.371 "is_configured": true, 00:29:38.371 "data_offset": 2048, 00:29:38.371 "data_size": 63488 00:29:38.371 }, 00:29:38.371 { 00:29:38.371 "name": "BaseBdev2", 00:29:38.371 "uuid": "0064926d-1f66-5f47-9635-7f77fb9795a4", 00:29:38.371 "is_configured": true, 00:29:38.371 "data_offset": 2048, 00:29:38.371 "data_size": 63488 00:29:38.371 }, 00:29:38.371 { 00:29:38.371 "name": "BaseBdev3", 00:29:38.371 "uuid": "f84a1e5f-6ba6-5acd-b2c2-f08e2ebe0d69", 00:29:38.371 "is_configured": true, 00:29:38.371 "data_offset": 2048, 00:29:38.371 "data_size": 63488 00:29:38.371 }, 00:29:38.371 { 00:29:38.371 "name": "BaseBdev4", 00:29:38.371 "uuid": "3de5ffde-d4b0-54c5-9ead-c05f0ff130dd", 00:29:38.371 "is_configured": true, 00:29:38.371 "data_offset": 2048, 00:29:38.371 "data_size": 63488 00:29:38.371 } 00:29:38.371 ] 00:29:38.371 }' 00:29:38.371 02:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:38.371 02:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:38.371 02:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:38.371 02:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:38.371 02:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:38.630 [2024-07-11 02:35:28.970257] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:38.890 [2024-07-11 02:35:29.136932] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:38.890 [2024-07-11 02:35:29.141538] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:38.890 [2024-07-11 02:35:29.141573] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:38.890 [2024-07-11 02:35:29.141585] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:38.890 [2024-07-11 02:35:29.165062] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x27cc140 00:29:38.890 02:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:38.890 02:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:38.890 02:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:38.890 02:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:38.890 02:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:38.890 02:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:38.890 02:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:38.890 02:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:38.890 02:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 
-- # local num_base_bdevs_discovered 00:29:38.890 02:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:38.890 02:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:38.890 02:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:39.148 02:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:39.148 "name": "raid_bdev1", 00:29:39.148 "uuid": "c712fa69-1454-4396-b91f-47055bd11565", 00:29:39.148 "strip_size_kb": 0, 00:29:39.148 "state": "online", 00:29:39.148 "raid_level": "raid1", 00:29:39.148 "superblock": true, 00:29:39.148 "num_base_bdevs": 4, 00:29:39.148 "num_base_bdevs_discovered": 3, 00:29:39.148 "num_base_bdevs_operational": 3, 00:29:39.148 "base_bdevs_list": [ 00:29:39.148 { 00:29:39.148 "name": null, 00:29:39.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:39.148 "is_configured": false, 00:29:39.149 "data_offset": 2048, 00:29:39.149 "data_size": 63488 00:29:39.149 }, 00:29:39.149 { 00:29:39.149 "name": "BaseBdev2", 00:29:39.149 "uuid": "0064926d-1f66-5f47-9635-7f77fb9795a4", 00:29:39.149 "is_configured": true, 00:29:39.149 "data_offset": 2048, 00:29:39.149 "data_size": 63488 00:29:39.149 }, 00:29:39.149 { 00:29:39.149 "name": "BaseBdev3", 00:29:39.149 "uuid": "f84a1e5f-6ba6-5acd-b2c2-f08e2ebe0d69", 00:29:39.149 "is_configured": true, 00:29:39.149 "data_offset": 2048, 00:29:39.149 "data_size": 63488 00:29:39.149 }, 00:29:39.149 { 00:29:39.149 "name": "BaseBdev4", 00:29:39.149 "uuid": "3de5ffde-d4b0-54c5-9ead-c05f0ff130dd", 00:29:39.149 "is_configured": true, 00:29:39.149 "data_offset": 2048, 00:29:39.149 "data_size": 63488 00:29:39.149 } 00:29:39.149 ] 00:29:39.149 }' 00:29:39.149 02:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:39.149 02:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:40.082 02:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:40.082 02:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:40.082 02:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:40.082 02:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:40.082 02:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:40.082 02:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:40.082 02:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.082 02:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:40.082 "name": "raid_bdev1", 00:29:40.082 "uuid": "c712fa69-1454-4396-b91f-47055bd11565", 00:29:40.082 "strip_size_kb": 0, 00:29:40.082 "state": "online", 00:29:40.082 "raid_level": "raid1", 00:29:40.082 "superblock": true, 00:29:40.082 "num_base_bdevs": 4, 00:29:40.082 "num_base_bdevs_discovered": 3, 00:29:40.082 "num_base_bdevs_operational": 3, 00:29:40.082 "base_bdevs_list": [ 00:29:40.082 { 00:29:40.082 "name": null, 00:29:40.082 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:29:40.082 "is_configured": false, 00:29:40.082 "data_offset": 2048, 00:29:40.082 "data_size": 63488 00:29:40.082 }, 00:29:40.082 { 00:29:40.082 "name": "BaseBdev2", 00:29:40.082 "uuid": "0064926d-1f66-5f47-9635-7f77fb9795a4", 00:29:40.082 "is_configured": true, 00:29:40.082 "data_offset": 2048, 00:29:40.082 "data_size": 63488 00:29:40.082 }, 00:29:40.082 { 00:29:40.082 "name": "BaseBdev3", 00:29:40.082 "uuid": "f84a1e5f-6ba6-5acd-b2c2-f08e2ebe0d69", 00:29:40.082 "is_configured": true, 00:29:40.082 "data_offset": 2048, 00:29:40.082 "data_size": 63488 00:29:40.082 }, 00:29:40.082 { 00:29:40.082 "name": "BaseBdev4", 00:29:40.082 "uuid": "3de5ffde-d4b0-54c5-9ead-c05f0ff130dd", 00:29:40.082 "is_configured": true, 00:29:40.082 "data_offset": 2048, 00:29:40.082 "data_size": 63488 00:29:40.082 } 00:29:40.082 ] 00:29:40.082 }' 00:29:40.082 02:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:40.082 02:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:40.082 02:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:40.082 02:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:40.082 02:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:40.340 [2024-07-11 02:35:30.728804] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:40.599 02:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:40.599 [2024-07-11 02:35:30.784550] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x27ce5a0 00:29:40.599 [2024-07-11 02:35:30.786302] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:40.857 [2024-07-11 02:35:31.076194] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:40.857 [2024-07-11 02:35:31.076491] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:41.116 [2024-07-11 02:35:31.421396] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:41.116 [2024-07-11 02:35:31.421753] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:41.375 [2024-07-11 02:35:31.655104] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:41.375 [2024-07-11 02:35:31.655391] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:41.375 02:35:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:41.375 02:35:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:41.375 02:35:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:41.376 02:35:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:41.376 02:35:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local 
raid_bdev_info 00:29:41.376 02:35:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:41.376 02:35:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:41.635 02:35:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:41.635 "name": "raid_bdev1", 00:29:41.635 "uuid": "c712fa69-1454-4396-b91f-47055bd11565", 00:29:41.635 "strip_size_kb": 0, 00:29:41.635 "state": "online", 00:29:41.635 "raid_level": "raid1", 00:29:41.635 "superblock": true, 00:29:41.635 "num_base_bdevs": 4, 00:29:41.635 "num_base_bdevs_discovered": 4, 00:29:41.635 "num_base_bdevs_operational": 4, 00:29:41.635 "process": { 00:29:41.635 "type": "rebuild", 00:29:41.635 "target": "spare", 00:29:41.635 "progress": { 00:29:41.635 "blocks": 12288, 00:29:41.635 "percent": 19 00:29:41.635 } 00:29:41.635 }, 00:29:41.635 "base_bdevs_list": [ 00:29:41.635 { 00:29:41.635 "name": "spare", 00:29:41.635 "uuid": "9a1d6c48-a80b-57cf-8b5f-840d723ccd48", 00:29:41.635 "is_configured": true, 00:29:41.635 "data_offset": 2048, 00:29:41.635 "data_size": 63488 00:29:41.635 }, 00:29:41.635 { 00:29:41.635 "name": "BaseBdev2", 00:29:41.635 "uuid": "0064926d-1f66-5f47-9635-7f77fb9795a4", 00:29:41.635 "is_configured": true, 00:29:41.635 "data_offset": 2048, 00:29:41.635 "data_size": 63488 00:29:41.635 }, 00:29:41.635 { 00:29:41.635 "name": "BaseBdev3", 00:29:41.635 "uuid": "f84a1e5f-6ba6-5acd-b2c2-f08e2ebe0d69", 00:29:41.635 "is_configured": true, 00:29:41.635 "data_offset": 2048, 00:29:41.635 "data_size": 63488 00:29:41.635 }, 00:29:41.635 { 00:29:41.635 "name": "BaseBdev4", 00:29:41.635 "uuid": "3de5ffde-d4b0-54c5-9ead-c05f0ff130dd", 00:29:41.635 "is_configured": true, 00:29:41.635 "data_offset": 2048, 00:29:41.635 "data_size": 63488 00:29:41.635 } 00:29:41.635 ] 00:29:41.635 }' 00:29:41.635 02:35:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:41.635 [2024-07-11 02:35:32.003936] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:41.635 [2024-07-11 02:35:32.005094] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:41.635 02:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:41.635 02:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:41.908 02:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:41.908 02:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:29:41.908 02:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:29:41.908 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:29:41.909 02:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:29:41.909 02:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:29:41.909 02:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:29:41.909 02:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@694 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:29:41.909 [2024-07-11 02:35:32.229044] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:41.909 [2024-07-11 02:35:32.229659] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:41.909 [2024-07-11 02:35:32.294811] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:42.527 [2024-07-11 02:35:32.654810] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x27cc140 00:29:42.527 [2024-07-11 02:35:32.654847] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x27ce5a0 00:29:42.527 02:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:29:42.527 02:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:29:42.527 02:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:42.527 02:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:42.527 02:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:42.527 02:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:42.527 02:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:42.527 02:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:42.527 02:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:42.527 [2024-07-11 02:35:32.796538] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:42.527 [2024-07-11 02:35:32.797325] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:42.785 02:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:42.786 "name": "raid_bdev1", 00:29:42.786 "uuid": "c712fa69-1454-4396-b91f-47055bd11565", 00:29:42.786 "strip_size_kb": 0, 00:29:42.786 "state": "online", 00:29:42.786 "raid_level": "raid1", 00:29:42.786 "superblock": true, 00:29:42.786 "num_base_bdevs": 4, 00:29:42.786 "num_base_bdevs_discovered": 3, 00:29:42.786 "num_base_bdevs_operational": 3, 00:29:42.786 "process": { 00:29:42.786 "type": "rebuild", 00:29:42.786 "target": "spare", 00:29:42.786 "progress": { 00:29:42.786 "blocks": 20480, 00:29:42.786 "percent": 32 00:29:42.786 } 00:29:42.786 }, 00:29:42.786 "base_bdevs_list": [ 00:29:42.786 { 00:29:42.786 "name": "spare", 00:29:42.786 "uuid": "9a1d6c48-a80b-57cf-8b5f-840d723ccd48", 00:29:42.786 "is_configured": true, 00:29:42.786 "data_offset": 2048, 00:29:42.786 "data_size": 63488 00:29:42.786 }, 00:29:42.786 { 00:29:42.786 "name": null, 00:29:42.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:42.786 "is_configured": false, 00:29:42.786 "data_offset": 2048, 00:29:42.786 "data_size": 63488 00:29:42.786 }, 00:29:42.786 { 00:29:42.786 "name": "BaseBdev3", 00:29:42.786 "uuid": "f84a1e5f-6ba6-5acd-b2c2-f08e2ebe0d69", 00:29:42.786 
"is_configured": true, 00:29:42.786 "data_offset": 2048, 00:29:42.786 "data_size": 63488 00:29:42.786 }, 00:29:42.786 { 00:29:42.786 "name": "BaseBdev4", 00:29:42.786 "uuid": "3de5ffde-d4b0-54c5-9ead-c05f0ff130dd", 00:29:42.786 "is_configured": true, 00:29:42.786 "data_offset": 2048, 00:29:42.786 "data_size": 63488 00:29:42.786 } 00:29:42.786 ] 00:29:42.786 }' 00:29:42.786 02:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:42.786 02:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:42.786 02:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:42.786 [2024-07-11 02:35:33.018582] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:29:42.786 02:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:42.786 02:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=986 00:29:42.786 02:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:42.786 02:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:42.786 02:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:42.786 02:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:42.786 02:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:42.786 02:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:42.786 02:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:42.786 02:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:43.045 [2024-07-11 02:35:33.252347] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:29:43.045 02:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:43.045 "name": "raid_bdev1", 00:29:43.045 "uuid": "c712fa69-1454-4396-b91f-47055bd11565", 00:29:43.045 "strip_size_kb": 0, 00:29:43.045 "state": "online", 00:29:43.045 "raid_level": "raid1", 00:29:43.045 "superblock": true, 00:29:43.045 "num_base_bdevs": 4, 00:29:43.045 "num_base_bdevs_discovered": 3, 00:29:43.045 "num_base_bdevs_operational": 3, 00:29:43.045 "process": { 00:29:43.045 "type": "rebuild", 00:29:43.045 "target": "spare", 00:29:43.045 "progress": { 00:29:43.045 "blocks": 26624, 00:29:43.045 "percent": 41 00:29:43.045 } 00:29:43.045 }, 00:29:43.045 "base_bdevs_list": [ 00:29:43.045 { 00:29:43.045 "name": "spare", 00:29:43.045 "uuid": "9a1d6c48-a80b-57cf-8b5f-840d723ccd48", 00:29:43.045 "is_configured": true, 00:29:43.045 "data_offset": 2048, 00:29:43.045 "data_size": 63488 00:29:43.045 }, 00:29:43.045 { 00:29:43.045 "name": null, 00:29:43.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:43.045 "is_configured": false, 00:29:43.045 "data_offset": 2048, 00:29:43.045 "data_size": 63488 00:29:43.045 }, 00:29:43.045 { 00:29:43.045 "name": "BaseBdev3", 00:29:43.045 "uuid": "f84a1e5f-6ba6-5acd-b2c2-f08e2ebe0d69", 00:29:43.045 
"is_configured": true, 00:29:43.045 "data_offset": 2048, 00:29:43.045 "data_size": 63488 00:29:43.045 }, 00:29:43.045 { 00:29:43.045 "name": "BaseBdev4", 00:29:43.045 "uuid": "3de5ffde-d4b0-54c5-9ead-c05f0ff130dd", 00:29:43.045 "is_configured": true, 00:29:43.045 "data_offset": 2048, 00:29:43.045 "data_size": 63488 00:29:43.045 } 00:29:43.045 ] 00:29:43.045 }' 00:29:43.045 02:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:43.045 02:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:43.045 02:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:43.045 02:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:43.045 02:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:43.993 02:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:43.993 02:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:43.993 02:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:43.993 02:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:43.993 02:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:43.993 02:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:43.993 02:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:43.993 02:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:44.251 [2024-07-11 02:35:34.424106] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:29:44.251 02:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:44.251 "name": "raid_bdev1", 00:29:44.251 "uuid": "c712fa69-1454-4396-b91f-47055bd11565", 00:29:44.251 "strip_size_kb": 0, 00:29:44.251 "state": "online", 00:29:44.251 "raid_level": "raid1", 00:29:44.251 "superblock": true, 00:29:44.251 "num_base_bdevs": 4, 00:29:44.251 "num_base_bdevs_discovered": 3, 00:29:44.251 "num_base_bdevs_operational": 3, 00:29:44.251 "process": { 00:29:44.251 "type": "rebuild", 00:29:44.251 "target": "spare", 00:29:44.251 "progress": { 00:29:44.251 "blocks": 45056, 00:29:44.251 "percent": 70 00:29:44.251 } 00:29:44.251 }, 00:29:44.251 "base_bdevs_list": [ 00:29:44.251 { 00:29:44.251 "name": "spare", 00:29:44.251 "uuid": "9a1d6c48-a80b-57cf-8b5f-840d723ccd48", 00:29:44.251 "is_configured": true, 00:29:44.251 "data_offset": 2048, 00:29:44.251 "data_size": 63488 00:29:44.251 }, 00:29:44.251 { 00:29:44.252 "name": null, 00:29:44.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:44.252 "is_configured": false, 00:29:44.252 "data_offset": 2048, 00:29:44.252 "data_size": 63488 00:29:44.252 }, 00:29:44.252 { 00:29:44.252 "name": "BaseBdev3", 00:29:44.252 "uuid": "f84a1e5f-6ba6-5acd-b2c2-f08e2ebe0d69", 00:29:44.252 "is_configured": true, 00:29:44.252 "data_offset": 2048, 00:29:44.252 "data_size": 63488 00:29:44.252 }, 00:29:44.252 { 00:29:44.252 "name": "BaseBdev4", 00:29:44.252 "uuid": 
"3de5ffde-d4b0-54c5-9ead-c05f0ff130dd", 00:29:44.252 "is_configured": true, 00:29:44.252 "data_offset": 2048, 00:29:44.252 "data_size": 63488 00:29:44.252 } 00:29:44.252 ] 00:29:44.252 }' 00:29:44.252 02:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:44.510 02:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:44.510 02:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:44.510 02:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:44.510 02:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:44.510 [2024-07-11 02:35:34.866489] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:29:45.077 [2024-07-11 02:35:35.314043] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:29:45.336 [2024-07-11 02:35:35.674304] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:45.336 02:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:45.336 02:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:45.336 02:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:45.336 02:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:45.336 02:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:45.336 02:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:45.336 02:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:45.336 02:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:45.594 [2024-07-11 02:35:35.774516] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:45.594 [2024-07-11 02:35:35.776954] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:45.853 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:45.853 "name": "raid_bdev1", 00:29:45.853 "uuid": "c712fa69-1454-4396-b91f-47055bd11565", 00:29:45.853 "strip_size_kb": 0, 00:29:45.853 "state": "online", 00:29:45.853 "raid_level": "raid1", 00:29:45.853 "superblock": true, 00:29:45.853 "num_base_bdevs": 4, 00:29:45.854 "num_base_bdevs_discovered": 3, 00:29:45.854 "num_base_bdevs_operational": 3, 00:29:45.854 "base_bdevs_list": [ 00:29:45.854 { 00:29:45.854 "name": "spare", 00:29:45.854 "uuid": "9a1d6c48-a80b-57cf-8b5f-840d723ccd48", 00:29:45.854 "is_configured": true, 00:29:45.854 "data_offset": 2048, 00:29:45.854 "data_size": 63488 00:29:45.854 }, 00:29:45.854 { 00:29:45.854 "name": null, 00:29:45.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:45.854 "is_configured": false, 00:29:45.854 "data_offset": 2048, 00:29:45.854 "data_size": 63488 00:29:45.854 }, 00:29:45.854 { 00:29:45.854 "name": "BaseBdev3", 00:29:45.854 "uuid": "f84a1e5f-6ba6-5acd-b2c2-f08e2ebe0d69", 00:29:45.854 
"is_configured": true, 00:29:45.854 "data_offset": 2048, 00:29:45.854 "data_size": 63488 00:29:45.854 }, 00:29:45.854 { 00:29:45.854 "name": "BaseBdev4", 00:29:45.854 "uuid": "3de5ffde-d4b0-54c5-9ead-c05f0ff130dd", 00:29:45.854 "is_configured": true, 00:29:45.854 "data_offset": 2048, 00:29:45.854 "data_size": 63488 00:29:45.854 } 00:29:45.854 ] 00:29:45.854 }' 00:29:45.854 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:45.854 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:45.854 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:45.854 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:45.854 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:29:45.854 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:45.854 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:45.854 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:45.854 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:45.854 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:45.854 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:45.854 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:46.113 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:46.113 "name": "raid_bdev1", 00:29:46.113 "uuid": "c712fa69-1454-4396-b91f-47055bd11565", 00:29:46.113 "strip_size_kb": 0, 00:29:46.113 "state": "online", 00:29:46.113 "raid_level": "raid1", 00:29:46.113 "superblock": true, 00:29:46.113 "num_base_bdevs": 4, 00:29:46.113 "num_base_bdevs_discovered": 3, 00:29:46.113 "num_base_bdevs_operational": 3, 00:29:46.113 "base_bdevs_list": [ 00:29:46.113 { 00:29:46.113 "name": "spare", 00:29:46.113 "uuid": "9a1d6c48-a80b-57cf-8b5f-840d723ccd48", 00:29:46.113 "is_configured": true, 00:29:46.113 "data_offset": 2048, 00:29:46.113 "data_size": 63488 00:29:46.113 }, 00:29:46.113 { 00:29:46.113 "name": null, 00:29:46.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:46.113 "is_configured": false, 00:29:46.113 "data_offset": 2048, 00:29:46.113 "data_size": 63488 00:29:46.113 }, 00:29:46.113 { 00:29:46.113 "name": "BaseBdev3", 00:29:46.113 "uuid": "f84a1e5f-6ba6-5acd-b2c2-f08e2ebe0d69", 00:29:46.113 "is_configured": true, 00:29:46.113 "data_offset": 2048, 00:29:46.113 "data_size": 63488 00:29:46.113 }, 00:29:46.113 { 00:29:46.113 "name": "BaseBdev4", 00:29:46.113 "uuid": "3de5ffde-d4b0-54c5-9ead-c05f0ff130dd", 00:29:46.113 "is_configured": true, 00:29:46.113 "data_offset": 2048, 00:29:46.113 "data_size": 63488 00:29:46.113 } 00:29:46.113 ] 00:29:46.113 }' 00:29:46.113 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:46.113 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:46.113 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 
-- # jq -r '.process.target // "none"' 00:29:46.113 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:46.113 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:46.113 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:46.113 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:46.113 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:46.113 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:46.113 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:46.113 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:46.113 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:46.113 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:46.113 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:46.113 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:46.113 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:46.372 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:46.372 "name": "raid_bdev1", 00:29:46.372 "uuid": "c712fa69-1454-4396-b91f-47055bd11565", 00:29:46.372 "strip_size_kb": 0, 00:29:46.372 "state": "online", 00:29:46.372 "raid_level": "raid1", 00:29:46.372 "superblock": true, 00:29:46.372 "num_base_bdevs": 4, 00:29:46.372 "num_base_bdevs_discovered": 3, 00:29:46.372 "num_base_bdevs_operational": 3, 00:29:46.372 "base_bdevs_list": [ 00:29:46.372 { 00:29:46.372 "name": "spare", 00:29:46.372 "uuid": "9a1d6c48-a80b-57cf-8b5f-840d723ccd48", 00:29:46.372 "is_configured": true, 00:29:46.372 "data_offset": 2048, 00:29:46.372 "data_size": 63488 00:29:46.372 }, 00:29:46.372 { 00:29:46.372 "name": null, 00:29:46.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:46.372 "is_configured": false, 00:29:46.372 "data_offset": 2048, 00:29:46.372 "data_size": 63488 00:29:46.372 }, 00:29:46.372 { 00:29:46.372 "name": "BaseBdev3", 00:29:46.372 "uuid": "f84a1e5f-6ba6-5acd-b2c2-f08e2ebe0d69", 00:29:46.372 "is_configured": true, 00:29:46.372 "data_offset": 2048, 00:29:46.372 "data_size": 63488 00:29:46.372 }, 00:29:46.372 { 00:29:46.372 "name": "BaseBdev4", 00:29:46.372 "uuid": "3de5ffde-d4b0-54c5-9ead-c05f0ff130dd", 00:29:46.372 "is_configured": true, 00:29:46.372 "data_offset": 2048, 00:29:46.372 "data_size": 63488 00:29:46.372 } 00:29:46.372 ] 00:29:46.372 }' 00:29:46.372 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:46.372 02:35:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:46.941 02:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:47.200 [2024-07-11 02:35:37.442751] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
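
Every state check in the trace above follows the same pattern: dump all raid bdevs over the RPC socket, select the bdev of interest with jq, and compare the rebuild process fields against expected values. A minimal standalone sketch of that verification loop, assuming the same rpc.py and /var/tmp/spdk-raid.sock paths seen in this log (the variable names here are illustrative, not part of bdev_raid.sh):

    # wrapper around the RPC client used throughout this trace
    rpc() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    ptype=$(jq -r '.process.type // "none"' <<< "$info")      # "rebuild" while the spare is being synced
    ptarget=$(jq -r '.process.target // "none"' <<< "$info")  # bdev the rebuilt data is written to
    [[ $ptype == rebuild && $ptarget == spare ]] && echo "rebuild to spare in progress"

Once the process block disappears from the RPC output, both filters fall back to "none" through jq's // operator, which is how the harness detects a finished (or never-started) rebuild.
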
00:29:47.200 [2024-07-11 02:35:37.442792] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:29:47.200
00:29:47.200                                                                                        Latency(us)
00:29:47.200 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:29:47.200 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:29:47.200 raid_bdev1         :      11.38      91.33     273.99       0.00     0.00    14899.45     295.62  112607.94
00:29:47.200 ===================================================================================================================
00:29:47.200 Total              :                 91.33     273.99       0.00     0.00    14899.45     295.62  112607.94
00:29:47.200 [2024-07-11 02:35:37.546962] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:47.200 [2024-07-11 02:35:37.546996] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:47.200 [2024-07-11 02:35:37.547092] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:47.200 [2024-07-11 02:35:37.547105] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x262c300 name raid_bdev1, state offline 00:29:47.200 0 00:29:47.458 02:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:47.458 02:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:29:47.458 02:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:29:47.458 02:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:29:47.458 02:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:29:47.458 02:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:29:47.458 02:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:47.458 02:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:29:47.458 02:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:47.458 02:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:47.458 02:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:47.458 02:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:29:47.458 02:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:47.458 02:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:47.458 02:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:29:47.717 /dev/nbd0 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:47.717 02:35:38
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:47.717 1+0 records in 00:29:47.717 1+0 records out 00:29:47.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285979 s, 14.3 MB/s 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # continue 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:47.717 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:29:47.976 /dev/nbd1 00:29:47.976 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:47.976 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:47.976 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:29:47.976 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:29:47.976 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:47.976 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:47.976 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:29:47.976 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:29:47.976 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:47.976 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:47.976 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:47.976 1+0 records in 00:29:47.976 1+0 records out 00:29:47.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288743 s, 14.2 MB/s 00:29:47.976 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:47.976 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:29:47.976 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:47.976 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:47.976 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:29:47.976 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:47.976 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:47.976 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:47.976 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:29:47.976 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:47.976 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:29:47.976 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:47.976 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:29:48.235 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:48.235 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:48.235 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:48.235 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:48.235 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:48.235 02:35:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:48.235 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:48.235 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:48.235 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:29:48.235 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:48.235 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:48.235 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:29:48.235 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:29:48.235 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:48.235 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:29:48.235 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:48.235 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:29:48.235 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:48.235 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:29:48.235 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:48.235 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:48.235 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:29:48.497 /dev/nbd1 00:29:48.497 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:48.497 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:48.497 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:29:48.497 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:29:48.497 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:48.497 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:48.497 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:29:48.497 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:29:48.497 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:48.497 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:48.497 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:48.497 1+0 records in 00:29:48.497 1+0 records out 00:29:48.497 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290541 s, 14.1 MB/s 00:29:48.497 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:48.497 02:35:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:29:48.497 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:48.497 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:48.497 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:29:48.497 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:48.497 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:48.497 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:48.756 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:29:48.756 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:48.756 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:29:48.756 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:48.756 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:29:48.756 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:48.756 02:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:49.015 02:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:49.015 02:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:49.015 02:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:49.015 02:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:49.015 02:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:49.015 02:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:49.015 02:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:29:49.015 02:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:49.015 02:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:49.015 02:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:49.015 02:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:49.015 02:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:49.015 02:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:29:49.015 02:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:49.015 02:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:49.015 02:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:49.274 02:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd0 00:29:49.274 02:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:49.274 02:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:49.274 02:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:49.274 02:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:49.274 02:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:29:49.274 02:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:49.274 02:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:29:49.274 02:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:49.274 02:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:49.842 [2024-07-11 02:35:40.184314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:49.842 [2024-07-11 02:35:40.184369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:49.842 [2024-07-11 02:35:40.184392] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27dbb90 00:29:49.842 [2024-07-11 02:35:40.184405] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:49.842 [2024-07-11 02:35:40.186009] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:49.842 [2024-07-11 02:35:40.186041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:49.842 [2024-07-11 02:35:40.186124] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:49.842 [2024-07-11 02:35:40.186153] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:49.842 [2024-07-11 02:35:40.186255] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:49.843 [2024-07-11 02:35:40.186327] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:49.843 spare 00:29:49.843 02:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:49.843 02:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:49.843 02:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:49.843 02:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:49.843 02:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:49.843 02:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:49.843 02:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:49.843 02:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:49.843 02:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:49.843 02:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:49.843 
02:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:49.843 02:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:50.103 [2024-07-11 02:35:40.286647] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x26c1f50 00:29:50.103 [2024-07-11 02:35:40.286668] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:50.103 [2024-07-11 02:35:40.286879] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x27d6a80 00:29:50.103 [2024-07-11 02:35:40.287043] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x26c1f50 00:29:50.103 [2024-07-11 02:35:40.287053] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x26c1f50 00:29:50.103 [2024-07-11 02:35:40.287170] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:50.103 02:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:50.103 "name": "raid_bdev1", 00:29:50.103 "uuid": "c712fa69-1454-4396-b91f-47055bd11565", 00:29:50.103 "strip_size_kb": 0, 00:29:50.103 "state": "online", 00:29:50.103 "raid_level": "raid1", 00:29:50.103 "superblock": true, 00:29:50.103 "num_base_bdevs": 4, 00:29:50.103 "num_base_bdevs_discovered": 3, 00:29:50.103 "num_base_bdevs_operational": 3, 00:29:50.103 "base_bdevs_list": [ 00:29:50.103 { 00:29:50.103 "name": "spare", 00:29:50.103 "uuid": "9a1d6c48-a80b-57cf-8b5f-840d723ccd48", 00:29:50.103 "is_configured": true, 00:29:50.103 "data_offset": 2048, 00:29:50.103 "data_size": 63488 00:29:50.103 }, 00:29:50.103 { 00:29:50.103 "name": null, 00:29:50.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:50.103 "is_configured": false, 00:29:50.103 "data_offset": 2048, 00:29:50.103 "data_size": 63488 00:29:50.103 }, 00:29:50.103 { 00:29:50.103 "name": "BaseBdev3", 00:29:50.103 "uuid": "f84a1e5f-6ba6-5acd-b2c2-f08e2ebe0d69", 00:29:50.103 "is_configured": true, 00:29:50.103 "data_offset": 2048, 00:29:50.103 "data_size": 63488 00:29:50.103 }, 00:29:50.103 { 00:29:50.103 "name": "BaseBdev4", 00:29:50.103 "uuid": "3de5ffde-d4b0-54c5-9ead-c05f0ff130dd", 00:29:50.103 "is_configured": true, 00:29:50.103 "data_offset": 2048, 00:29:50.103 "data_size": 63488 00:29:50.103 } 00:29:50.103 ] 00:29:50.103 }' 00:29:50.103 02:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:50.103 02:35:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:51.042 02:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:51.042 02:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:51.042 02:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:51.042 02:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:51.042 02:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:51.042 02:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:51.042 02:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:29:51.042 02:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:51.042 "name": "raid_bdev1", 00:29:51.042 "uuid": "c712fa69-1454-4396-b91f-47055bd11565", 00:29:51.042 "strip_size_kb": 0, 00:29:51.042 "state": "online", 00:29:51.042 "raid_level": "raid1", 00:29:51.042 "superblock": true, 00:29:51.042 "num_base_bdevs": 4, 00:29:51.042 "num_base_bdevs_discovered": 3, 00:29:51.042 "num_base_bdevs_operational": 3, 00:29:51.042 "base_bdevs_list": [ 00:29:51.042 { 00:29:51.042 "name": "spare", 00:29:51.042 "uuid": "9a1d6c48-a80b-57cf-8b5f-840d723ccd48", 00:29:51.042 "is_configured": true, 00:29:51.042 "data_offset": 2048, 00:29:51.042 "data_size": 63488 00:29:51.042 }, 00:29:51.042 { 00:29:51.042 "name": null, 00:29:51.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:51.042 "is_configured": false, 00:29:51.042 "data_offset": 2048, 00:29:51.042 "data_size": 63488 00:29:51.042 }, 00:29:51.042 { 00:29:51.042 "name": "BaseBdev3", 00:29:51.042 "uuid": "f84a1e5f-6ba6-5acd-b2c2-f08e2ebe0d69", 00:29:51.042 "is_configured": true, 00:29:51.042 "data_offset": 2048, 00:29:51.042 "data_size": 63488 00:29:51.042 }, 00:29:51.042 { 00:29:51.042 "name": "BaseBdev4", 00:29:51.042 "uuid": "3de5ffde-d4b0-54c5-9ead-c05f0ff130dd", 00:29:51.042 "is_configured": true, 00:29:51.042 "data_offset": 2048, 00:29:51.042 "data_size": 63488 00:29:51.042 } 00:29:51.042 ] 00:29:51.042 }' 00:29:51.042 02:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:51.042 02:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:51.042 02:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:51.303 02:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:51.303 02:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:51.303 02:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:51.563 02:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:29:51.563 02:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:51.563 [2024-07-11 02:35:41.969612] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:51.823 02:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:51.823 02:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:51.823 02:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:51.823 02:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:51.823 02:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:51.823 02:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:51.823 02:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:51.823 02:35:41 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:51.823 02:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:51.823 02:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:51.823 02:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:51.823 02:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:52.085 02:35:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:52.085 "name": "raid_bdev1", 00:29:52.085 "uuid": "c712fa69-1454-4396-b91f-47055bd11565", 00:29:52.085 "strip_size_kb": 0, 00:29:52.085 "state": "online", 00:29:52.085 "raid_level": "raid1", 00:29:52.085 "superblock": true, 00:29:52.085 "num_base_bdevs": 4, 00:29:52.085 "num_base_bdevs_discovered": 2, 00:29:52.085 "num_base_bdevs_operational": 2, 00:29:52.085 "base_bdevs_list": [ 00:29:52.085 { 00:29:52.085 "name": null, 00:29:52.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:52.085 "is_configured": false, 00:29:52.085 "data_offset": 2048, 00:29:52.085 "data_size": 63488 00:29:52.085 }, 00:29:52.085 { 00:29:52.085 "name": null, 00:29:52.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:52.085 "is_configured": false, 00:29:52.085 "data_offset": 2048, 00:29:52.085 "data_size": 63488 00:29:52.085 }, 00:29:52.085 { 00:29:52.085 "name": "BaseBdev3", 00:29:52.085 "uuid": "f84a1e5f-6ba6-5acd-b2c2-f08e2ebe0d69", 00:29:52.085 "is_configured": true, 00:29:52.085 "data_offset": 2048, 00:29:52.085 "data_size": 63488 00:29:52.085 }, 00:29:52.085 { 00:29:52.085 "name": "BaseBdev4", 00:29:52.085 "uuid": "3de5ffde-d4b0-54c5-9ead-c05f0ff130dd", 00:29:52.085 "is_configured": true, 00:29:52.085 "data_offset": 2048, 00:29:52.085 "data_size": 63488 00:29:52.085 } 00:29:52.085 ] 00:29:52.085 }' 00:29:52.085 02:35:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:52.085 02:35:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:53.025 02:35:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:53.025 [2024-07-11 02:35:43.313551] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:53.025 [2024-07-11 02:35:43.313700] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:29:53.025 [2024-07-11 02:35:43.313723] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:29:53.025 [2024-07-11 02:35:43.313752] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:53.025 [2024-07-11 02:35:43.318011] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x27d6a80 00:29:53.025 [2024-07-11 02:35:43.320249] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:53.025 02:35:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:29:53.962 02:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:53.962 02:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:53.962 02:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:53.962 02:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:53.962 02:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:53.962 02:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:53.962 02:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:54.222 02:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:54.222 "name": "raid_bdev1", 00:29:54.222 "uuid": "c712fa69-1454-4396-b91f-47055bd11565", 00:29:54.222 "strip_size_kb": 0, 00:29:54.222 "state": "online", 00:29:54.222 "raid_level": "raid1", 00:29:54.222 "superblock": true, 00:29:54.222 "num_base_bdevs": 4, 00:29:54.222 "num_base_bdevs_discovered": 3, 00:29:54.222 "num_base_bdevs_operational": 3, 00:29:54.222 "process": { 00:29:54.222 "type": "rebuild", 00:29:54.222 "target": "spare", 00:29:54.222 "progress": { 00:29:54.222 "blocks": 24576, 00:29:54.222 "percent": 38 00:29:54.222 } 00:29:54.222 }, 00:29:54.222 "base_bdevs_list": [ 00:29:54.222 { 00:29:54.222 "name": "spare", 00:29:54.222 "uuid": "9a1d6c48-a80b-57cf-8b5f-840d723ccd48", 00:29:54.222 "is_configured": true, 00:29:54.222 "data_offset": 2048, 00:29:54.222 "data_size": 63488 00:29:54.222 }, 00:29:54.222 { 00:29:54.222 "name": null, 00:29:54.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:54.222 "is_configured": false, 00:29:54.222 "data_offset": 2048, 00:29:54.222 "data_size": 63488 00:29:54.222 }, 00:29:54.222 { 00:29:54.222 "name": "BaseBdev3", 00:29:54.222 "uuid": "f84a1e5f-6ba6-5acd-b2c2-f08e2ebe0d69", 00:29:54.222 "is_configured": true, 00:29:54.222 "data_offset": 2048, 00:29:54.222 "data_size": 63488 00:29:54.222 }, 00:29:54.222 { 00:29:54.222 "name": "BaseBdev4", 00:29:54.222 "uuid": "3de5ffde-d4b0-54c5-9ead-c05f0ff130dd", 00:29:54.222 "is_configured": true, 00:29:54.222 "data_offset": 2048, 00:29:54.222 "data_size": 63488 00:29:54.222 } 00:29:54.222 ] 00:29:54.222 }' 00:29:54.222 02:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:54.481 02:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:54.481 02:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:54.481 02:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:54.481 02:35:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:54.741 [2024-07-11 02:35:44.972209] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:54.741 [2024-07-11 02:35:45.033307] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:54.741 [2024-07-11 02:35:45.033352] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:54.741 [2024-07-11 02:35:45.033368] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:54.741 [2024-07-11 02:35:45.033376] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:54.741 02:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:54.741 02:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:54.741 02:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:54.741 02:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:54.741 02:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:54.741 02:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:54.741 02:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:54.741 02:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:54.741 02:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:54.741 02:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:54.741 02:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:54.741 02:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:55.328 02:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:55.328 "name": "raid_bdev1", 00:29:55.328 "uuid": "c712fa69-1454-4396-b91f-47055bd11565", 00:29:55.328 "strip_size_kb": 0, 00:29:55.328 "state": "online", 00:29:55.328 "raid_level": "raid1", 00:29:55.328 "superblock": true, 00:29:55.328 "num_base_bdevs": 4, 00:29:55.328 "num_base_bdevs_discovered": 2, 00:29:55.328 "num_base_bdevs_operational": 2, 00:29:55.328 "base_bdevs_list": [ 00:29:55.328 { 00:29:55.328 "name": null, 00:29:55.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:55.328 "is_configured": false, 00:29:55.328 "data_offset": 2048, 00:29:55.328 "data_size": 63488 00:29:55.328 }, 00:29:55.328 { 00:29:55.328 "name": null, 00:29:55.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:55.328 "is_configured": false, 00:29:55.328 "data_offset": 2048, 00:29:55.328 "data_size": 63488 00:29:55.328 }, 00:29:55.328 { 00:29:55.328 "name": "BaseBdev3", 00:29:55.328 "uuid": "f84a1e5f-6ba6-5acd-b2c2-f08e2ebe0d69", 00:29:55.328 "is_configured": true, 00:29:55.328 "data_offset": 2048, 00:29:55.328 "data_size": 63488 00:29:55.328 }, 00:29:55.328 { 00:29:55.328 "name": "BaseBdev4", 00:29:55.328 "uuid": "3de5ffde-d4b0-54c5-9ead-c05f0ff130dd", 00:29:55.328 "is_configured": true, 
00:29:55.328 "data_offset": 2048, 00:29:55.328 "data_size": 63488 00:29:55.328 } 00:29:55.328 ] 00:29:55.328 }' 00:29:55.328 02:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:55.328 02:35:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:55.895 02:35:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:56.154 [2024-07-11 02:35:46.510258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:56.154 [2024-07-11 02:35:46.510314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:56.154 [2024-07-11 02:35:46.510336] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27d2430 00:29:56.154 [2024-07-11 02:35:46.510348] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:56.154 [2024-07-11 02:35:46.510718] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:56.154 [2024-07-11 02:35:46.510736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:56.154 [2024-07-11 02:35:46.510829] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:56.154 [2024-07-11 02:35:46.510842] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:29:56.154 [2024-07-11 02:35:46.510852] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:56.154 [2024-07-11 02:35:46.510871] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:56.154 [2024-07-11 02:35:46.515188] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x27cd600 00:29:56.154 spare 00:29:56.154 [2024-07-11 02:35:46.516630] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:56.154 02:35:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:29:57.535 02:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:57.535 02:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:57.535 02:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:57.535 02:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:57.535 02:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:57.535 02:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:57.535 02:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:57.535 02:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:57.535 "name": "raid_bdev1", 00:29:57.535 "uuid": "c712fa69-1454-4396-b91f-47055bd11565", 00:29:57.535 "strip_size_kb": 0, 00:29:57.535 "state": "online", 00:29:57.535 "raid_level": "raid1", 00:29:57.535 "superblock": true, 00:29:57.535 "num_base_bdevs": 4, 00:29:57.535 "num_base_bdevs_discovered": 3, 00:29:57.535 
"num_base_bdevs_operational": 3, 00:29:57.535 "process": { 00:29:57.535 "type": "rebuild", 00:29:57.535 "target": "spare", 00:29:57.535 "progress": { 00:29:57.535 "blocks": 22528, 00:29:57.535 "percent": 35 00:29:57.535 } 00:29:57.535 }, 00:29:57.535 "base_bdevs_list": [ 00:29:57.535 { 00:29:57.536 "name": "spare", 00:29:57.536 "uuid": "9a1d6c48-a80b-57cf-8b5f-840d723ccd48", 00:29:57.536 "is_configured": true, 00:29:57.536 "data_offset": 2048, 00:29:57.536 "data_size": 63488 00:29:57.536 }, 00:29:57.536 { 00:29:57.536 "name": null, 00:29:57.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:57.536 "is_configured": false, 00:29:57.536 "data_offset": 2048, 00:29:57.536 "data_size": 63488 00:29:57.536 }, 00:29:57.536 { 00:29:57.536 "name": "BaseBdev3", 00:29:57.536 "uuid": "f84a1e5f-6ba6-5acd-b2c2-f08e2ebe0d69", 00:29:57.536 "is_configured": true, 00:29:57.536 "data_offset": 2048, 00:29:57.536 "data_size": 63488 00:29:57.536 }, 00:29:57.536 { 00:29:57.536 "name": "BaseBdev4", 00:29:57.536 "uuid": "3de5ffde-d4b0-54c5-9ead-c05f0ff130dd", 00:29:57.536 "is_configured": true, 00:29:57.536 "data_offset": 2048, 00:29:57.536 "data_size": 63488 00:29:57.536 } 00:29:57.536 ] 00:29:57.536 }' 00:29:57.536 02:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:57.536 02:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:57.536 02:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:57.536 02:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:57.536 02:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:57.796 [2024-07-11 02:35:48.068964] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:57.796 [2024-07-11 02:35:48.129074] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:57.796 [2024-07-11 02:35:48.129119] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:57.796 [2024-07-11 02:35:48.129136] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:57.796 [2024-07-11 02:35:48.129145] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:57.796 02:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:57.796 02:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:57.796 02:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:57.796 02:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:57.796 02:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:57.796 02:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:57.796 02:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:57.796 02:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:57.796 02:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:57.796 
02:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:57.796 02:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:57.796 02:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:58.366 02:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:58.366 "name": "raid_bdev1", 00:29:58.366 "uuid": "c712fa69-1454-4396-b91f-47055bd11565", 00:29:58.366 "strip_size_kb": 0, 00:29:58.366 "state": "online", 00:29:58.366 "raid_level": "raid1", 00:29:58.366 "superblock": true, 00:29:58.366 "num_base_bdevs": 4, 00:29:58.366 "num_base_bdevs_discovered": 2, 00:29:58.366 "num_base_bdevs_operational": 2, 00:29:58.366 "base_bdevs_list": [ 00:29:58.366 { 00:29:58.366 "name": null, 00:29:58.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:58.366 "is_configured": false, 00:29:58.366 "data_offset": 2048, 00:29:58.366 "data_size": 63488 00:29:58.366 }, 00:29:58.366 { 00:29:58.366 "name": null, 00:29:58.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:58.366 "is_configured": false, 00:29:58.366 "data_offset": 2048, 00:29:58.366 "data_size": 63488 00:29:58.366 }, 00:29:58.366 { 00:29:58.366 "name": "BaseBdev3", 00:29:58.366 "uuid": "f84a1e5f-6ba6-5acd-b2c2-f08e2ebe0d69", 00:29:58.367 "is_configured": true, 00:29:58.367 "data_offset": 2048, 00:29:58.367 "data_size": 63488 00:29:58.367 }, 00:29:58.367 { 00:29:58.367 "name": "BaseBdev4", 00:29:58.367 "uuid": "3de5ffde-d4b0-54c5-9ead-c05f0ff130dd", 00:29:58.367 "is_configured": true, 00:29:58.367 "data_offset": 2048, 00:29:58.367 "data_size": 63488 00:29:58.367 } 00:29:58.367 ] 00:29:58.367 }' 00:29:58.367 02:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:58.367 02:35:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:59.305 02:35:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:59.305 02:35:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:59.305 02:35:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:59.305 02:35:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:59.305 02:35:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:59.305 02:35:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:59.305 02:35:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:59.563 02:35:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:59.563 "name": "raid_bdev1", 00:29:59.563 "uuid": "c712fa69-1454-4396-b91f-47055bd11565", 00:29:59.563 "strip_size_kb": 0, 00:29:59.563 "state": "online", 00:29:59.563 "raid_level": "raid1", 00:29:59.563 "superblock": true, 00:29:59.563 "num_base_bdevs": 4, 00:29:59.563 "num_base_bdevs_discovered": 2, 00:29:59.563 "num_base_bdevs_operational": 2, 00:29:59.563 "base_bdevs_list": [ 00:29:59.563 { 00:29:59.563 "name": null, 00:29:59.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.563 
"is_configured": false, 00:29:59.563 "data_offset": 2048, 00:29:59.563 "data_size": 63488 00:29:59.563 }, 00:29:59.563 { 00:29:59.563 "name": null, 00:29:59.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.563 "is_configured": false, 00:29:59.563 "data_offset": 2048, 00:29:59.563 "data_size": 63488 00:29:59.563 }, 00:29:59.563 { 00:29:59.563 "name": "BaseBdev3", 00:29:59.563 "uuid": "f84a1e5f-6ba6-5acd-b2c2-f08e2ebe0d69", 00:29:59.563 "is_configured": true, 00:29:59.563 "data_offset": 2048, 00:29:59.563 "data_size": 63488 00:29:59.563 }, 00:29:59.563 { 00:29:59.563 "name": "BaseBdev4", 00:29:59.563 "uuid": "3de5ffde-d4b0-54c5-9ead-c05f0ff130dd", 00:29:59.563 "is_configured": true, 00:29:59.563 "data_offset": 2048, 00:29:59.563 "data_size": 63488 00:29:59.563 } 00:29:59.563 ] 00:29:59.563 }' 00:29:59.563 02:35:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:59.563 02:35:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:59.563 02:35:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:59.563 02:35:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:59.563 02:35:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:29:59.823 02:35:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:00.082 [2024-07-11 02:35:50.351660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:00.082 [2024-07-11 02:35:50.351713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:00.082 [2024-07-11 02:35:50.351734] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27cd400 00:30:00.082 [2024-07-11 02:35:50.351747] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:00.082 [2024-07-11 02:35:50.352116] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:00.082 [2024-07-11 02:35:50.352139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:00.082 [2024-07-11 02:35:50.352209] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:30:00.082 [2024-07-11 02:35:50.352223] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:00.082 [2024-07-11 02:35:50.352233] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:00.082 BaseBdev1 00:30:00.082 02:35:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:30:01.020 02:35:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:01.020 02:35:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:01.020 02:35:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:01.020 02:35:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:01.020 02:35:51 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:01.020 02:35:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:01.020 02:35:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:01.020 02:35:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:01.020 02:35:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:01.020 02:35:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:01.020 02:35:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:01.020 02:35:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:01.279 02:35:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:01.279 "name": "raid_bdev1", 00:30:01.279 "uuid": "c712fa69-1454-4396-b91f-47055bd11565", 00:30:01.279 "strip_size_kb": 0, 00:30:01.279 "state": "online", 00:30:01.279 "raid_level": "raid1", 00:30:01.279 "superblock": true, 00:30:01.279 "num_base_bdevs": 4, 00:30:01.279 "num_base_bdevs_discovered": 2, 00:30:01.279 "num_base_bdevs_operational": 2, 00:30:01.279 "base_bdevs_list": [ 00:30:01.279 { 00:30:01.279 "name": null, 00:30:01.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:01.279 "is_configured": false, 00:30:01.280 "data_offset": 2048, 00:30:01.280 "data_size": 63488 00:30:01.280 }, 00:30:01.280 { 00:30:01.280 "name": null, 00:30:01.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:01.280 "is_configured": false, 00:30:01.280 "data_offset": 2048, 00:30:01.280 "data_size": 63488 00:30:01.280 }, 00:30:01.280 { 00:30:01.280 "name": "BaseBdev3", 00:30:01.280 "uuid": "f84a1e5f-6ba6-5acd-b2c2-f08e2ebe0d69", 00:30:01.280 "is_configured": true, 00:30:01.280 "data_offset": 2048, 00:30:01.280 "data_size": 63488 00:30:01.280 }, 00:30:01.280 { 00:30:01.280 "name": "BaseBdev4", 00:30:01.280 "uuid": "3de5ffde-d4b0-54c5-9ead-c05f0ff130dd", 00:30:01.280 "is_configured": true, 00:30:01.280 "data_offset": 2048, 00:30:01.280 "data_size": 63488 00:30:01.280 } 00:30:01.280 ] 00:30:01.280 }' 00:30:01.280 02:35:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:01.280 02:35:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:01.849 02:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:01.849 02:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:01.849 02:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:01.849 02:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:01.849 02:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:01.849 02:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:01.849 02:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:02.109 02:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:30:02.109 "name": "raid_bdev1", 00:30:02.109 "uuid": "c712fa69-1454-4396-b91f-47055bd11565", 00:30:02.109 "strip_size_kb": 0, 00:30:02.109 "state": "online", 00:30:02.109 "raid_level": "raid1", 00:30:02.109 "superblock": true, 00:30:02.109 "num_base_bdevs": 4, 00:30:02.109 "num_base_bdevs_discovered": 2, 00:30:02.109 "num_base_bdevs_operational": 2, 00:30:02.109 "base_bdevs_list": [ 00:30:02.109 { 00:30:02.109 "name": null, 00:30:02.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:02.109 "is_configured": false, 00:30:02.109 "data_offset": 2048, 00:30:02.109 "data_size": 63488 00:30:02.109 }, 00:30:02.109 { 00:30:02.109 "name": null, 00:30:02.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:02.109 "is_configured": false, 00:30:02.109 "data_offset": 2048, 00:30:02.109 "data_size": 63488 00:30:02.109 }, 00:30:02.109 { 00:30:02.109 "name": "BaseBdev3", 00:30:02.109 "uuid": "f84a1e5f-6ba6-5acd-b2c2-f08e2ebe0d69", 00:30:02.109 "is_configured": true, 00:30:02.109 "data_offset": 2048, 00:30:02.109 "data_size": 63488 00:30:02.109 }, 00:30:02.109 { 00:30:02.109 "name": "BaseBdev4", 00:30:02.109 "uuid": "3de5ffde-d4b0-54c5-9ead-c05f0ff130dd", 00:30:02.109 "is_configured": true, 00:30:02.109 "data_offset": 2048, 00:30:02.109 "data_size": 63488 00:30:02.109 } 00:30:02.109 ] 00:30:02.109 }' 00:30:02.109 02:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:02.368 02:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:02.368 02:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:02.368 02:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:02.368 02:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:02.368 02:35:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:30:02.368 02:35:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:02.368 02:35:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:02.368 02:35:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:02.368 02:35:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:02.368 02:35:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:02.368 02:35:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:02.368 02:35:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:02.368 02:35:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:02.368 02:35:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:30:02.368 
02:35:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:02.627 [2024-07-11 02:35:52.814617] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:02.627 [2024-07-11 02:35:52.814742] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:02.627 [2024-07-11 02:35:52.814765] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:02.627 request: 00:30:02.627 { 00:30:02.627 "base_bdev": "BaseBdev1", 00:30:02.627 "raid_bdev": "raid_bdev1", 00:30:02.627 "method": "bdev_raid_add_base_bdev", 00:30:02.627 "req_id": 1 00:30:02.627 } 00:30:02.627 Got JSON-RPC error response 00:30:02.627 response: 00:30:02.627 { 00:30:02.627 "code": -22, 00:30:02.627 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:30:02.627 } 00:30:02.627 02:35:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:30:02.627 02:35:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:02.627 02:35:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:02.627 02:35:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:02.627 02:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:30:03.562 02:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:03.562 02:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:03.562 02:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:03.562 02:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:03.562 02:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:03.562 02:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:03.562 02:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:03.562 02:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:03.562 02:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:03.562 02:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:03.562 02:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:03.562 02:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:03.820 02:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:03.820 "name": "raid_bdev1", 00:30:03.820 "uuid": "c712fa69-1454-4396-b91f-47055bd11565", 00:30:03.820 "strip_size_kb": 0, 00:30:03.820 "state": "online", 00:30:03.820 "raid_level": "raid1", 00:30:03.820 "superblock": true, 00:30:03.820 "num_base_bdevs": 4, 00:30:03.820 "num_base_bdevs_discovered": 2, 00:30:03.820 "num_base_bdevs_operational": 2, 00:30:03.820 "base_bdevs_list": [ 
00:30:03.820 { 00:30:03.820 "name": null, 00:30:03.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:03.820 "is_configured": false, 00:30:03.820 "data_offset": 2048, 00:30:03.820 "data_size": 63488 00:30:03.820 }, 00:30:03.820 { 00:30:03.820 "name": null, 00:30:03.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:03.820 "is_configured": false, 00:30:03.820 "data_offset": 2048, 00:30:03.820 "data_size": 63488 00:30:03.820 }, 00:30:03.820 { 00:30:03.820 "name": "BaseBdev3", 00:30:03.820 "uuid": "f84a1e5f-6ba6-5acd-b2c2-f08e2ebe0d69", 00:30:03.820 "is_configured": true, 00:30:03.820 "data_offset": 2048, 00:30:03.820 "data_size": 63488 00:30:03.820 }, 00:30:03.820 { 00:30:03.820 "name": "BaseBdev4", 00:30:03.820 "uuid": "3de5ffde-d4b0-54c5-9ead-c05f0ff130dd", 00:30:03.820 "is_configured": true, 00:30:03.820 "data_offset": 2048, 00:30:03.820 "data_size": 63488 00:30:03.820 } 00:30:03.820 ] 00:30:03.820 }' 00:30:03.820 02:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:03.820 02:35:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:04.386 02:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:04.386 02:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:04.386 02:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:04.386 02:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:04.386 02:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:04.386 02:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:04.387 02:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:04.647 02:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:04.647 "name": "raid_bdev1", 00:30:04.647 "uuid": "c712fa69-1454-4396-b91f-47055bd11565", 00:30:04.647 "strip_size_kb": 0, 00:30:04.647 "state": "online", 00:30:04.647 "raid_level": "raid1", 00:30:04.647 "superblock": true, 00:30:04.647 "num_base_bdevs": 4, 00:30:04.647 "num_base_bdevs_discovered": 2, 00:30:04.647 "num_base_bdevs_operational": 2, 00:30:04.647 "base_bdevs_list": [ 00:30:04.647 { 00:30:04.647 "name": null, 00:30:04.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:04.647 "is_configured": false, 00:30:04.647 "data_offset": 2048, 00:30:04.647 "data_size": 63488 00:30:04.647 }, 00:30:04.647 { 00:30:04.647 "name": null, 00:30:04.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:04.647 "is_configured": false, 00:30:04.647 "data_offset": 2048, 00:30:04.647 "data_size": 63488 00:30:04.647 }, 00:30:04.647 { 00:30:04.647 "name": "BaseBdev3", 00:30:04.647 "uuid": "f84a1e5f-6ba6-5acd-b2c2-f08e2ebe0d69", 00:30:04.647 "is_configured": true, 00:30:04.647 "data_offset": 2048, 00:30:04.647 "data_size": 63488 00:30:04.647 }, 00:30:04.647 { 00:30:04.647 "name": "BaseBdev4", 00:30:04.647 "uuid": "3de5ffde-d4b0-54c5-9ead-c05f0ff130dd", 00:30:04.647 "is_configured": true, 00:30:04.647 "data_offset": 2048, 00:30:04.647 "data_size": 63488 00:30:04.647 } 00:30:04.647 ] 00:30:04.647 }' 00:30:04.647 02:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r 
'.process.type // "none"' 00:30:04.647 02:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:04.647 02:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:04.647 02:35:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:04.647 02:35:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 2034915 00:30:04.647 02:35:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@948 -- # '[' -z 2034915 ']' 00:30:04.647 02:35:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # kill -0 2034915 00:30:04.647 02:35:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # uname 00:30:04.647 02:35:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:04.647 02:35:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2034915 00:30:04.915 02:35:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:04.915 02:35:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:04.915 02:35:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2034915' 00:30:04.915 killing process with pid 2034915 00:30:04.915 02:35:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # kill 2034915 00:30:04.915 Received shutdown signal, test time was about 28.868736 seconds 00:30:04.915 00:30:04.915 Latency(us) 00:30:04.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.915 =================================================================================================================== 00:30:04.915 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:04.915 [2024-07-11 02:35:55.077895] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:04.915 [2024-07-11 02:35:55.078002] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:04.915 [2024-07-11 02:35:55.078059] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:04.915 [2024-07-11 02:35:55.078073] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x26c1f50 name raid_bdev1, state offline 00:30:04.915 02:35:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # wait 2034915 00:30:04.915 [2024-07-11 02:35:55.119464] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:04.915 02:35:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:30:04.915 00:30:04.915 real 0m33.191s 00:30:04.915 user 0m52.675s 00:30:04.915 sys 0m5.296s 00:30:04.915 02:35:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:04.915 02:35:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:04.915 ************************************ 00:30:04.915 END TEST raid_rebuild_test_sb_io 00:30:04.915 ************************************ 00:30:05.222 02:35:55 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:30:05.222 02:35:55 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' n == y ']' 00:30:05.222 02:35:55 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:30:05.222 02:35:55 bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test 
raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:30:05.222 02:35:55 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:30:05.222 02:35:55 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:05.222 02:35:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:05.222 ************************************ 00:30:05.222 START TEST raid_state_function_test_sb_4k 00:30:05.222 ************************************ 00:30:05.222 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:30:05.222 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:30:05.222 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:30:05.222 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:30:05.222 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:30:05.222 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:30:05.223 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:05.223 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:30:05.223 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:05.223 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:05.223 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:30:05.223 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:05.223 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:05.223 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:30:05.223 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:30:05.223 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:30:05.223 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:30:05.223 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:30:05.223 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:30:05.223 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:30:05.223 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:30:05.223 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:30:05.223 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:30:05.223 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=2039600 00:30:05.223 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2039600' 00:30:05.223 Process raid pid: 2039600 00:30:05.223 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:30:05.223 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 2039600 /var/tmp/spdk-raid.sock 00:30:05.223 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 2039600 ']' 00:30:05.223 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:05.223 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:05.223 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:05.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:05.223 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:05.223 02:35:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:05.223 [2024-07-11 02:35:55.488189] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:30:05.223 [2024-07-11 02:35:55.488253] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.223 [2024-07-11 02:35:55.627193] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.488 [2024-07-11 02:35:55.675942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.488 [2024-07-11 02:35:55.732222] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:05.488 [2024-07-11 02:35:55.732248] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:06.055 02:35:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:06.055 02:35:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:30:06.055 02:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:30:06.314 [2024-07-11 02:35:56.653733] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:06.314 [2024-07-11 02:35:56.653782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:06.314 [2024-07-11 02:35:56.653793] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:06.314 [2024-07-11 02:35:56.653805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:06.314 02:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:30:06.314 02:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:06.314 02:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:06.314 02:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:06.314 02:35:56 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:06.314 02:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:06.314 02:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:06.314 02:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:06.314 02:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:06.314 02:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:06.314 02:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:06.314 02:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:06.573 02:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:06.573 "name": "Existed_Raid", 00:30:06.573 "uuid": "310ed6dd-3032-4450-9705-bc4c04fc351e", 00:30:06.573 "strip_size_kb": 0, 00:30:06.573 "state": "configuring", 00:30:06.573 "raid_level": "raid1", 00:30:06.573 "superblock": true, 00:30:06.573 "num_base_bdevs": 2, 00:30:06.573 "num_base_bdevs_discovered": 0, 00:30:06.573 "num_base_bdevs_operational": 2, 00:30:06.573 "base_bdevs_list": [ 00:30:06.573 { 00:30:06.573 "name": "BaseBdev1", 00:30:06.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:06.573 "is_configured": false, 00:30:06.573 "data_offset": 0, 00:30:06.573 "data_size": 0 00:30:06.573 }, 00:30:06.573 { 00:30:06.573 "name": "BaseBdev2", 00:30:06.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:06.573 "is_configured": false, 00:30:06.573 "data_offset": 0, 00:30:06.573 "data_size": 0 00:30:06.573 } 00:30:06.573 ] 00:30:06.573 }' 00:30:06.573 02:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:06.573 02:35:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:07.141 02:35:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:07.400 [2024-07-11 02:35:57.704357] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:07.400 [2024-07-11 02:35:57.704390] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d1d710 name Existed_Raid, state configuring 00:30:07.400 02:35:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:30:07.659 [2024-07-11 02:35:57.953040] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:07.659 [2024-07-11 02:35:57.953068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:07.659 [2024-07-11 02:35:57.953077] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:07.659 [2024-07-11 02:35:57.953089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:07.659 02:35:57 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:30:07.917 [2024-07-11 02:35:58.211391] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:07.917 BaseBdev1 00:30:07.917 02:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:30:07.917 02:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:30:07.917 02:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:07.917 02:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:30:07.917 02:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:07.917 02:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:07.917 02:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:08.176 02:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:08.436 [ 00:30:08.436 { 00:30:08.436 "name": "BaseBdev1", 00:30:08.436 "aliases": [ 00:30:08.436 "2819de29-c623-4619-814b-5412294256f5" 00:30:08.436 ], 00:30:08.436 "product_name": "Malloc disk", 00:30:08.436 "block_size": 4096, 00:30:08.436 "num_blocks": 8192, 00:30:08.436 "uuid": "2819de29-c623-4619-814b-5412294256f5", 00:30:08.436 "assigned_rate_limits": { 00:30:08.436 "rw_ios_per_sec": 0, 00:30:08.436 "rw_mbytes_per_sec": 0, 00:30:08.436 "r_mbytes_per_sec": 0, 00:30:08.436 "w_mbytes_per_sec": 0 00:30:08.436 }, 00:30:08.436 "claimed": true, 00:30:08.436 "claim_type": "exclusive_write", 00:30:08.436 "zoned": false, 00:30:08.436 "supported_io_types": { 00:30:08.436 "read": true, 00:30:08.437 "write": true, 00:30:08.437 "unmap": true, 00:30:08.437 "flush": true, 00:30:08.437 "reset": true, 00:30:08.437 "nvme_admin": false, 00:30:08.437 "nvme_io": false, 00:30:08.437 "nvme_io_md": false, 00:30:08.437 "write_zeroes": true, 00:30:08.437 "zcopy": true, 00:30:08.437 "get_zone_info": false, 00:30:08.437 "zone_management": false, 00:30:08.437 "zone_append": false, 00:30:08.437 "compare": false, 00:30:08.437 "compare_and_write": false, 00:30:08.437 "abort": true, 00:30:08.437 "seek_hole": false, 00:30:08.437 "seek_data": false, 00:30:08.437 "copy": true, 00:30:08.437 "nvme_iov_md": false 00:30:08.437 }, 00:30:08.437 "memory_domains": [ 00:30:08.437 { 00:30:08.437 "dma_device_id": "system", 00:30:08.437 "dma_device_type": 1 00:30:08.437 }, 00:30:08.437 { 00:30:08.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:08.437 "dma_device_type": 2 00:30:08.437 } 00:30:08.437 ], 00:30:08.437 "driver_specific": {} 00:30:08.437 } 00:30:08.437 ] 00:30:08.437 02:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:30:08.437 02:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:30:08.437 02:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:08.437 02:35:58 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:08.437 02:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:08.437 02:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:08.437 02:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:08.437 02:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:08.437 02:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:08.437 02:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:08.437 02:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:08.437 02:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:08.437 02:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:08.696 02:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:08.696 "name": "Existed_Raid", 00:30:08.696 "uuid": "5dca5f50-7203-42d9-a91b-be31baa8bc0f", 00:30:08.696 "strip_size_kb": 0, 00:30:08.696 "state": "configuring", 00:30:08.696 "raid_level": "raid1", 00:30:08.696 "superblock": true, 00:30:08.696 "num_base_bdevs": 2, 00:30:08.696 "num_base_bdevs_discovered": 1, 00:30:08.696 "num_base_bdevs_operational": 2, 00:30:08.696 "base_bdevs_list": [ 00:30:08.696 { 00:30:08.696 "name": "BaseBdev1", 00:30:08.696 "uuid": "2819de29-c623-4619-814b-5412294256f5", 00:30:08.696 "is_configured": true, 00:30:08.696 "data_offset": 256, 00:30:08.696 "data_size": 7936 00:30:08.696 }, 00:30:08.696 { 00:30:08.696 "name": "BaseBdev2", 00:30:08.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:08.696 "is_configured": false, 00:30:08.696 "data_offset": 0, 00:30:08.696 "data_size": 0 00:30:08.696 } 00:30:08.696 ] 00:30:08.696 }' 00:30:08.696 02:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:08.696 02:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:09.266 02:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:09.525 [2024-07-11 02:35:59.791593] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:09.525 [2024-07-11 02:35:59.791633] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d1d040 name Existed_Raid, state configuring 00:30:09.525 02:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:30:09.785 [2024-07-11 02:36:00.044305] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:09.785 [2024-07-11 02:36:00.045715] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:09.785 [2024-07-11 02:36:00.045748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev2 doesn't exist now 00:30:09.785 02:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:30:09.785 02:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:09.785 02:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:30:09.785 02:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:09.785 02:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:09.785 02:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:09.785 02:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:09.785 02:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:09.785 02:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:09.785 02:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:09.785 02:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:09.785 02:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:09.785 02:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:09.785 02:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:10.045 02:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:10.045 "name": "Existed_Raid", 00:30:10.045 "uuid": "e4f0bb45-ee92-46e6-a407-cedec88c3709", 00:30:10.045 "strip_size_kb": 0, 00:30:10.045 "state": "configuring", 00:30:10.045 "raid_level": "raid1", 00:30:10.045 "superblock": true, 00:30:10.045 "num_base_bdevs": 2, 00:30:10.045 "num_base_bdevs_discovered": 1, 00:30:10.045 "num_base_bdevs_operational": 2, 00:30:10.045 "base_bdevs_list": [ 00:30:10.045 { 00:30:10.045 "name": "BaseBdev1", 00:30:10.045 "uuid": "2819de29-c623-4619-814b-5412294256f5", 00:30:10.045 "is_configured": true, 00:30:10.045 "data_offset": 256, 00:30:10.045 "data_size": 7936 00:30:10.045 }, 00:30:10.045 { 00:30:10.045 "name": "BaseBdev2", 00:30:10.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:10.045 "is_configured": false, 00:30:10.045 "data_offset": 0, 00:30:10.045 "data_size": 0 00:30:10.045 } 00:30:10.045 ] 00:30:10.045 }' 00:30:10.045 02:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:10.045 02:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:10.614 02:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:30:10.873 [2024-07-11 02:36:01.154835] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:10.873 [2024-07-11 02:36:01.154988] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1ecfbd0 00:30:10.873 [2024-07-11 02:36:01.155002] 
bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:30:10.873 [2024-07-11 02:36:01.155179] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d1f7c0 00:30:10.873 [2024-07-11 02:36:01.155302] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1ecfbd0 00:30:10.873 [2024-07-11 02:36:01.155313] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1ecfbd0 00:30:10.873 [2024-07-11 02:36:01.155408] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:10.873 BaseBdev2 00:30:10.873 02:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:30:10.873 02:36:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:30:10.873 02:36:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:10.873 02:36:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:30:10.873 02:36:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:10.873 02:36:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:10.873 02:36:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:11.132 02:36:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:11.391 [ 00:30:11.392 { 00:30:11.392 "name": "BaseBdev2", 00:30:11.392 "aliases": [ 00:30:11.392 "465c9fd0-ee4f-42c6-b1df-a85259af8a0f" 00:30:11.392 ], 00:30:11.392 "product_name": "Malloc disk", 00:30:11.392 "block_size": 4096, 00:30:11.392 "num_blocks": 8192, 00:30:11.392 "uuid": "465c9fd0-ee4f-42c6-b1df-a85259af8a0f", 00:30:11.392 "assigned_rate_limits": { 00:30:11.392 "rw_ios_per_sec": 0, 00:30:11.392 "rw_mbytes_per_sec": 0, 00:30:11.392 "r_mbytes_per_sec": 0, 00:30:11.392 "w_mbytes_per_sec": 0 00:30:11.392 }, 00:30:11.392 "claimed": true, 00:30:11.392 "claim_type": "exclusive_write", 00:30:11.392 "zoned": false, 00:30:11.392 "supported_io_types": { 00:30:11.392 "read": true, 00:30:11.392 "write": true, 00:30:11.392 "unmap": true, 00:30:11.392 "flush": true, 00:30:11.392 "reset": true, 00:30:11.392 "nvme_admin": false, 00:30:11.392 "nvme_io": false, 00:30:11.392 "nvme_io_md": false, 00:30:11.392 "write_zeroes": true, 00:30:11.392 "zcopy": true, 00:30:11.392 "get_zone_info": false, 00:30:11.392 "zone_management": false, 00:30:11.392 "zone_append": false, 00:30:11.392 "compare": false, 00:30:11.392 "compare_and_write": false, 00:30:11.392 "abort": true, 00:30:11.392 "seek_hole": false, 00:30:11.392 "seek_data": false, 00:30:11.392 "copy": true, 00:30:11.392 "nvme_iov_md": false 00:30:11.392 }, 00:30:11.392 "memory_domains": [ 00:30:11.392 { 00:30:11.392 "dma_device_id": "system", 00:30:11.392 "dma_device_type": 1 00:30:11.392 }, 00:30:11.392 { 00:30:11.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:11.392 "dma_device_type": 2 00:30:11.392 } 00:30:11.392 ], 00:30:11.392 "driver_specific": {} 00:30:11.392 } 00:30:11.392 ] 00:30:11.392 02:36:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:30:11.392 02:36:01 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:30:11.392 02:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:11.392 02:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:30:11.392 02:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:11.392 02:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:11.392 02:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:11.392 02:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:11.392 02:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:11.392 02:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:11.392 02:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:11.392 02:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:11.392 02:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:11.392 02:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:11.392 02:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:11.651 02:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:11.651 "name": "Existed_Raid", 00:30:11.651 "uuid": "e4f0bb45-ee92-46e6-a407-cedec88c3709", 00:30:11.651 "strip_size_kb": 0, 00:30:11.651 "state": "online", 00:30:11.651 "raid_level": "raid1", 00:30:11.651 "superblock": true, 00:30:11.651 "num_base_bdevs": 2, 00:30:11.651 "num_base_bdevs_discovered": 2, 00:30:11.651 "num_base_bdevs_operational": 2, 00:30:11.651 "base_bdevs_list": [ 00:30:11.651 { 00:30:11.651 "name": "BaseBdev1", 00:30:11.651 "uuid": "2819de29-c623-4619-814b-5412294256f5", 00:30:11.651 "is_configured": true, 00:30:11.651 "data_offset": 256, 00:30:11.651 "data_size": 7936 00:30:11.651 }, 00:30:11.651 { 00:30:11.651 "name": "BaseBdev2", 00:30:11.651 "uuid": "465c9fd0-ee4f-42c6-b1df-a85259af8a0f", 00:30:11.651 "is_configured": true, 00:30:11.651 "data_offset": 256, 00:30:11.651 "data_size": 7936 00:30:11.651 } 00:30:11.651 ] 00:30:11.651 }' 00:30:11.651 02:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:11.651 02:36:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:12.217 02:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:30:12.217 02:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:30:12.217 02:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:30:12.217 02:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:30:12.217 02:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 
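
The entries above are the core of raid_state_function_test_sb_4k: Existed_Raid is declared over two named base bdevs of which only BaseBdev1 exists, so it sits in the "configuring" state; as soon as BaseBdev2 is created and claimed, the raid1 array flips to "online". A minimal sketch of that sequence, assuming a target is already listening on /var/tmp/spdk-raid.sock and abbreviating the full rpc.py path the log uses:

    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 \
        -b 'BaseBdev1 BaseBdev2' -n Existed_Raid  # BaseBdev2 absent: stays "configuring"
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all  # now "state": "online"
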
00:30:12.217 02:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:30:12.217 02:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:30:12.217 02:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:30:12.474 [2024-07-11 02:36:02.735304] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:12.474 02:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:30:12.474 "name": "Existed_Raid", 00:30:12.474 "aliases": [ 00:30:12.474 "e4f0bb45-ee92-46e6-a407-cedec88c3709" 00:30:12.474 ], 00:30:12.474 "product_name": "Raid Volume", 00:30:12.474 "block_size": 4096, 00:30:12.474 "num_blocks": 7936, 00:30:12.474 "uuid": "e4f0bb45-ee92-46e6-a407-cedec88c3709", 00:30:12.474 "assigned_rate_limits": { 00:30:12.474 "rw_ios_per_sec": 0, 00:30:12.474 "rw_mbytes_per_sec": 0, 00:30:12.474 "r_mbytes_per_sec": 0, 00:30:12.474 "w_mbytes_per_sec": 0 00:30:12.474 }, 00:30:12.474 "claimed": false, 00:30:12.474 "zoned": false, 00:30:12.474 "supported_io_types": { 00:30:12.474 "read": true, 00:30:12.474 "write": true, 00:30:12.474 "unmap": false, 00:30:12.474 "flush": false, 00:30:12.474 "reset": true, 00:30:12.474 "nvme_admin": false, 00:30:12.474 "nvme_io": false, 00:30:12.474 "nvme_io_md": false, 00:30:12.474 "write_zeroes": true, 00:30:12.474 "zcopy": false, 00:30:12.474 "get_zone_info": false, 00:30:12.474 "zone_management": false, 00:30:12.474 "zone_append": false, 00:30:12.474 "compare": false, 00:30:12.474 "compare_and_write": false, 00:30:12.474 "abort": false, 00:30:12.474 "seek_hole": false, 00:30:12.474 "seek_data": false, 00:30:12.474 "copy": false, 00:30:12.474 "nvme_iov_md": false 00:30:12.474 }, 00:30:12.474 "memory_domains": [ 00:30:12.474 { 00:30:12.474 "dma_device_id": "system", 00:30:12.474 "dma_device_type": 1 00:30:12.474 }, 00:30:12.474 { 00:30:12.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:12.474 "dma_device_type": 2 00:30:12.474 }, 00:30:12.474 { 00:30:12.474 "dma_device_id": "system", 00:30:12.474 "dma_device_type": 1 00:30:12.474 }, 00:30:12.474 { 00:30:12.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:12.474 "dma_device_type": 2 00:30:12.474 } 00:30:12.474 ], 00:30:12.474 "driver_specific": { 00:30:12.474 "raid": { 00:30:12.474 "uuid": "e4f0bb45-ee92-46e6-a407-cedec88c3709", 00:30:12.474 "strip_size_kb": 0, 00:30:12.474 "state": "online", 00:30:12.474 "raid_level": "raid1", 00:30:12.475 "superblock": true, 00:30:12.475 "num_base_bdevs": 2, 00:30:12.475 "num_base_bdevs_discovered": 2, 00:30:12.475 "num_base_bdevs_operational": 2, 00:30:12.475 "base_bdevs_list": [ 00:30:12.475 { 00:30:12.475 "name": "BaseBdev1", 00:30:12.475 "uuid": "2819de29-c623-4619-814b-5412294256f5", 00:30:12.475 "is_configured": true, 00:30:12.475 "data_offset": 256, 00:30:12.475 "data_size": 7936 00:30:12.475 }, 00:30:12.475 { 00:30:12.475 "name": "BaseBdev2", 00:30:12.475 "uuid": "465c9fd0-ee4f-42c6-b1df-a85259af8a0f", 00:30:12.475 "is_configured": true, 00:30:12.475 "data_offset": 256, 00:30:12.475 "data_size": 7936 00:30:12.475 } 00:30:12.475 ] 00:30:12.475 } 00:30:12.475 } 00:30:12.475 }' 00:30:12.475 02:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:12.475 02:36:02 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:30:12.475 BaseBdev2' 00:30:12.475 02:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:12.475 02:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:12.475 02:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:30:12.733 02:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:12.733 "name": "BaseBdev1", 00:30:12.733 "aliases": [ 00:30:12.733 "2819de29-c623-4619-814b-5412294256f5" 00:30:12.733 ], 00:30:12.733 "product_name": "Malloc disk", 00:30:12.733 "block_size": 4096, 00:30:12.733 "num_blocks": 8192, 00:30:12.733 "uuid": "2819de29-c623-4619-814b-5412294256f5", 00:30:12.733 "assigned_rate_limits": { 00:30:12.733 "rw_ios_per_sec": 0, 00:30:12.733 "rw_mbytes_per_sec": 0, 00:30:12.733 "r_mbytes_per_sec": 0, 00:30:12.733 "w_mbytes_per_sec": 0 00:30:12.733 }, 00:30:12.733 "claimed": true, 00:30:12.733 "claim_type": "exclusive_write", 00:30:12.733 "zoned": false, 00:30:12.733 "supported_io_types": { 00:30:12.733 "read": true, 00:30:12.733 "write": true, 00:30:12.733 "unmap": true, 00:30:12.733 "flush": true, 00:30:12.733 "reset": true, 00:30:12.733 "nvme_admin": false, 00:30:12.733 "nvme_io": false, 00:30:12.733 "nvme_io_md": false, 00:30:12.733 "write_zeroes": true, 00:30:12.733 "zcopy": true, 00:30:12.733 "get_zone_info": false, 00:30:12.733 "zone_management": false, 00:30:12.733 "zone_append": false, 00:30:12.733 "compare": false, 00:30:12.733 "compare_and_write": false, 00:30:12.733 "abort": true, 00:30:12.733 "seek_hole": false, 00:30:12.733 "seek_data": false, 00:30:12.733 "copy": true, 00:30:12.733 "nvme_iov_md": false 00:30:12.733 }, 00:30:12.733 "memory_domains": [ 00:30:12.733 { 00:30:12.733 "dma_device_id": "system", 00:30:12.733 "dma_device_type": 1 00:30:12.733 }, 00:30:12.733 { 00:30:12.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:12.733 "dma_device_type": 2 00:30:12.733 } 00:30:12.733 ], 00:30:12.733 "driver_specific": {} 00:30:12.733 }' 00:30:12.733 02:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:12.733 02:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:12.733 02:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:30:12.733 02:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:12.992 02:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:12.992 02:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:12.992 02:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:12.992 02:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:12.992 02:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:12.992 02:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:12.992 02:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:12.992 02:36:03 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:12.992 02:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:12.992 02:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:30:12.992 02:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:13.251 02:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:13.251 "name": "BaseBdev2", 00:30:13.251 "aliases": [ 00:30:13.251 "465c9fd0-ee4f-42c6-b1df-a85259af8a0f" 00:30:13.251 ], 00:30:13.251 "product_name": "Malloc disk", 00:30:13.251 "block_size": 4096, 00:30:13.251 "num_blocks": 8192, 00:30:13.251 "uuid": "465c9fd0-ee4f-42c6-b1df-a85259af8a0f", 00:30:13.251 "assigned_rate_limits": { 00:30:13.251 "rw_ios_per_sec": 0, 00:30:13.251 "rw_mbytes_per_sec": 0, 00:30:13.251 "r_mbytes_per_sec": 0, 00:30:13.251 "w_mbytes_per_sec": 0 00:30:13.251 }, 00:30:13.251 "claimed": true, 00:30:13.251 "claim_type": "exclusive_write", 00:30:13.251 "zoned": false, 00:30:13.251 "supported_io_types": { 00:30:13.251 "read": true, 00:30:13.251 "write": true, 00:30:13.251 "unmap": true, 00:30:13.251 "flush": true, 00:30:13.251 "reset": true, 00:30:13.251 "nvme_admin": false, 00:30:13.251 "nvme_io": false, 00:30:13.251 "nvme_io_md": false, 00:30:13.251 "write_zeroes": true, 00:30:13.251 "zcopy": true, 00:30:13.251 "get_zone_info": false, 00:30:13.251 "zone_management": false, 00:30:13.251 "zone_append": false, 00:30:13.251 "compare": false, 00:30:13.251 "compare_and_write": false, 00:30:13.251 "abort": true, 00:30:13.251 "seek_hole": false, 00:30:13.251 "seek_data": false, 00:30:13.251 "copy": true, 00:30:13.251 "nvme_iov_md": false 00:30:13.251 }, 00:30:13.251 "memory_domains": [ 00:30:13.251 { 00:30:13.251 "dma_device_id": "system", 00:30:13.251 "dma_device_type": 1 00:30:13.251 }, 00:30:13.251 { 00:30:13.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:13.251 "dma_device_type": 2 00:30:13.251 } 00:30:13.251 ], 00:30:13.251 "driver_specific": {} 00:30:13.251 }' 00:30:13.251 02:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:13.509 02:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:13.509 02:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:30:13.509 02:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:13.509 02:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:13.509 02:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:13.509 02:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:13.509 02:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:13.509 02:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:13.509 02:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:13.766 02:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:13.766 02:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:13.766 02:36:04 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:14.024 [2024-07-11 02:36:04.247101] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:14.024 02:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:30:14.024 02:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:30:14.024 02:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:30:14.024 02:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:30:14.024 02:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:30:14.024 02:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:30:14.024 02:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:14.024 02:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:14.024 02:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:14.024 02:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:14.024 02:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:14.024 02:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:14.024 02:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:14.024 02:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:14.024 02:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:14.024 02:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:14.024 02:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:14.283 02:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:14.283 "name": "Existed_Raid", 00:30:14.283 "uuid": "e4f0bb45-ee92-46e6-a407-cedec88c3709", 00:30:14.283 "strip_size_kb": 0, 00:30:14.283 "state": "online", 00:30:14.283 "raid_level": "raid1", 00:30:14.283 "superblock": true, 00:30:14.283 "num_base_bdevs": 2, 00:30:14.283 "num_base_bdevs_discovered": 1, 00:30:14.283 "num_base_bdevs_operational": 1, 00:30:14.283 "base_bdevs_list": [ 00:30:14.283 { 00:30:14.283 "name": null, 00:30:14.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:14.283 "is_configured": false, 00:30:14.283 "data_offset": 256, 00:30:14.283 "data_size": 7936 00:30:14.283 }, 00:30:14.283 { 00:30:14.283 "name": "BaseBdev2", 00:30:14.283 "uuid": "465c9fd0-ee4f-42c6-b1df-a85259af8a0f", 00:30:14.283 "is_configured": true, 00:30:14.283 "data_offset": 256, 00:30:14.283 "data_size": 7936 00:30:14.283 } 00:30:14.283 ] 00:30:14.283 }' 00:30:14.283 02:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:14.283 02:36:04 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:30:14.848 02:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:30:14.848 02:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:14.848 02:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:14.848 02:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:30:15.106 02:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:30:15.106 02:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:15.106 02:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:30:15.364 [2024-07-11 02:36:05.631798] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:15.364 [2024-07-11 02:36:05.631895] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:15.364 [2024-07-11 02:36:05.644505] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:15.364 [2024-07-11 02:36:05.644537] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:15.364 [2024-07-11 02:36:05.644550] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ecfbd0 name Existed_Raid, state offline 00:30:15.364 02:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:30:15.364 02:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:15.364 02:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:15.364 02:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:30:15.623 02:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:30:15.623 02:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:30:15.623 02:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:30:15.623 02:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 2039600 00:30:15.623 02:36:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 2039600 ']' 00:30:15.623 02:36:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 2039600 00:30:15.623 02:36:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:30:15.623 02:36:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:15.623 02:36:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2039600 00:30:15.623 02:36:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:15.623 02:36:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
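
What just ran is the redundancy half of the test: removing one base bdev from a two-member raid1 leaves Existed_Raid "online" with num_base_bdevs_discovered reduced to 1, and only removing the second member drives it offline and frees it. Sketched with the same commands the log shows, under the same socket assumption as above:

    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all  # raid1 survives: still "online"
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
    # with no members left, the raid bdev deconfigures (online -> offline) and is destructed
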
00:30:15.623 02:36:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2039600' 00:30:15.623 killing process with pid 2039600 00:30:15.623 02:36:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@967 -- # kill 2039600 00:30:15.623 [2024-07-11 02:36:05.966251] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:15.623 02:36:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # wait 2039600 00:30:15.623 [2024-07-11 02:36:05.967193] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:15.882 02:36:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:30:15.882 00:30:15.882 real 0m10.756s 00:30:15.882 user 0m19.150s 00:30:15.882 sys 0m2.027s 00:30:15.882 02:36:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:15.882 02:36:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:15.882 ************************************ 00:30:15.882 END TEST raid_state_function_test_sb_4k 00:30:15.882 ************************************ 00:30:15.882 02:36:06 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:30:15.882 02:36:06 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:30:15.882 02:36:06 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:30:15.882 02:36:06 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:15.882 02:36:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:15.882 ************************************ 00:30:15.882 START TEST raid_superblock_test_4k 00:30:15.882 ************************************ 00:30:15.882 02:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:30:15.882 02:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:30:15.882 02:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:30:15.882 02:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:30:15.882 02:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:30:15.882 02:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:30:15.882 02:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:30:15.882 02:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:30:15.882 02:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:30:15.882 02:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:30:15.882 02:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:30:15.882 02:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:30:15.882 02:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:30:15.882 02:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:30:15.882 02:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:30:15.882 02:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:30:15.882 
02:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=2041718 00:30:15.882 02:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 2041718 /var/tmp/spdk-raid.sock 00:30:15.882 02:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@829 -- # '[' -z 2041718 ']' 00:30:15.882 02:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:15.882 02:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:15.882 02:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:30:15.882 02:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:15.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:15.882 02:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:15.882 02:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:30:16.141 [2024-07-11 02:36:06.324396] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:30:16.141 [2024-07-11 02:36:06.324468] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2041718 ] 00:30:16.141 [2024-07-11 02:36:06.459845] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.141 [2024-07-11 02:36:06.510675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.399 [2024-07-11 02:36:06.567983] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:16.399 [2024-07-11 02:36:06.568031] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:17.333 02:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:17.333 02:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # return 0 00:30:17.333 02:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:30:17.333 02:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:30:17.333 02:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:30:17.333 02:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:30:17.333 02:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:30:17.333 02:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:17.333 02:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:30:17.333 02:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:17.333 02:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:30:17.591 malloc1 00:30:17.591 02:36:07 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:17.591 [2024-07-11 02:36:07.944119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:17.591 [2024-07-11 02:36:07.944162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:17.591 [2024-07-11 02:36:07.944181] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaa6de0 00:30:17.591 [2024-07-11 02:36:07.944193] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:17.591 [2024-07-11 02:36:07.945736] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:17.591 [2024-07-11 02:36:07.945773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:17.591 pt1 00:30:17.591 02:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:30:17.591 02:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:30:17.591 02:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:30:17.591 02:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:30:17.591 02:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:30:17.591 02:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:17.591 02:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:30:17.591 02:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:17.591 02:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:30:17.850 malloc2 00:30:17.850 02:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:18.417 [2024-07-11 02:36:08.714931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:18.417 [2024-07-11 02:36:08.714982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:18.417 [2024-07-11 02:36:08.715001] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa9e380 00:30:18.417 [2024-07-11 02:36:08.715013] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:18.417 [2024-07-11 02:36:08.716557] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:18.417 [2024-07-11 02:36:08.716587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:18.417 pt2 00:30:18.417 02:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:30:18.417 02:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:30:18.417 02:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 
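
raid_superblock_test_4k layers a passthru bdev over each malloc bdev (pt1 on malloc1, pt2 on malloc2) and then builds raid_bdev1 from the passthru layer; the -s flag on bdev_raid_create, whose output follows, writes a superblock onto each member, which the rest of the test depends on. A condensed sketch of that setup, again with the rpc.py path shortened from the log:

    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 \
        -u 00000000-0000-0000-0000-000000000001
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 \
        -u 00000000-0000-0000-0000-000000000002
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 \
        -b 'pt1 pt2' -n raid_bdev1 -s  # -s persists a superblock on each base bdev
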
00:30:18.992 [2024-07-11 02:36:09.228287] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:18.992 [2024-07-11 02:36:09.229616] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:18.992 [2024-07-11 02:36:09.229774] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xaa89e0 00:30:18.992 [2024-07-11 02:36:09.229788] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:30:18.992 [2024-07-11 02:36:09.229985] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa9fa70 00:30:18.992 [2024-07-11 02:36:09.230134] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xaa89e0 00:30:18.992 [2024-07-11 02:36:09.230144] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xaa89e0 00:30:18.992 [2024-07-11 02:36:09.230245] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:18.992 02:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:18.992 02:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:18.992 02:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:18.992 02:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:18.992 02:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:18.992 02:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:18.992 02:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:18.992 02:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:18.992 02:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:18.992 02:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:18.992 02:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:18.992 02:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:19.250 02:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:19.250 "name": "raid_bdev1", 00:30:19.250 "uuid": "ca4b5ddf-4cdb-441d-8bbb-fd2362dd69a6", 00:30:19.250 "strip_size_kb": 0, 00:30:19.250 "state": "online", 00:30:19.250 "raid_level": "raid1", 00:30:19.250 "superblock": true, 00:30:19.250 "num_base_bdevs": 2, 00:30:19.250 "num_base_bdevs_discovered": 2, 00:30:19.250 "num_base_bdevs_operational": 2, 00:30:19.250 "base_bdevs_list": [ 00:30:19.250 { 00:30:19.250 "name": "pt1", 00:30:19.250 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:19.250 "is_configured": true, 00:30:19.250 "data_offset": 256, 00:30:19.250 "data_size": 7936 00:30:19.250 }, 00:30:19.250 { 00:30:19.250 "name": "pt2", 00:30:19.250 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:19.250 "is_configured": true, 00:30:19.250 "data_offset": 256, 00:30:19.250 "data_size": 7936 00:30:19.250 } 00:30:19.250 ] 00:30:19.250 }' 00:30:19.250 02:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:19.250 02:36:09 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:30:19.816 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:30:19.816 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:30:19.816 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:30:19.816 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:30:19.816 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:30:19.816 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:30:19.816 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:30:19.816 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:20.074 [2024-07-11 02:36:10.251222] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:20.074 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:30:20.074 "name": "raid_bdev1", 00:30:20.074 "aliases": [ 00:30:20.074 "ca4b5ddf-4cdb-441d-8bbb-fd2362dd69a6" 00:30:20.074 ], 00:30:20.074 "product_name": "Raid Volume", 00:30:20.074 "block_size": 4096, 00:30:20.074 "num_blocks": 7936, 00:30:20.074 "uuid": "ca4b5ddf-4cdb-441d-8bbb-fd2362dd69a6", 00:30:20.074 "assigned_rate_limits": { 00:30:20.074 "rw_ios_per_sec": 0, 00:30:20.074 "rw_mbytes_per_sec": 0, 00:30:20.074 "r_mbytes_per_sec": 0, 00:30:20.074 "w_mbytes_per_sec": 0 00:30:20.074 }, 00:30:20.074 "claimed": false, 00:30:20.074 "zoned": false, 00:30:20.074 "supported_io_types": { 00:30:20.074 "read": true, 00:30:20.074 "write": true, 00:30:20.074 "unmap": false, 00:30:20.074 "flush": false, 00:30:20.074 "reset": true, 00:30:20.074 "nvme_admin": false, 00:30:20.074 "nvme_io": false, 00:30:20.074 "nvme_io_md": false, 00:30:20.074 "write_zeroes": true, 00:30:20.075 "zcopy": false, 00:30:20.075 "get_zone_info": false, 00:30:20.075 "zone_management": false, 00:30:20.075 "zone_append": false, 00:30:20.075 "compare": false, 00:30:20.075 "compare_and_write": false, 00:30:20.075 "abort": false, 00:30:20.075 "seek_hole": false, 00:30:20.075 "seek_data": false, 00:30:20.075 "copy": false, 00:30:20.075 "nvme_iov_md": false 00:30:20.075 }, 00:30:20.075 "memory_domains": [ 00:30:20.075 { 00:30:20.075 "dma_device_id": "system", 00:30:20.075 "dma_device_type": 1 00:30:20.075 }, 00:30:20.075 { 00:30:20.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:20.075 "dma_device_type": 2 00:30:20.075 }, 00:30:20.075 { 00:30:20.075 "dma_device_id": "system", 00:30:20.075 "dma_device_type": 1 00:30:20.075 }, 00:30:20.075 { 00:30:20.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:20.075 "dma_device_type": 2 00:30:20.075 } 00:30:20.075 ], 00:30:20.075 "driver_specific": { 00:30:20.075 "raid": { 00:30:20.075 "uuid": "ca4b5ddf-4cdb-441d-8bbb-fd2362dd69a6", 00:30:20.075 "strip_size_kb": 0, 00:30:20.075 "state": "online", 00:30:20.075 "raid_level": "raid1", 00:30:20.075 "superblock": true, 00:30:20.075 "num_base_bdevs": 2, 00:30:20.075 "num_base_bdevs_discovered": 2, 00:30:20.075 "num_base_bdevs_operational": 2, 00:30:20.075 "base_bdevs_list": [ 00:30:20.075 { 00:30:20.075 "name": "pt1", 00:30:20.075 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:20.075 
"is_configured": true, 00:30:20.075 "data_offset": 256, 00:30:20.075 "data_size": 7936 00:30:20.075 }, 00:30:20.075 { 00:30:20.075 "name": "pt2", 00:30:20.075 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:20.075 "is_configured": true, 00:30:20.075 "data_offset": 256, 00:30:20.075 "data_size": 7936 00:30:20.075 } 00:30:20.075 ] 00:30:20.075 } 00:30:20.075 } 00:30:20.075 }' 00:30:20.075 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:20.075 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:30:20.075 pt2' 00:30:20.075 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:20.075 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:30:20.075 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:20.333 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:20.333 "name": "pt1", 00:30:20.333 "aliases": [ 00:30:20.333 "00000000-0000-0000-0000-000000000001" 00:30:20.333 ], 00:30:20.333 "product_name": "passthru", 00:30:20.333 "block_size": 4096, 00:30:20.333 "num_blocks": 8192, 00:30:20.333 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:20.333 "assigned_rate_limits": { 00:30:20.333 "rw_ios_per_sec": 0, 00:30:20.333 "rw_mbytes_per_sec": 0, 00:30:20.333 "r_mbytes_per_sec": 0, 00:30:20.333 "w_mbytes_per_sec": 0 00:30:20.333 }, 00:30:20.333 "claimed": true, 00:30:20.333 "claim_type": "exclusive_write", 00:30:20.333 "zoned": false, 00:30:20.333 "supported_io_types": { 00:30:20.333 "read": true, 00:30:20.333 "write": true, 00:30:20.333 "unmap": true, 00:30:20.333 "flush": true, 00:30:20.333 "reset": true, 00:30:20.333 "nvme_admin": false, 00:30:20.333 "nvme_io": false, 00:30:20.333 "nvme_io_md": false, 00:30:20.333 "write_zeroes": true, 00:30:20.333 "zcopy": true, 00:30:20.333 "get_zone_info": false, 00:30:20.333 "zone_management": false, 00:30:20.333 "zone_append": false, 00:30:20.333 "compare": false, 00:30:20.333 "compare_and_write": false, 00:30:20.333 "abort": true, 00:30:20.333 "seek_hole": false, 00:30:20.333 "seek_data": false, 00:30:20.333 "copy": true, 00:30:20.333 "nvme_iov_md": false 00:30:20.333 }, 00:30:20.333 "memory_domains": [ 00:30:20.333 { 00:30:20.333 "dma_device_id": "system", 00:30:20.333 "dma_device_type": 1 00:30:20.333 }, 00:30:20.333 { 00:30:20.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:20.333 "dma_device_type": 2 00:30:20.333 } 00:30:20.333 ], 00:30:20.333 "driver_specific": { 00:30:20.333 "passthru": { 00:30:20.333 "name": "pt1", 00:30:20.333 "base_bdev_name": "malloc1" 00:30:20.333 } 00:30:20.333 } 00:30:20.333 }' 00:30:20.334 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:20.334 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:20.334 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:30:20.334 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:20.334 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:20.592 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:20.592 02:36:10 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:20.592 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:20.592 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:20.592 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:20.592 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:20.592 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:20.592 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:20.592 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:30:20.592 02:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:20.850 02:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:20.850 "name": "pt2", 00:30:20.850 "aliases": [ 00:30:20.850 "00000000-0000-0000-0000-000000000002" 00:30:20.850 ], 00:30:20.850 "product_name": "passthru", 00:30:20.850 "block_size": 4096, 00:30:20.850 "num_blocks": 8192, 00:30:20.850 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:20.850 "assigned_rate_limits": { 00:30:20.850 "rw_ios_per_sec": 0, 00:30:20.850 "rw_mbytes_per_sec": 0, 00:30:20.850 "r_mbytes_per_sec": 0, 00:30:20.850 "w_mbytes_per_sec": 0 00:30:20.850 }, 00:30:20.850 "claimed": true, 00:30:20.850 "claim_type": "exclusive_write", 00:30:20.850 "zoned": false, 00:30:20.850 "supported_io_types": { 00:30:20.850 "read": true, 00:30:20.850 "write": true, 00:30:20.850 "unmap": true, 00:30:20.851 "flush": true, 00:30:20.851 "reset": true, 00:30:20.851 "nvme_admin": false, 00:30:20.851 "nvme_io": false, 00:30:20.851 "nvme_io_md": false, 00:30:20.851 "write_zeroes": true, 00:30:20.851 "zcopy": true, 00:30:20.851 "get_zone_info": false, 00:30:20.851 "zone_management": false, 00:30:20.851 "zone_append": false, 00:30:20.851 "compare": false, 00:30:20.851 "compare_and_write": false, 00:30:20.851 "abort": true, 00:30:20.851 "seek_hole": false, 00:30:20.851 "seek_data": false, 00:30:20.851 "copy": true, 00:30:20.851 "nvme_iov_md": false 00:30:20.851 }, 00:30:20.851 "memory_domains": [ 00:30:20.851 { 00:30:20.851 "dma_device_id": "system", 00:30:20.851 "dma_device_type": 1 00:30:20.851 }, 00:30:20.851 { 00:30:20.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:20.851 "dma_device_type": 2 00:30:20.851 } 00:30:20.851 ], 00:30:20.851 "driver_specific": { 00:30:20.851 "passthru": { 00:30:20.851 "name": "pt2", 00:30:20.851 "base_bdev_name": "malloc2" 00:30:20.851 } 00:30:20.851 } 00:30:20.851 }' 00:30:20.851 02:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:20.851 02:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:20.851 02:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:30:20.851 02:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:21.109 02:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:21.109 02:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:21.109 02:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:30:21.109 02:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:21.109 02:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:21.109 02:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:21.109 02:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:21.109 02:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:21.109 02:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:21.109 02:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:30:21.367 [2024-07-11 02:36:11.755210] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:21.367 02:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=ca4b5ddf-4cdb-441d-8bbb-fd2362dd69a6 00:30:21.367 02:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' -z ca4b5ddf-4cdb-441d-8bbb-fd2362dd69a6 ']' 00:30:21.367 02:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:21.933 [2024-07-11 02:36:12.272340] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:21.933 [2024-07-11 02:36:12.272364] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:21.933 [2024-07-11 02:36:12.272422] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:21.933 [2024-07-11 02:36:12.272475] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:21.933 [2024-07-11 02:36:12.272486] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xaa89e0 name raid_bdev1, state offline 00:30:21.933 02:36:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:21.933 02:36:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:30:22.192 02:36:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:30:22.192 02:36:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:30:22.192 02:36:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:30:22.192 02:36:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:30:22.450 02:36:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:30:22.450 02:36:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:30:22.708 02:36:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:30:22.708 02:36:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 
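
With raid_bdev1's properties verified, teardown and the negative test begin: the raid bdev and both passthru bdevs are deleted, but malloc1 and malloc2 still carry the superblock written for raid_bdev1, so the attempt that follows to create a raid directly on the malloc bdevs is expected to fail with -17 ("File exists"); re-registering pt1 afterwards lets the examine path find that superblock and re-claim the bdev, reassembling raid_bdev1 in the "configuring" state. Roughly, under the same assumptions as the earlier sketches:

    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 \
        -b 'malloc1 malloc2' -n raid_bdev1  # rejected: superblock of a different raid bdev
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 \
        -u 00000000-0000-0000-0000-000000000001  # superblock found; raid_bdev1 "configuring"
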
00:30:22.966 02:36:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:30:22.966 02:36:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:30:22.966 02:36:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # local es=0 00:30:22.967 02:36:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:30:22.967 02:36:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:22.967 02:36:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:22.967 02:36:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:22.967 02:36:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:22.967 02:36:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:22.967 02:36:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:22.967 02:36:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:22.967 02:36:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:30:22.967 02:36:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:30:23.225 [2024-07-11 02:36:13.507567] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:30:23.226 [2024-07-11 02:36:13.508871] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:30:23.226 [2024-07-11 02:36:13.508924] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:30:23.226 [2024-07-11 02:36:13.508963] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:30:23.226 [2024-07-11 02:36:13.508982] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:23.226 [2024-07-11 02:36:13.508992] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa9fbd0 name raid_bdev1, state configuring 00:30:23.226 request: 00:30:23.226 { 00:30:23.226 "name": "raid_bdev1", 00:30:23.226 "raid_level": "raid1", 00:30:23.226 "base_bdevs": [ 00:30:23.226 "malloc1", 00:30:23.226 "malloc2" 00:30:23.226 ], 00:30:23.226 "superblock": false, 00:30:23.226 "method": "bdev_raid_create", 00:30:23.226 "req_id": 1 00:30:23.226 } 00:30:23.226 Got JSON-RPC error response 00:30:23.226 response: 00:30:23.226 { 00:30:23.226 "code": -17, 00:30:23.226 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:30:23.226 } 00:30:23.226 02:36:13 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:30:23.226 02:36:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:23.226 02:36:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:23.226 02:36:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:23.226 02:36:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:23.226 02:36:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:30:23.484 02:36:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:30:23.484 02:36:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:30:23.484 02:36:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:23.742 [2024-07-11 02:36:13.988786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:23.742 [2024-07-11 02:36:13.988829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:23.742 [2024-07-11 02:36:13.988850] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaa7f10 00:30:23.742 [2024-07-11 02:36:13.988862] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:23.742 [2024-07-11 02:36:13.990419] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:23.742 [2024-07-11 02:36:13.990449] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:23.742 [2024-07-11 02:36:13.990511] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:30:23.742 [2024-07-11 02:36:13.990536] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:23.742 pt1 00:30:23.742 02:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:30:23.742 02:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:23.742 02:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:23.742 02:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:23.742 02:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:23.742 02:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:23.742 02:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:23.742 02:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:23.742 02:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:23.742 02:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:23.742 02:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:23.742 02:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:24.000 02:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:24.000 "name": "raid_bdev1", 00:30:24.000 "uuid": "ca4b5ddf-4cdb-441d-8bbb-fd2362dd69a6", 00:30:24.000 "strip_size_kb": 0, 00:30:24.000 "state": "configuring", 00:30:24.000 "raid_level": "raid1", 00:30:24.000 "superblock": true, 00:30:24.000 "num_base_bdevs": 2, 00:30:24.000 "num_base_bdevs_discovered": 1, 00:30:24.000 "num_base_bdevs_operational": 2, 00:30:24.000 "base_bdevs_list": [ 00:30:24.000 { 00:30:24.000 "name": "pt1", 00:30:24.000 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:24.000 "is_configured": true, 00:30:24.000 "data_offset": 256, 00:30:24.000 "data_size": 7936 00:30:24.000 }, 00:30:24.000 { 00:30:24.000 "name": null, 00:30:24.000 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:24.000 "is_configured": false, 00:30:24.000 "data_offset": 256, 00:30:24.000 "data_size": 7936 00:30:24.000 } 00:30:24.000 ] 00:30:24.000 }' 00:30:24.000 02:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:24.000 02:36:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:30:24.565 02:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:30:24.565 02:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:30:24.565 02:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:30:24.565 02:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:24.824 [2024-07-11 02:36:15.087716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:24.824 [2024-07-11 02:36:15.087769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:24.824 [2024-07-11 02:36:15.087788] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaa3410 00:30:24.824 [2024-07-11 02:36:15.087800] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:24.824 [2024-07-11 02:36:15.088119] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:24.824 [2024-07-11 02:36:15.088137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:24.824 [2024-07-11 02:36:15.088197] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:30:24.824 [2024-07-11 02:36:15.088215] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:24.824 [2024-07-11 02:36:15.088309] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x8f3900 00:30:24.824 [2024-07-11 02:36:15.088320] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:30:24.824 [2024-07-11 02:36:15.088476] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x98c360 00:30:24.824 [2024-07-11 02:36:15.088598] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x8f3900 00:30:24.824 [2024-07-11 02:36:15.088608] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x8f3900 00:30:24.824 [2024-07-11 02:36:15.088703] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:24.824 pt2 00:30:24.824 02:36:15 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:30:24.824 02:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:30:24.824 02:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:24.824 02:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:24.824 02:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:24.824 02:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:24.824 02:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:24.824 02:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:24.824 02:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:24.824 02:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:24.824 02:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:24.824 02:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:24.824 02:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:24.824 02:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:25.083 02:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:25.083 "name": "raid_bdev1", 00:30:25.083 "uuid": "ca4b5ddf-4cdb-441d-8bbb-fd2362dd69a6", 00:30:25.083 "strip_size_kb": 0, 00:30:25.083 "state": "online", 00:30:25.083 "raid_level": "raid1", 00:30:25.083 "superblock": true, 00:30:25.083 "num_base_bdevs": 2, 00:30:25.083 "num_base_bdevs_discovered": 2, 00:30:25.083 "num_base_bdevs_operational": 2, 00:30:25.083 "base_bdevs_list": [ 00:30:25.083 { 00:30:25.083 "name": "pt1", 00:30:25.083 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:25.083 "is_configured": true, 00:30:25.083 "data_offset": 256, 00:30:25.084 "data_size": 7936 00:30:25.084 }, 00:30:25.084 { 00:30:25.084 "name": "pt2", 00:30:25.084 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:25.084 "is_configured": true, 00:30:25.084 "data_offset": 256, 00:30:25.084 "data_size": 7936 00:30:25.084 } 00:30:25.084 ] 00:30:25.084 }' 00:30:25.084 02:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:25.084 02:36:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:30:25.652 02:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:30:25.652 02:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:30:25.652 02:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:30:25.652 02:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:30:25.652 02:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:30:25.652 02:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:30:25.652 02:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:25.652 02:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:30:25.911 [2024-07-11 02:36:16.190893] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:25.911 02:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:30:25.911 "name": "raid_bdev1", 00:30:25.911 "aliases": [ 00:30:25.911 "ca4b5ddf-4cdb-441d-8bbb-fd2362dd69a6" 00:30:25.911 ], 00:30:25.911 "product_name": "Raid Volume", 00:30:25.911 "block_size": 4096, 00:30:25.911 "num_blocks": 7936, 00:30:25.911 "uuid": "ca4b5ddf-4cdb-441d-8bbb-fd2362dd69a6", 00:30:25.911 "assigned_rate_limits": { 00:30:25.911 "rw_ios_per_sec": 0, 00:30:25.911 "rw_mbytes_per_sec": 0, 00:30:25.911 "r_mbytes_per_sec": 0, 00:30:25.911 "w_mbytes_per_sec": 0 00:30:25.911 }, 00:30:25.911 "claimed": false, 00:30:25.911 "zoned": false, 00:30:25.911 "supported_io_types": { 00:30:25.911 "read": true, 00:30:25.911 "write": true, 00:30:25.911 "unmap": false, 00:30:25.911 "flush": false, 00:30:25.911 "reset": true, 00:30:25.911 "nvme_admin": false, 00:30:25.911 "nvme_io": false, 00:30:25.911 "nvme_io_md": false, 00:30:25.911 "write_zeroes": true, 00:30:25.911 "zcopy": false, 00:30:25.911 "get_zone_info": false, 00:30:25.911 "zone_management": false, 00:30:25.911 "zone_append": false, 00:30:25.911 "compare": false, 00:30:25.911 "compare_and_write": false, 00:30:25.911 "abort": false, 00:30:25.911 "seek_hole": false, 00:30:25.911 "seek_data": false, 00:30:25.911 "copy": false, 00:30:25.911 "nvme_iov_md": false 00:30:25.911 }, 00:30:25.911 "memory_domains": [ 00:30:25.911 { 00:30:25.911 "dma_device_id": "system", 00:30:25.911 "dma_device_type": 1 00:30:25.911 }, 00:30:25.911 { 00:30:25.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:25.911 "dma_device_type": 2 00:30:25.911 }, 00:30:25.911 { 00:30:25.911 "dma_device_id": "system", 00:30:25.911 "dma_device_type": 1 00:30:25.911 }, 00:30:25.911 { 00:30:25.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:25.911 "dma_device_type": 2 00:30:25.911 } 00:30:25.911 ], 00:30:25.911 "driver_specific": { 00:30:25.911 "raid": { 00:30:25.911 "uuid": "ca4b5ddf-4cdb-441d-8bbb-fd2362dd69a6", 00:30:25.911 "strip_size_kb": 0, 00:30:25.911 "state": "online", 00:30:25.911 "raid_level": "raid1", 00:30:25.911 "superblock": true, 00:30:25.911 "num_base_bdevs": 2, 00:30:25.911 "num_base_bdevs_discovered": 2, 00:30:25.911 "num_base_bdevs_operational": 2, 00:30:25.912 "base_bdevs_list": [ 00:30:25.912 { 00:30:25.912 "name": "pt1", 00:30:25.912 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:25.912 "is_configured": true, 00:30:25.912 "data_offset": 256, 00:30:25.912 "data_size": 7936 00:30:25.912 }, 00:30:25.912 { 00:30:25.912 "name": "pt2", 00:30:25.912 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:25.912 "is_configured": true, 00:30:25.912 "data_offset": 256, 00:30:25.912 "data_size": 7936 00:30:25.912 } 00:30:25.912 ] 00:30:25.912 } 00:30:25.912 } 00:30:25.912 }' 00:30:25.912 02:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:25.912 02:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:30:25.912 pt2' 00:30:25.912 02:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:25.912 02:36:16 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:30:25.912 02:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:26.236 02:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:26.236 "name": "pt1", 00:30:26.236 "aliases": [ 00:30:26.236 "00000000-0000-0000-0000-000000000001" 00:30:26.236 ], 00:30:26.236 "product_name": "passthru", 00:30:26.236 "block_size": 4096, 00:30:26.236 "num_blocks": 8192, 00:30:26.236 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:26.236 "assigned_rate_limits": { 00:30:26.236 "rw_ios_per_sec": 0, 00:30:26.236 "rw_mbytes_per_sec": 0, 00:30:26.236 "r_mbytes_per_sec": 0, 00:30:26.236 "w_mbytes_per_sec": 0 00:30:26.236 }, 00:30:26.236 "claimed": true, 00:30:26.236 "claim_type": "exclusive_write", 00:30:26.236 "zoned": false, 00:30:26.236 "supported_io_types": { 00:30:26.236 "read": true, 00:30:26.236 "write": true, 00:30:26.236 "unmap": true, 00:30:26.236 "flush": true, 00:30:26.236 "reset": true, 00:30:26.236 "nvme_admin": false, 00:30:26.236 "nvme_io": false, 00:30:26.236 "nvme_io_md": false, 00:30:26.236 "write_zeroes": true, 00:30:26.236 "zcopy": true, 00:30:26.236 "get_zone_info": false, 00:30:26.236 "zone_management": false, 00:30:26.236 "zone_append": false, 00:30:26.236 "compare": false, 00:30:26.236 "compare_and_write": false, 00:30:26.236 "abort": true, 00:30:26.236 "seek_hole": false, 00:30:26.236 "seek_data": false, 00:30:26.236 "copy": true, 00:30:26.236 "nvme_iov_md": false 00:30:26.236 }, 00:30:26.236 "memory_domains": [ 00:30:26.236 { 00:30:26.236 "dma_device_id": "system", 00:30:26.236 "dma_device_type": 1 00:30:26.236 }, 00:30:26.236 { 00:30:26.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:26.236 "dma_device_type": 2 00:30:26.236 } 00:30:26.236 ], 00:30:26.236 "driver_specific": { 00:30:26.236 "passthru": { 00:30:26.236 "name": "pt1", 00:30:26.236 "base_bdev_name": "malloc1" 00:30:26.236 } 00:30:26.236 } 00:30:26.236 }' 00:30:26.236 02:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:26.236 02:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:26.236 02:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:30:26.236 02:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:26.236 02:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:26.495 02:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:26.495 02:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:26.495 02:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:26.495 02:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:26.495 02:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:26.495 02:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:26.495 02:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:26.495 02:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:26.495 02:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:30:26.495 02:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:26.754 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:26.754 "name": "pt2", 00:30:26.754 "aliases": [ 00:30:26.754 "00000000-0000-0000-0000-000000000002" 00:30:26.754 ], 00:30:26.754 "product_name": "passthru", 00:30:26.754 "block_size": 4096, 00:30:26.754 "num_blocks": 8192, 00:30:26.754 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:26.754 "assigned_rate_limits": { 00:30:26.754 "rw_ios_per_sec": 0, 00:30:26.754 "rw_mbytes_per_sec": 0, 00:30:26.754 "r_mbytes_per_sec": 0, 00:30:26.754 "w_mbytes_per_sec": 0 00:30:26.754 }, 00:30:26.754 "claimed": true, 00:30:26.754 "claim_type": "exclusive_write", 00:30:26.754 "zoned": false, 00:30:26.754 "supported_io_types": { 00:30:26.754 "read": true, 00:30:26.754 "write": true, 00:30:26.754 "unmap": true, 00:30:26.754 "flush": true, 00:30:26.754 "reset": true, 00:30:26.754 "nvme_admin": false, 00:30:26.754 "nvme_io": false, 00:30:26.754 "nvme_io_md": false, 00:30:26.754 "write_zeroes": true, 00:30:26.754 "zcopy": true, 00:30:26.754 "get_zone_info": false, 00:30:26.754 "zone_management": false, 00:30:26.754 "zone_append": false, 00:30:26.754 "compare": false, 00:30:26.754 "compare_and_write": false, 00:30:26.754 "abort": true, 00:30:26.754 "seek_hole": false, 00:30:26.754 "seek_data": false, 00:30:26.754 "copy": true, 00:30:26.754 "nvme_iov_md": false 00:30:26.754 }, 00:30:26.754 "memory_domains": [ 00:30:26.754 { 00:30:26.754 "dma_device_id": "system", 00:30:26.754 "dma_device_type": 1 00:30:26.754 }, 00:30:26.754 { 00:30:26.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:26.754 "dma_device_type": 2 00:30:26.754 } 00:30:26.754 ], 00:30:26.754 "driver_specific": { 00:30:26.754 "passthru": { 00:30:26.754 "name": "pt2", 00:30:26.754 "base_bdev_name": "malloc2" 00:30:26.754 } 00:30:26.754 } 00:30:26.754 }' 00:30:26.754 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:26.754 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:27.012 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:30:27.013 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:27.013 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:27.013 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:27.013 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:27.013 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:27.013 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:27.013 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:27.013 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:27.271 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:27.271 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:27.271 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 
-- # jq -r '.[] | .uuid' 00:30:27.529 [2024-07-11 02:36:17.694894] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:27.529 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' ca4b5ddf-4cdb-441d-8bbb-fd2362dd69a6 '!=' ca4b5ddf-4cdb-441d-8bbb-fd2362dd69a6 ']' 00:30:27.529 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:30:27.529 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:30:27.529 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:30:27.529 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:30:27.529 [2024-07-11 02:36:17.939313] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:30:27.788 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:27.788 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:27.788 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:27.788 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:27.788 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:27.788 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:27.788 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:27.788 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:27.788 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:27.788 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:27.788 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:27.788 02:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:27.788 02:36:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:27.788 "name": "raid_bdev1", 00:30:27.788 "uuid": "ca4b5ddf-4cdb-441d-8bbb-fd2362dd69a6", 00:30:27.788 "strip_size_kb": 0, 00:30:27.788 "state": "online", 00:30:27.788 "raid_level": "raid1", 00:30:27.788 "superblock": true, 00:30:27.788 "num_base_bdevs": 2, 00:30:27.788 "num_base_bdevs_discovered": 1, 00:30:27.788 "num_base_bdevs_operational": 1, 00:30:27.789 "base_bdevs_list": [ 00:30:27.789 { 00:30:27.789 "name": null, 00:30:27.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.789 "is_configured": false, 00:30:27.789 "data_offset": 256, 00:30:27.789 "data_size": 7936 00:30:27.789 }, 00:30:27.789 { 00:30:27.789 "name": "pt2", 00:30:27.789 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:27.789 "is_configured": true, 00:30:27.789 "data_offset": 256, 00:30:27.789 "data_size": 7936 00:30:27.789 } 00:30:27.789 ] 00:30:27.789 }' 00:30:27.789 02:36:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:27.789 02:36:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 
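[annotation — not part of the captured output] The trace above exercises the degraded-array path: after bdev_passthru_delete removes pt1, verify_raid_bdev_state still expects raid_bdev1 to report "online" with num_base_bdevs_discovered down to 1, since raid1 tolerates losing one mirror. A minimal sketch of the same check run by hand, assuming an SPDK target already serving /var/tmp/spdk-raid.sock as in this job (rpc.py abbreviates the full scripts/rpc.py path traced above):

    # remove one mirror, then confirm the raid1 bdev survives in degraded form
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .state, .num_base_bdevs_discovered'
    # expected here: "online" followed by 1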
00:30:28.724 02:36:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:28.724 [2024-07-11 02:36:18.953988] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:28.724 [2024-07-11 02:36:18.954014] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:28.724 [2024-07-11 02:36:18.954063] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:28.724 [2024-07-11 02:36:18.954104] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:28.724 [2024-07-11 02:36:18.954116] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x8f3900 name raid_bdev1, state offline 00:30:28.724 02:36:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:28.724 02:36:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:30:28.986 02:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:30:28.986 02:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:30:28.986 02:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:30:28.986 02:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:30:28.986 02:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:30:28.986 02:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:30:28.986 02:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:30:28.986 02:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:30:28.986 02:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:30:28.986 02:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:30:28.986 02:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:29.245 [2024-07-11 02:36:19.487386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:29.245 [2024-07-11 02:36:19.487435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:29.245 [2024-07-11 02:36:19.487453] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaa2f90 00:30:29.245 [2024-07-11 02:36:19.487466] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:29.245 [2024-07-11 02:36:19.489002] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:29.245 [2024-07-11 02:36:19.489030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:29.245 [2024-07-11 02:36:19.489090] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:30:29.245 [2024-07-11 02:36:19.489113] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:29.245 [2024-07-11 02:36:19.489193] 
bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x8f4040 00:30:29.245 [2024-07-11 02:36:19.489203] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:30:29.245 [2024-07-11 02:36:19.489365] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xaa1de0 00:30:29.245 [2024-07-11 02:36:19.489482] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x8f4040 00:30:29.245 [2024-07-11 02:36:19.489491] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x8f4040 00:30:29.245 [2024-07-11 02:36:19.489582] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:29.245 pt2 00:30:29.245 02:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:29.245 02:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:29.245 02:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:29.245 02:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:29.245 02:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:29.245 02:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:29.245 02:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:29.245 02:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:29.245 02:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:29.245 02:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:29.245 02:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:29.245 02:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:29.504 02:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:29.504 "name": "raid_bdev1", 00:30:29.504 "uuid": "ca4b5ddf-4cdb-441d-8bbb-fd2362dd69a6", 00:30:29.504 "strip_size_kb": 0, 00:30:29.504 "state": "online", 00:30:29.504 "raid_level": "raid1", 00:30:29.504 "superblock": true, 00:30:29.504 "num_base_bdevs": 2, 00:30:29.504 "num_base_bdevs_discovered": 1, 00:30:29.504 "num_base_bdevs_operational": 1, 00:30:29.504 "base_bdevs_list": [ 00:30:29.504 { 00:30:29.504 "name": null, 00:30:29.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:29.504 "is_configured": false, 00:30:29.504 "data_offset": 256, 00:30:29.504 "data_size": 7936 00:30:29.504 }, 00:30:29.504 { 00:30:29.504 "name": "pt2", 00:30:29.504 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:29.504 "is_configured": true, 00:30:29.504 "data_offset": 256, 00:30:29.504 "data_size": 7936 00:30:29.504 } 00:30:29.504 ] 00:30:29.504 }' 00:30:29.504 02:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:29.504 02:36:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:30:30.072 02:36:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete 
raid_bdev1 00:30:30.332 [2024-07-11 02:36:20.518235] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:30.332 [2024-07-11 02:36:20.518263] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:30.332 [2024-07-11 02:36:20.518319] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:30.332 [2024-07-11 02:36:20.518360] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:30.332 [2024-07-11 02:36:20.518372] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x8f4040 name raid_bdev1, state offline 00:30:30.332 02:36:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:30.332 02:36:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:30:30.591 02:36:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:30:30.591 02:36:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:30:30.591 02:36:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:30:30.591 02:36:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:30.850 [2024-07-11 02:36:21.015523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:30.850 [2024-07-11 02:36:21.015568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:30.850 [2024-07-11 02:36:21.015585] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaa31c0 00:30:30.850 [2024-07-11 02:36:21.015598] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:30.850 [2024-07-11 02:36:21.017135] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:30.850 [2024-07-11 02:36:21.017166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:30.850 [2024-07-11 02:36:21.017227] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:30:30.850 [2024-07-11 02:36:21.017251] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:30.850 [2024-07-11 02:36:21.017346] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:30:30.850 [2024-07-11 02:36:21.017359] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:30.850 [2024-07-11 02:36:21.017372] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xaa1a30 name raid_bdev1, state configuring 00:30:30.850 [2024-07-11 02:36:21.017395] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:30.851 [2024-07-11 02:36:21.017450] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xaa1cb0 00:30:30.851 [2024-07-11 02:36:21.017460] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:30:30.851 [2024-07-11 02:36:21.017621] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa9e7d0 00:30:30.851 [2024-07-11 02:36:21.017738] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xaa1cb0 
00:30:30.851 [2024-07-11 02:36:21.017748] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xaa1cb0 00:30:30.851 [2024-07-11 02:36:21.017849] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:30.851 pt1 00:30:30.851 02:36:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:30:30.851 02:36:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:30.851 02:36:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:30.851 02:36:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:30.851 02:36:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:30.851 02:36:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:30.851 02:36:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:30.851 02:36:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:30.851 02:36:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:30.851 02:36:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:30.851 02:36:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:30.851 02:36:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:30.851 02:36:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:31.110 02:36:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:31.110 "name": "raid_bdev1", 00:30:31.110 "uuid": "ca4b5ddf-4cdb-441d-8bbb-fd2362dd69a6", 00:30:31.110 "strip_size_kb": 0, 00:30:31.110 "state": "online", 00:30:31.110 "raid_level": "raid1", 00:30:31.110 "superblock": true, 00:30:31.110 "num_base_bdevs": 2, 00:30:31.110 "num_base_bdevs_discovered": 1, 00:30:31.110 "num_base_bdevs_operational": 1, 00:30:31.110 "base_bdevs_list": [ 00:30:31.110 { 00:30:31.110 "name": null, 00:30:31.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:31.110 "is_configured": false, 00:30:31.110 "data_offset": 256, 00:30:31.110 "data_size": 7936 00:30:31.110 }, 00:30:31.110 { 00:30:31.110 "name": "pt2", 00:30:31.110 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:31.110 "is_configured": true, 00:30:31.110 "data_offset": 256, 00:30:31.110 "data_size": 7936 00:30:31.110 } 00:30:31.110 ] 00:30:31.110 }' 00:30:31.110 02:36:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:31.110 02:36:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:30:31.678 02:36:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:30:31.678 02:36:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:30:31.937 02:36:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:30:31.937 02:36:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:31.937 02:36:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:30:31.937 [2024-07-11 02:36:22.347270] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:32.206 02:36:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' ca4b5ddf-4cdb-441d-8bbb-fd2362dd69a6 '!=' ca4b5ddf-4cdb-441d-8bbb-fd2362dd69a6 ']' 00:30:32.206 02:36:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 2041718 00:30:32.206 02:36:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@948 -- # '[' -z 2041718 ']' 00:30:32.206 02:36:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # kill -0 2041718 00:30:32.206 02:36:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # uname 00:30:32.206 02:36:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:32.206 02:36:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2041718 00:30:32.206 02:36:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:32.206 02:36:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:32.206 02:36:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2041718' 00:30:32.206 killing process with pid 2041718 00:30:32.206 02:36:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@967 -- # kill 2041718 00:30:32.206 [2024-07-11 02:36:22.421017] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:32.206 [2024-07-11 02:36:22.421072] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:32.206 [2024-07-11 02:36:22.421116] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:32.206 [2024-07-11 02:36:22.421128] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xaa1cb0 name raid_bdev1, state offline 00:30:32.206 02:36:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # wait 2041718 00:30:32.206 [2024-07-11 02:36:22.438792] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:32.467 02:36:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:30:32.467 00:30:32.467 real 0m16.374s 00:30:32.467 user 0m29.771s 00:30:32.467 sys 0m3.057s 00:30:32.467 02:36:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:32.467 02:36:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:30:32.467 ************************************ 00:30:32.467 END TEST raid_superblock_test_4k 00:30:32.467 ************************************ 00:30:32.467 02:36:22 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:30:32.467 02:36:22 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' true = true ']' 00:30:32.467 02:36:22 bdev_raid -- bdev/bdev_raid.sh@901 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:30:32.467 02:36:22 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:30:32.467 02:36:22 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:32.467 02:36:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:32.467 
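[annotation — not part of the captured output] The END TEST banner above closes raid_superblock_test_4k: the array was deleted, re-assembled from the on-disk superblock of the surviving base bdev, and the pass condition reduced to the uuid comparison traced at bdev_raid.sh@557. A hedged sketch of that final assertion, reusing the uuid from this run (rpc.py abbreviates the scripts/rpc.py path traced above):

    # the uuid must survive delete + re-examine if the superblock round-trips correctly
    uuid=$(rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
    [ "$uuid" = "ca4b5ddf-4cdb-441d-8bbb-fd2362dd69a6" ] && echo superblock intact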
************************************ 00:30:32.467 START TEST raid_rebuild_test_sb_4k 00:30:32.467 ************************************ 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local verify=true 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local strip_size 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local create_arg 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local data_offset 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # raid_pid=2044157 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # waitforlisten 2044157 /var/tmp/spdk-raid.sock 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 2044157 ']' 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 
00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:32.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:32.467 02:36:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:32.467 [2024-07-11 02:36:22.799654] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:30:32.467 [2024-07-11 02:36:22.799721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2044157 ] 00:30:32.467 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:32.467 Zero copy mechanism will not be used. 00:30:32.727 [2024-07-11 02:36:22.939753] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.727 [2024-07-11 02:36:22.988378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.727 [2024-07-11 02:36:23.043489] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:32.727 [2024-07-11 02:36:23.043522] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:33.664 02:36:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:33.664 02:36:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:30:33.664 02:36:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:33.664 02:36:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:30:33.664 BaseBdev1_malloc 00:30:33.664 02:36:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:33.924 [2024-07-11 02:36:24.163112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:33.924 [2024-07-11 02:36:24.163159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:33.924 [2024-07-11 02:36:24.163182] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12abee0 00:30:33.924 [2024-07-11 02:36:24.163194] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:33.924 [2024-07-11 02:36:24.164828] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:33.924 [2024-07-11 02:36:24.164858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:33.924 BaseBdev1 00:30:33.924 02:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:33.924 02:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:30:34.182 BaseBdev2_malloc 
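[annotation — not part of the captured output] Each base device for the rebuild test is a 32 MiB malloc bdev with 4096-byte blocks (8192 blocks), wrapped in a passthru bdev so the harness has a layer it can delete later to simulate pulling a disk; 8192 blocks minus the 256-block superblock data_offset leaves the 7936 data blocks seen in the raid JSON. The construction pattern, as traced above (rpc.py abbreviates the scripts/rpc.py path):

    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    # likewise BaseBdev2_malloc -> BaseBdev2, before bdev_raid_create -s -r raid1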
00:30:34.182 02:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:34.441 [2024-07-11 02:36:24.685338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:34.441 [2024-07-11 02:36:24.685386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:34.441 [2024-07-11 02:36:24.685406] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12ad870 00:30:34.441 [2024-07-11 02:36:24.685418] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:34.441 [2024-07-11 02:36:24.686806] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:34.441 [2024-07-11 02:36:24.686834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:34.441 BaseBdev2 00:30:34.441 02:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:30:34.699 spare_malloc 00:30:34.700 02:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:34.958 spare_delay 00:30:34.958 02:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:35.217 [2024-07-11 02:36:25.399808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:35.217 [2024-07-11 02:36:25.399856] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:35.217 [2024-07-11 02:36:25.399877] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12a81d0 00:30:35.217 [2024-07-11 02:36:25.399890] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:35.217 [2024-07-11 02:36:25.401330] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:35.217 [2024-07-11 02:36:25.401361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:35.217 spare 00:30:35.217 02:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:30:35.475 [2024-07-11 02:36:25.648478] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:35.475 [2024-07-11 02:36:25.649647] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:35.475 [2024-07-11 02:36:25.649807] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x12a8d30 00:30:35.475 [2024-07-11 02:36:25.649820] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:30:35.475 [2024-07-11 02:36:25.650006] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12aa6d0 00:30:35.475 [2024-07-11 02:36:25.650138] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x12a8d30 00:30:35.475 [2024-07-11 02:36:25.650148] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name raid_bdev1, raid_bdev 0x12a8d30 00:30:35.475 [2024-07-11 02:36:25.650239] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:35.475 02:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:35.475 02:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:35.476 02:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:35.476 02:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:35.476 02:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:35.476 02:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:35.476 02:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:35.476 02:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:35.476 02:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:35.476 02:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:35.476 02:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:35.476 02:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:35.733 02:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:35.733 "name": "raid_bdev1", 00:30:35.733 "uuid": "3734a80c-cbf8-4d76-a910-8251a23585b8", 00:30:35.733 "strip_size_kb": 0, 00:30:35.733 "state": "online", 00:30:35.733 "raid_level": "raid1", 00:30:35.733 "superblock": true, 00:30:35.733 "num_base_bdevs": 2, 00:30:35.733 "num_base_bdevs_discovered": 2, 00:30:35.733 "num_base_bdevs_operational": 2, 00:30:35.733 "base_bdevs_list": [ 00:30:35.733 { 00:30:35.733 "name": "BaseBdev1", 00:30:35.733 "uuid": "5e9010a3-cd0c-58b0-9b4d-95beb84e9cd5", 00:30:35.733 "is_configured": true, 00:30:35.733 "data_offset": 256, 00:30:35.733 "data_size": 7936 00:30:35.733 }, 00:30:35.733 { 00:30:35.733 "name": "BaseBdev2", 00:30:35.733 "uuid": "dc56e80d-5e0f-512e-a40f-bb14b3c8a12c", 00:30:35.733 "is_configured": true, 00:30:35.733 "data_offset": 256, 00:30:35.733 "data_size": 7936 00:30:35.733 } 00:30:35.733 ] 00:30:35.733 }' 00:30:35.733 02:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:35.733 02:36:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:36.301 02:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:36.301 02:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:30:36.560 [2024-07-11 02:36:26.811805] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:36.560 02:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:30:36.560 02:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:30:36.560 02:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:36.818 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:30:36.819 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:30:36.819 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:30:36.819 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:30:36.819 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:30:36.819 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:36.819 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:30:36.819 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:36.819 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:30:36.819 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:36.819 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:30:36.819 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:36.819 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:36.819 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:30:37.076 [2024-07-11 02:36:27.345003] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1110fc0 00:30:37.076 /dev/nbd0 00:30:37.076 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:37.076 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:37.076 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:30:37.076 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:30:37.076 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:37.076 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:37.076 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:30:37.076 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:30:37.076 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:37.076 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:37.076 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:37.076 1+0 records in 00:30:37.076 1+0 records out 00:30:37.076 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256592 s, 16.0 MB/s 00:30:37.076 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:37.076 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@884 -- # size=4096 00:30:37.076 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:37.076 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:37.076 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:30:37.076 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:37.076 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:37.076 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:30:37.076 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:30:37.076 02:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:30:38.010 7936+0 records in 00:30:38.010 7936+0 records out 00:30:38.010 32505856 bytes (33 MB, 31 MiB) copied, 0.758889 s, 42.8 MB/s 00:30:38.010 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:30:38.010 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:38.010 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:38.010 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:38.010 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:30:38.010 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:38.010 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:38.322 [2024-07-11 02:36:28.434995] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:38.322 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:38.322 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:38.322 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:38.322 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:38.322 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:38.322 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:38.322 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:30:38.322 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:30:38.322 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:30:38.322 [2024-07-11 02:36:28.679688] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:38.581 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:38.581 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:38.581 02:36:28 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:38.581 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:38.581 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:38.581 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:38.581 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:38.581 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:38.581 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:38.581 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:38.581 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:38.581 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:38.581 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:38.581 "name": "raid_bdev1", 00:30:38.581 "uuid": "3734a80c-cbf8-4d76-a910-8251a23585b8", 00:30:38.581 "strip_size_kb": 0, 00:30:38.581 "state": "online", 00:30:38.581 "raid_level": "raid1", 00:30:38.581 "superblock": true, 00:30:38.581 "num_base_bdevs": 2, 00:30:38.581 "num_base_bdevs_discovered": 1, 00:30:38.581 "num_base_bdevs_operational": 1, 00:30:38.581 "base_bdevs_list": [ 00:30:38.581 { 00:30:38.581 "name": null, 00:30:38.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:38.581 "is_configured": false, 00:30:38.581 "data_offset": 256, 00:30:38.581 "data_size": 7936 00:30:38.581 }, 00:30:38.581 { 00:30:38.581 "name": "BaseBdev2", 00:30:38.581 "uuid": "dc56e80d-5e0f-512e-a40f-bb14b3c8a12c", 00:30:38.581 "is_configured": true, 00:30:38.581 "data_offset": 256, 00:30:38.581 "data_size": 7936 00:30:38.581 } 00:30:38.581 ] 00:30:38.581 }' 00:30:38.581 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:38.581 02:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:39.516 02:36:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:39.516 [2024-07-11 02:36:29.810749] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:39.516 [2024-07-11 02:36:29.815591] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12abb10 00:30:39.516 [2024-07-11 02:36:29.817861] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:39.516 02:36:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # sleep 1 00:30:40.452 02:36:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:40.452 02:36:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:40.452 02:36:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:40.452 02:36:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:40.452 02:36:30 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:40.452 02:36:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:40.452 02:36:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:40.711 02:36:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:40.711 "name": "raid_bdev1", 00:30:40.711 "uuid": "3734a80c-cbf8-4d76-a910-8251a23585b8", 00:30:40.711 "strip_size_kb": 0, 00:30:40.711 "state": "online", 00:30:40.711 "raid_level": "raid1", 00:30:40.711 "superblock": true, 00:30:40.711 "num_base_bdevs": 2, 00:30:40.711 "num_base_bdevs_discovered": 2, 00:30:40.711 "num_base_bdevs_operational": 2, 00:30:40.711 "process": { 00:30:40.711 "type": "rebuild", 00:30:40.711 "target": "spare", 00:30:40.711 "progress": { 00:30:40.711 "blocks": 3072, 00:30:40.711 "percent": 38 00:30:40.711 } 00:30:40.711 }, 00:30:40.711 "base_bdevs_list": [ 00:30:40.711 { 00:30:40.711 "name": "spare", 00:30:40.711 "uuid": "3e5afb20-a5bf-51b6-91b9-6c8c765215d1", 00:30:40.712 "is_configured": true, 00:30:40.712 "data_offset": 256, 00:30:40.712 "data_size": 7936 00:30:40.712 }, 00:30:40.712 { 00:30:40.712 "name": "BaseBdev2", 00:30:40.712 "uuid": "dc56e80d-5e0f-512e-a40f-bb14b3c8a12c", 00:30:40.712 "is_configured": true, 00:30:40.712 "data_offset": 256, 00:30:40.712 "data_size": 7936 00:30:40.712 } 00:30:40.712 ] 00:30:40.712 }' 00:30:40.712 02:36:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:40.970 02:36:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:40.970 02:36:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:40.970 02:36:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:40.970 02:36:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:41.229 [2024-07-11 02:36:31.429048] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:41.229 [2024-07-11 02:36:31.430343] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:41.229 [2024-07-11 02:36:31.430385] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:41.229 [2024-07-11 02:36:31.430400] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:41.229 [2024-07-11 02:36:31.430408] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:41.229 02:36:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:41.229 02:36:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:41.229 02:36:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:41.229 02:36:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:41.229 02:36:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:41.229 02:36:31 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:41.229 02:36:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:41.229 02:36:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:41.229 02:36:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:41.229 02:36:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:41.229 02:36:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:41.229 02:36:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:41.487 02:36:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:41.487 "name": "raid_bdev1", 00:30:41.487 "uuid": "3734a80c-cbf8-4d76-a910-8251a23585b8", 00:30:41.487 "strip_size_kb": 0, 00:30:41.487 "state": "online", 00:30:41.487 "raid_level": "raid1", 00:30:41.487 "superblock": true, 00:30:41.487 "num_base_bdevs": 2, 00:30:41.487 "num_base_bdevs_discovered": 1, 00:30:41.487 "num_base_bdevs_operational": 1, 00:30:41.487 "base_bdevs_list": [ 00:30:41.487 { 00:30:41.487 "name": null, 00:30:41.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:41.487 "is_configured": false, 00:30:41.487 "data_offset": 256, 00:30:41.487 "data_size": 7936 00:30:41.487 }, 00:30:41.487 { 00:30:41.487 "name": "BaseBdev2", 00:30:41.487 "uuid": "dc56e80d-5e0f-512e-a40f-bb14b3c8a12c", 00:30:41.487 "is_configured": true, 00:30:41.487 "data_offset": 256, 00:30:41.487 "data_size": 7936 00:30:41.487 } 00:30:41.487 ] 00:30:41.487 }' 00:30:41.487 02:36:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:41.487 02:36:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:42.054 02:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:42.054 02:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:42.054 02:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:42.054 02:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:42.054 02:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:42.054 02:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:42.054 02:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:42.313 02:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:42.313 "name": "raid_bdev1", 00:30:42.313 "uuid": "3734a80c-cbf8-4d76-a910-8251a23585b8", 00:30:42.313 "strip_size_kb": 0, 00:30:42.313 "state": "online", 00:30:42.313 "raid_level": "raid1", 00:30:42.313 "superblock": true, 00:30:42.313 "num_base_bdevs": 2, 00:30:42.313 "num_base_bdevs_discovered": 1, 00:30:42.313 "num_base_bdevs_operational": 1, 00:30:42.313 "base_bdevs_list": [ 00:30:42.313 { 00:30:42.313 "name": null, 00:30:42.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:42.313 "is_configured": false, 00:30:42.313 "data_offset": 
256, 00:30:42.313 "data_size": 7936 00:30:42.313 }, 00:30:42.313 { 00:30:42.313 "name": "BaseBdev2", 00:30:42.313 "uuid": "dc56e80d-5e0f-512e-a40f-bb14b3c8a12c", 00:30:42.313 "is_configured": true, 00:30:42.313 "data_offset": 256, 00:30:42.313 "data_size": 7936 00:30:42.313 } 00:30:42.313 ] 00:30:42.313 }' 00:30:42.313 02:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:42.313 02:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:42.313 02:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:42.313 02:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:42.313 02:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:42.572 [2024-07-11 02:36:32.922917] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:42.572 [2024-07-11 02:36:32.928383] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12abbc0 00:30:42.572 [2024-07-11 02:36:32.929868] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:42.572 02:36:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:30:43.947 02:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:43.947 02:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:43.947 02:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:43.947 02:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:43.947 02:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:43.947 02:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:43.947 02:36:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:43.947 02:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:43.947 "name": "raid_bdev1", 00:30:43.947 "uuid": "3734a80c-cbf8-4d76-a910-8251a23585b8", 00:30:43.947 "strip_size_kb": 0, 00:30:43.947 "state": "online", 00:30:43.947 "raid_level": "raid1", 00:30:43.947 "superblock": true, 00:30:43.947 "num_base_bdevs": 2, 00:30:43.947 "num_base_bdevs_discovered": 2, 00:30:43.947 "num_base_bdevs_operational": 2, 00:30:43.947 "process": { 00:30:43.947 "type": "rebuild", 00:30:43.947 "target": "spare", 00:30:43.947 "progress": { 00:30:43.947 "blocks": 3072, 00:30:43.947 "percent": 38 00:30:43.947 } 00:30:43.947 }, 00:30:43.947 "base_bdevs_list": [ 00:30:43.947 { 00:30:43.947 "name": "spare", 00:30:43.948 "uuid": "3e5afb20-a5bf-51b6-91b9-6c8c765215d1", 00:30:43.948 "is_configured": true, 00:30:43.948 "data_offset": 256, 00:30:43.948 "data_size": 7936 00:30:43.948 }, 00:30:43.948 { 00:30:43.948 "name": "BaseBdev2", 00:30:43.948 "uuid": "dc56e80d-5e0f-512e-a40f-bb14b3c8a12c", 00:30:43.948 "is_configured": true, 00:30:43.948 "data_offset": 256, 00:30:43.948 "data_size": 7936 00:30:43.948 } 00:30:43.948 ] 00:30:43.948 }' 00:30:43.948 
02:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:43.948 02:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:43.948 02:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:43.948 02:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:43.948 02:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:30:43.948 02:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:30:43.948 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:30:43.948 02:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:30:43.948 02:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:30:43.948 02:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:30:43.948 02:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@705 -- # local timeout=1047 00:30:43.948 02:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:43.948 02:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:43.948 02:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:43.948 02:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:43.948 02:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:43.948 02:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:43.948 02:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:43.948 02:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:44.206 02:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:44.206 "name": "raid_bdev1", 00:30:44.206 "uuid": "3734a80c-cbf8-4d76-a910-8251a23585b8", 00:30:44.206 "strip_size_kb": 0, 00:30:44.206 "state": "online", 00:30:44.206 "raid_level": "raid1", 00:30:44.206 "superblock": true, 00:30:44.206 "num_base_bdevs": 2, 00:30:44.206 "num_base_bdevs_discovered": 2, 00:30:44.206 "num_base_bdevs_operational": 2, 00:30:44.206 "process": { 00:30:44.206 "type": "rebuild", 00:30:44.206 "target": "spare", 00:30:44.206 "progress": { 00:30:44.206 "blocks": 3840, 00:30:44.206 "percent": 48 00:30:44.206 } 00:30:44.206 }, 00:30:44.206 "base_bdevs_list": [ 00:30:44.206 { 00:30:44.206 "name": "spare", 00:30:44.206 "uuid": "3e5afb20-a5bf-51b6-91b9-6c8c765215d1", 00:30:44.206 "is_configured": true, 00:30:44.206 "data_offset": 256, 00:30:44.206 "data_size": 7936 00:30:44.206 }, 00:30:44.206 { 00:30:44.206 "name": "BaseBdev2", 00:30:44.206 "uuid": "dc56e80d-5e0f-512e-a40f-bb14b3c8a12c", 00:30:44.206 "is_configured": true, 00:30:44.206 "data_offset": 256, 00:30:44.206 "data_size": 7936 00:30:44.206 } 00:30:44.206 ] 00:30:44.206 }' 00:30:44.206 02:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:44.206 
02:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:44.206 02:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:44.464 02:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:44.464 02:36:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:45.402 02:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:45.402 02:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:45.402 02:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:45.402 02:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:45.402 02:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:45.402 02:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:45.402 02:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:45.402 02:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:45.662 02:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:45.662 "name": "raid_bdev1", 00:30:45.662 "uuid": "3734a80c-cbf8-4d76-a910-8251a23585b8", 00:30:45.662 "strip_size_kb": 0, 00:30:45.662 "state": "online", 00:30:45.662 "raid_level": "raid1", 00:30:45.662 "superblock": true, 00:30:45.662 "num_base_bdevs": 2, 00:30:45.662 "num_base_bdevs_discovered": 2, 00:30:45.662 "num_base_bdevs_operational": 2, 00:30:45.662 "process": { 00:30:45.662 "type": "rebuild", 00:30:45.662 "target": "spare", 00:30:45.662 "progress": { 00:30:45.662 "blocks": 7424, 00:30:45.662 "percent": 93 00:30:45.662 } 00:30:45.662 }, 00:30:45.662 "base_bdevs_list": [ 00:30:45.662 { 00:30:45.662 "name": "spare", 00:30:45.662 "uuid": "3e5afb20-a5bf-51b6-91b9-6c8c765215d1", 00:30:45.662 "is_configured": true, 00:30:45.662 "data_offset": 256, 00:30:45.662 "data_size": 7936 00:30:45.662 }, 00:30:45.662 { 00:30:45.662 "name": "BaseBdev2", 00:30:45.662 "uuid": "dc56e80d-5e0f-512e-a40f-bb14b3c8a12c", 00:30:45.662 "is_configured": true, 00:30:45.662 "data_offset": 256, 00:30:45.662 "data_size": 7936 00:30:45.662 } 00:30:45.662 ] 00:30:45.662 }' 00:30:45.662 02:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:45.662 02:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:45.662 02:36:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:45.662 02:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:45.662 02:36:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:45.662 [2024-07-11 02:36:36.054121] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:45.662 [2024-07-11 02:36:36.054185] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:45.662 [2024-07-11 02:36:36.054269] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:30:47.036 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:47.036 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:47.036 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:47.036 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:47.036 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:47.036 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:47.036 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:47.036 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:47.036 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:47.036 "name": "raid_bdev1", 00:30:47.036 "uuid": "3734a80c-cbf8-4d76-a910-8251a23585b8", 00:30:47.036 "strip_size_kb": 0, 00:30:47.036 "state": "online", 00:30:47.036 "raid_level": "raid1", 00:30:47.036 "superblock": true, 00:30:47.036 "num_base_bdevs": 2, 00:30:47.036 "num_base_bdevs_discovered": 2, 00:30:47.036 "num_base_bdevs_operational": 2, 00:30:47.036 "base_bdevs_list": [ 00:30:47.036 { 00:30:47.036 "name": "spare", 00:30:47.036 "uuid": "3e5afb20-a5bf-51b6-91b9-6c8c765215d1", 00:30:47.036 "is_configured": true, 00:30:47.036 "data_offset": 256, 00:30:47.036 "data_size": 7936 00:30:47.036 }, 00:30:47.036 { 00:30:47.036 "name": "BaseBdev2", 00:30:47.036 "uuid": "dc56e80d-5e0f-512e-a40f-bb14b3c8a12c", 00:30:47.036 "is_configured": true, 00:30:47.036 "data_offset": 256, 00:30:47.036 "data_size": 7936 00:30:47.036 } 00:30:47.036 ] 00:30:47.036 }' 00:30:47.036 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:47.036 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:47.036 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:47.036 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:30:47.036 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # break 00:30:47.036 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:47.036 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:47.036 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:47.036 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:47.036 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:47.036 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:47.036 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:47.294 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:30:47.294 "name": "raid_bdev1", 00:30:47.294 "uuid": "3734a80c-cbf8-4d76-a910-8251a23585b8", 00:30:47.294 "strip_size_kb": 0, 00:30:47.294 "state": "online", 00:30:47.294 "raid_level": "raid1", 00:30:47.294 "superblock": true, 00:30:47.294 "num_base_bdevs": 2, 00:30:47.294 "num_base_bdevs_discovered": 2, 00:30:47.294 "num_base_bdevs_operational": 2, 00:30:47.294 "base_bdevs_list": [ 00:30:47.294 { 00:30:47.294 "name": "spare", 00:30:47.294 "uuid": "3e5afb20-a5bf-51b6-91b9-6c8c765215d1", 00:30:47.294 "is_configured": true, 00:30:47.294 "data_offset": 256, 00:30:47.294 "data_size": 7936 00:30:47.294 }, 00:30:47.294 { 00:30:47.294 "name": "BaseBdev2", 00:30:47.294 "uuid": "dc56e80d-5e0f-512e-a40f-bb14b3c8a12c", 00:30:47.294 "is_configured": true, 00:30:47.294 "data_offset": 256, 00:30:47.294 "data_size": 7936 00:30:47.294 } 00:30:47.294 ] 00:30:47.294 }' 00:30:47.294 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:47.294 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:47.294 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:47.553 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:47.553 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:47.553 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:47.553 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:47.553 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:47.553 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:47.553 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:47.553 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:47.553 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:47.553 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:47.553 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:47.553 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:47.553 02:36:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:48.120 02:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:48.120 "name": "raid_bdev1", 00:30:48.120 "uuid": "3734a80c-cbf8-4d76-a910-8251a23585b8", 00:30:48.120 "strip_size_kb": 0, 00:30:48.120 "state": "online", 00:30:48.120 "raid_level": "raid1", 00:30:48.120 "superblock": true, 00:30:48.120 "num_base_bdevs": 2, 00:30:48.120 "num_base_bdevs_discovered": 2, 00:30:48.120 "num_base_bdevs_operational": 2, 00:30:48.120 "base_bdevs_list": [ 00:30:48.120 { 00:30:48.120 "name": "spare", 00:30:48.120 "uuid": "3e5afb20-a5bf-51b6-91b9-6c8c765215d1", 00:30:48.120 "is_configured": true, 00:30:48.120 "data_offset": 256, 00:30:48.120 "data_size": 7936 00:30:48.120 }, 00:30:48.120 { 00:30:48.120 "name": 
"BaseBdev2", 00:30:48.120 "uuid": "dc56e80d-5e0f-512e-a40f-bb14b3c8a12c", 00:30:48.120 "is_configured": true, 00:30:48.120 "data_offset": 256, 00:30:48.120 "data_size": 7936 00:30:48.120 } 00:30:48.120 ] 00:30:48.120 }' 00:30:48.120 02:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:48.120 02:36:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:48.686 02:36:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:48.687 [2024-07-11 02:36:39.078894] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:48.687 [2024-07-11 02:36:39.078924] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:48.687 [2024-07-11 02:36:39.078984] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:48.687 [2024-07-11 02:36:39.079042] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:48.687 [2024-07-11 02:36:39.079054] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12a8d30 name raid_bdev1, state offline 00:30:48.687 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:48.687 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # jq length 00:30:48.985 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:30:48.985 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:30:48.985 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:30:48.985 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:30:48.985 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:48.985 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:30:48.985 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:48.985 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:48.985 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:48.985 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:30:48.985 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:48.985 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:48.985 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:30:49.276 /dev/nbd0 00:30:49.276 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:49.276 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:49.277 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:30:49.277 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@867 -- # local i 00:30:49.277 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:49.277 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:49.277 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:30:49.277 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:30:49.277 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:49.277 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:49.277 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:49.277 1+0 records in 00:30:49.277 1+0 records out 00:30:49.277 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254991 s, 16.1 MB/s 00:30:49.277 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:49.277 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:30:49.277 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:49.277 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:49.277 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:30:49.277 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:49.277 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:49.277 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:30:49.536 /dev/nbd1 00:30:49.536 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:49.536 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:49.536 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:30:49.536 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:30:49.536 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:49.536 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:49.536 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:30:49.536 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:30:49.536 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:49.536 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:49.536 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:49.536 1+0 records in 00:30:49.536 1+0 records out 00:30:49.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324049 s, 12.6 MB/s 00:30:49.536 
02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:49.536 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:30:49.536 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:49.536 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:49.536 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:30:49.536 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:49.536 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:49.536 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:49.536 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:30:49.536 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:49.536 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:49.536 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:49.536 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:30:49.536 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:49.536 02:36:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:49.794 02:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:49.794 02:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:49.794 02:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:49.794 02:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:49.794 02:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:49.794 02:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:49.794 02:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:30:49.794 02:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:30:49.794 02:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:49.794 02:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:50.360 02:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:50.360 02:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:50.360 02:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:50.360 02:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:50.360 02:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:50.360 02:36:40 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:50.360 02:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:30:50.360 02:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:30:50.361 02:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:30:50.361 02:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:50.624 02:36:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:50.883 [2024-07-11 02:36:41.143806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:50.883 [2024-07-11 02:36:41.143855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:50.883 [2024-07-11 02:36:41.143876] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12aa100 00:30:50.883 [2024-07-11 02:36:41.143889] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:50.883 [2024-07-11 02:36:41.145483] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:50.883 [2024-07-11 02:36:41.145512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:50.883 [2024-07-11 02:36:41.145596] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:50.883 [2024-07-11 02:36:41.145623] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:50.883 [2024-07-11 02:36:41.145725] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:50.883 spare 00:30:50.883 02:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:50.883 02:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:50.883 02:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:50.883 02:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:50.883 02:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:50.883 02:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:50.884 02:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:50.884 02:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:50.884 02:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:50.884 02:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:50.884 02:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:50.884 02:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:50.884 [2024-07-11 02:36:41.246052] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x12a24d0 00:30:50.884 [2024-07-11 
02:36:41.246072] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:30:50.884 [2024-07-11 02:36:41.246275] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12a89c0 00:30:50.884 [2024-07-11 02:36:41.246432] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x12a24d0 00:30:50.884 [2024-07-11 02:36:41.246443] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x12a24d0 00:30:50.884 [2024-07-11 02:36:41.246551] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:51.142 02:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:51.142 "name": "raid_bdev1", 00:30:51.142 "uuid": "3734a80c-cbf8-4d76-a910-8251a23585b8", 00:30:51.142 "strip_size_kb": 0, 00:30:51.142 "state": "online", 00:30:51.142 "raid_level": "raid1", 00:30:51.142 "superblock": true, 00:30:51.142 "num_base_bdevs": 2, 00:30:51.142 "num_base_bdevs_discovered": 2, 00:30:51.142 "num_base_bdevs_operational": 2, 00:30:51.142 "base_bdevs_list": [ 00:30:51.142 { 00:30:51.142 "name": "spare", 00:30:51.142 "uuid": "3e5afb20-a5bf-51b6-91b9-6c8c765215d1", 00:30:51.142 "is_configured": true, 00:30:51.142 "data_offset": 256, 00:30:51.142 "data_size": 7936 00:30:51.142 }, 00:30:51.142 { 00:30:51.142 "name": "BaseBdev2", 00:30:51.142 "uuid": "dc56e80d-5e0f-512e-a40f-bb14b3c8a12c", 00:30:51.142 "is_configured": true, 00:30:51.142 "data_offset": 256, 00:30:51.142 "data_size": 7936 00:30:51.142 } 00:30:51.142 ] 00:30:51.142 }' 00:30:51.142 02:36:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:51.142 02:36:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:51.708 02:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:51.708 02:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:51.708 02:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:51.708 02:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:51.708 02:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:51.708 02:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:51.708 02:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:51.967 02:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:51.967 "name": "raid_bdev1", 00:30:51.967 "uuid": "3734a80c-cbf8-4d76-a910-8251a23585b8", 00:30:51.967 "strip_size_kb": 0, 00:30:51.967 "state": "online", 00:30:51.967 "raid_level": "raid1", 00:30:51.967 "superblock": true, 00:30:51.967 "num_base_bdevs": 2, 00:30:51.967 "num_base_bdevs_discovered": 2, 00:30:51.967 "num_base_bdevs_operational": 2, 00:30:51.967 "base_bdevs_list": [ 00:30:51.967 { 00:30:51.967 "name": "spare", 00:30:51.967 "uuid": "3e5afb20-a5bf-51b6-91b9-6c8c765215d1", 00:30:51.967 "is_configured": true, 00:30:51.967 "data_offset": 256, 00:30:51.967 "data_size": 7936 00:30:51.967 }, 00:30:51.967 { 00:30:51.967 "name": "BaseBdev2", 00:30:51.967 "uuid": "dc56e80d-5e0f-512e-a40f-bb14b3c8a12c", 00:30:51.967 "is_configured": true, 00:30:51.967 
"data_offset": 256, 00:30:51.967 "data_size": 7936 00:30:51.967 } 00:30:51.967 ] 00:30:51.967 }' 00:30:51.967 02:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:51.967 02:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:51.967 02:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:51.967 02:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:51.967 02:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:51.967 02:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:30:52.226 02:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:30:52.226 02:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:52.485 [2024-07-11 02:36:42.760210] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:52.486 02:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:52.486 02:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:52.486 02:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:52.486 02:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:52.486 02:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:52.486 02:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:52.486 02:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:52.486 02:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:52.486 02:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:52.486 02:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:52.486 02:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:52.486 02:36:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:52.745 02:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:52.745 "name": "raid_bdev1", 00:30:52.745 "uuid": "3734a80c-cbf8-4d76-a910-8251a23585b8", 00:30:52.745 "strip_size_kb": 0, 00:30:52.745 "state": "online", 00:30:52.745 "raid_level": "raid1", 00:30:52.745 "superblock": true, 00:30:52.745 "num_base_bdevs": 2, 00:30:52.745 "num_base_bdevs_discovered": 1, 00:30:52.745 "num_base_bdevs_operational": 1, 00:30:52.745 "base_bdevs_list": [ 00:30:52.745 { 00:30:52.745 "name": null, 00:30:52.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:52.745 "is_configured": false, 00:30:52.745 "data_offset": 256, 00:30:52.745 "data_size": 7936 00:30:52.745 }, 00:30:52.745 { 00:30:52.745 "name": "BaseBdev2", 00:30:52.745 "uuid": 
"dc56e80d-5e0f-512e-a40f-bb14b3c8a12c", 00:30:52.745 "is_configured": true, 00:30:52.745 "data_offset": 256, 00:30:52.745 "data_size": 7936 00:30:52.745 } 00:30:52.745 ] 00:30:52.745 }' 00:30:52.745 02:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:52.745 02:36:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:53.318 02:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:53.584 [2024-07-11 02:36:43.883373] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:53.584 [2024-07-11 02:36:43.883535] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:30:53.584 [2024-07-11 02:36:43.883553] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:30:53.584 [2024-07-11 02:36:43.883578] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:53.584 [2024-07-11 02:36:43.888929] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10fb560 00:30:53.584 [2024-07-11 02:36:43.891250] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:53.584 02:36:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # sleep 1 00:30:54.520 02:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:54.520 02:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:54.520 02:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:54.520 02:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:54.520 02:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:54.520 02:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:54.520 02:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:54.778 02:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:54.778 "name": "raid_bdev1", 00:30:54.778 "uuid": "3734a80c-cbf8-4d76-a910-8251a23585b8", 00:30:54.778 "strip_size_kb": 0, 00:30:54.778 "state": "online", 00:30:54.778 "raid_level": "raid1", 00:30:54.778 "superblock": true, 00:30:54.778 "num_base_bdevs": 2, 00:30:54.778 "num_base_bdevs_discovered": 2, 00:30:54.778 "num_base_bdevs_operational": 2, 00:30:54.778 "process": { 00:30:54.778 "type": "rebuild", 00:30:54.778 "target": "spare", 00:30:54.778 "progress": { 00:30:54.778 "blocks": 3072, 00:30:54.778 "percent": 38 00:30:54.778 } 00:30:54.778 }, 00:30:54.778 "base_bdevs_list": [ 00:30:54.778 { 00:30:54.778 "name": "spare", 00:30:54.778 "uuid": "3e5afb20-a5bf-51b6-91b9-6c8c765215d1", 00:30:54.778 "is_configured": true, 00:30:54.778 "data_offset": 256, 00:30:54.778 "data_size": 7936 00:30:54.778 }, 00:30:54.778 { 00:30:54.778 "name": "BaseBdev2", 00:30:54.778 "uuid": "dc56e80d-5e0f-512e-a40f-bb14b3c8a12c", 00:30:54.778 "is_configured": true, 00:30:54.778 "data_offset": 256, 00:30:54.778 "data_size": 
7936 00:30:54.778 } 00:30:54.778 ] 00:30:54.778 }' 00:30:54.778 02:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:55.037 02:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:55.037 02:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:55.037 02:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:55.037 02:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:55.037 [2024-07-11 02:36:45.453853] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:55.295 [2024-07-11 02:36:45.504042] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:55.295 [2024-07-11 02:36:45.504088] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:55.295 [2024-07-11 02:36:45.504103] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:55.295 [2024-07-11 02:36:45.504112] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:55.295 02:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:55.295 02:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:55.295 02:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:55.295 02:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:55.295 02:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:55.295 02:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:55.295 02:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:55.295 02:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:55.295 02:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:55.295 02:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:55.295 02:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:55.295 02:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:55.553 02:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:55.553 "name": "raid_bdev1", 00:30:55.553 "uuid": "3734a80c-cbf8-4d76-a910-8251a23585b8", 00:30:55.553 "strip_size_kb": 0, 00:30:55.553 "state": "online", 00:30:55.553 "raid_level": "raid1", 00:30:55.553 "superblock": true, 00:30:55.553 "num_base_bdevs": 2, 00:30:55.553 "num_base_bdevs_discovered": 1, 00:30:55.553 "num_base_bdevs_operational": 1, 00:30:55.553 "base_bdevs_list": [ 00:30:55.553 { 00:30:55.553 "name": null, 00:30:55.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:55.554 "is_configured": false, 00:30:55.554 "data_offset": 256, 00:30:55.554 "data_size": 7936 00:30:55.554 }, 00:30:55.554 { 00:30:55.554 
"name": "BaseBdev2", 00:30:55.554 "uuid": "dc56e80d-5e0f-512e-a40f-bb14b3c8a12c", 00:30:55.554 "is_configured": true, 00:30:55.554 "data_offset": 256, 00:30:55.554 "data_size": 7936 00:30:55.554 } 00:30:55.554 ] 00:30:55.554 }' 00:30:55.554 02:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:55.554 02:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:56.120 02:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:56.379 [2024-07-11 02:36:46.547414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:56.379 [2024-07-11 02:36:46.547466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:56.379 [2024-07-11 02:36:46.547488] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10fada0 00:30:56.379 [2024-07-11 02:36:46.547500] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:56.379 [2024-07-11 02:36:46.547889] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:56.379 [2024-07-11 02:36:46.547908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:56.379 [2024-07-11 02:36:46.547993] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:56.379 [2024-07-11 02:36:46.548006] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:30:56.379 [2024-07-11 02:36:46.548017] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:30:56.379 [2024-07-11 02:36:46.548034] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:56.379 [2024-07-11 02:36:46.552753] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12a9010 00:30:56.379 spare 00:30:56.379 [2024-07-11 02:36:46.554184] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:56.379 02:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # sleep 1 00:30:57.315 02:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:57.315 02:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:57.315 02:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:57.315 02:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:57.315 02:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:57.315 02:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:57.315 02:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:57.881 02:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:57.881 "name": "raid_bdev1", 00:30:57.881 "uuid": "3734a80c-cbf8-4d76-a910-8251a23585b8", 00:30:57.881 "strip_size_kb": 0, 00:30:57.881 "state": "online", 00:30:57.881 "raid_level": "raid1", 00:30:57.881 "superblock": true, 00:30:57.881 "num_base_bdevs": 2, 00:30:57.881 "num_base_bdevs_discovered": 2, 00:30:57.881 "num_base_bdevs_operational": 2, 00:30:57.881 "process": { 00:30:57.881 "type": "rebuild", 00:30:57.881 "target": "spare", 00:30:57.881 "progress": { 00:30:57.881 "blocks": 3584, 00:30:57.881 "percent": 45 00:30:57.881 } 00:30:57.881 }, 00:30:57.881 "base_bdevs_list": [ 00:30:57.881 { 00:30:57.881 "name": "spare", 00:30:57.881 "uuid": "3e5afb20-a5bf-51b6-91b9-6c8c765215d1", 00:30:57.881 "is_configured": true, 00:30:57.881 "data_offset": 256, 00:30:57.881 "data_size": 7936 00:30:57.881 }, 00:30:57.881 { 00:30:57.881 "name": "BaseBdev2", 00:30:57.881 "uuid": "dc56e80d-5e0f-512e-a40f-bb14b3c8a12c", 00:30:57.881 "is_configured": true, 00:30:57.881 "data_offset": 256, 00:30:57.881 "data_size": 7936 00:30:57.881 } 00:30:57.881 ] 00:30:57.881 }' 00:30:57.881 02:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:57.881 02:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:57.881 02:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:57.881 02:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:57.881 02:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:58.140 [2024-07-11 02:36:48.402700] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:58.140 [2024-07-11 02:36:48.468623] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:58.140 [2024-07-11 02:36:48.468673] 
bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:58.140 [2024-07-11 02:36:48.468689] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:58.140 [2024-07-11 02:36:48.468698] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:58.140 02:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:58.140 02:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:58.140 02:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:58.140 02:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:58.140 02:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:58.140 02:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:58.140 02:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:58.140 02:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:58.140 02:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:58.140 02:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:58.140 02:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:58.140 02:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:58.398 02:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:58.398 "name": "raid_bdev1", 00:30:58.398 "uuid": "3734a80c-cbf8-4d76-a910-8251a23585b8", 00:30:58.398 "strip_size_kb": 0, 00:30:58.398 "state": "online", 00:30:58.398 "raid_level": "raid1", 00:30:58.398 "superblock": true, 00:30:58.398 "num_base_bdevs": 2, 00:30:58.398 "num_base_bdevs_discovered": 1, 00:30:58.398 "num_base_bdevs_operational": 1, 00:30:58.398 "base_bdevs_list": [ 00:30:58.398 { 00:30:58.398 "name": null, 00:30:58.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:58.398 "is_configured": false, 00:30:58.398 "data_offset": 256, 00:30:58.398 "data_size": 7936 00:30:58.398 }, 00:30:58.398 { 00:30:58.398 "name": "BaseBdev2", 00:30:58.398 "uuid": "dc56e80d-5e0f-512e-a40f-bb14b3c8a12c", 00:30:58.398 "is_configured": true, 00:30:58.398 "data_offset": 256, 00:30:58.398 "data_size": 7936 00:30:58.398 } 00:30:58.398 ] 00:30:58.398 }' 00:30:58.398 02:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:58.398 02:36:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:58.963 02:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:58.963 02:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:58.963 02:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:58.963 02:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:58.963 02:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:58.963 02:36:49 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:58.963 02:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:59.221 02:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:59.221 "name": "raid_bdev1", 00:30:59.221 "uuid": "3734a80c-cbf8-4d76-a910-8251a23585b8", 00:30:59.221 "strip_size_kb": 0, 00:30:59.221 "state": "online", 00:30:59.221 "raid_level": "raid1", 00:30:59.221 "superblock": true, 00:30:59.221 "num_base_bdevs": 2, 00:30:59.221 "num_base_bdevs_discovered": 1, 00:30:59.221 "num_base_bdevs_operational": 1, 00:30:59.221 "base_bdevs_list": [ 00:30:59.221 { 00:30:59.221 "name": null, 00:30:59.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:59.221 "is_configured": false, 00:30:59.221 "data_offset": 256, 00:30:59.221 "data_size": 7936 00:30:59.221 }, 00:30:59.221 { 00:30:59.221 "name": "BaseBdev2", 00:30:59.221 "uuid": "dc56e80d-5e0f-512e-a40f-bb14b3c8a12c", 00:30:59.221 "is_configured": true, 00:30:59.221 "data_offset": 256, 00:30:59.221 "data_size": 7936 00:30:59.221 } 00:30:59.221 ] 00:30:59.221 }' 00:30:59.221 02:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:59.479 02:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:59.479 02:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:59.479 02:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:59.479 02:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:30:59.738 02:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:59.996 [2024-07-11 02:36:50.178225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:59.996 [2024-07-11 02:36:50.178277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:59.996 [2024-07-11 02:36:50.178297] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12ac220 00:30:59.996 [2024-07-11 02:36:50.178309] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:59.996 [2024-07-11 02:36:50.178650] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:59.996 [2024-07-11 02:36:50.178668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:59.996 [2024-07-11 02:36:50.178735] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:30:59.996 [2024-07-11 02:36:50.178747] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:30:59.996 [2024-07-11 02:36:50.178767] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:59.996 BaseBdev1 00:30:59.996 02:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # sleep 1 00:31:00.927 02:36:51 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:00.927 02:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:00.927 02:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:00.927 02:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:00.927 02:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:00.927 02:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:00.927 02:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:00.927 02:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:00.927 02:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:00.927 02:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:00.927 02:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:00.927 02:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:01.186 02:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:01.186 "name": "raid_bdev1", 00:31:01.186 "uuid": "3734a80c-cbf8-4d76-a910-8251a23585b8", 00:31:01.186 "strip_size_kb": 0, 00:31:01.186 "state": "online", 00:31:01.186 "raid_level": "raid1", 00:31:01.186 "superblock": true, 00:31:01.186 "num_base_bdevs": 2, 00:31:01.186 "num_base_bdevs_discovered": 1, 00:31:01.186 "num_base_bdevs_operational": 1, 00:31:01.186 "base_bdevs_list": [ 00:31:01.186 { 00:31:01.186 "name": null, 00:31:01.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:01.186 "is_configured": false, 00:31:01.186 "data_offset": 256, 00:31:01.186 "data_size": 7936 00:31:01.186 }, 00:31:01.186 { 00:31:01.186 "name": "BaseBdev2", 00:31:01.186 "uuid": "dc56e80d-5e0f-512e-a40f-bb14b3c8a12c", 00:31:01.187 "is_configured": true, 00:31:01.187 "data_offset": 256, 00:31:01.187 "data_size": 7936 00:31:01.187 } 00:31:01.187 ] 00:31:01.187 }' 00:31:01.187 02:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:01.187 02:36:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:01.754 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:01.754 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:01.754 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:01.754 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:01.754 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:01.754 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:01.754 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:02.014 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 
-- # raid_bdev_info='{ 00:31:02.014 "name": "raid_bdev1", 00:31:02.014 "uuid": "3734a80c-cbf8-4d76-a910-8251a23585b8", 00:31:02.014 "strip_size_kb": 0, 00:31:02.014 "state": "online", 00:31:02.014 "raid_level": "raid1", 00:31:02.014 "superblock": true, 00:31:02.014 "num_base_bdevs": 2, 00:31:02.014 "num_base_bdevs_discovered": 1, 00:31:02.014 "num_base_bdevs_operational": 1, 00:31:02.014 "base_bdevs_list": [ 00:31:02.014 { 00:31:02.014 "name": null, 00:31:02.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:02.014 "is_configured": false, 00:31:02.014 "data_offset": 256, 00:31:02.014 "data_size": 7936 00:31:02.014 }, 00:31:02.014 { 00:31:02.014 "name": "BaseBdev2", 00:31:02.014 "uuid": "dc56e80d-5e0f-512e-a40f-bb14b3c8a12c", 00:31:02.014 "is_configured": true, 00:31:02.014 "data_offset": 256, 00:31:02.014 "data_size": 7936 00:31:02.014 } 00:31:02.014 ] 00:31:02.014 }' 00:31:02.014 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:02.014 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:02.014 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:02.014 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:02.014 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:02.014 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@648 -- # local es=0 00:31:02.014 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:02.014 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:02.014 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:02.014 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:02.014 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:02.014 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:02.014 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:02.014 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:02.014 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:31:02.014 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:02.272 [2024-07-11 02:36:52.648795] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:02.273 [2024-07-11 02:36:52.648919] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: 
raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:31:02.273 [2024-07-11 02:36:52.648935] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:02.273 request: 00:31:02.273 { 00:31:02.273 "base_bdev": "BaseBdev1", 00:31:02.273 "raid_bdev": "raid_bdev1", 00:31:02.273 "method": "bdev_raid_add_base_bdev", 00:31:02.273 "req_id": 1 00:31:02.273 } 00:31:02.273 Got JSON-RPC error response 00:31:02.273 response: 00:31:02.273 { 00:31:02.273 "code": -22, 00:31:02.273 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:31:02.273 } 00:31:02.273 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # es=1 00:31:02.273 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:02.273 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:02.273 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:02.273 02:36:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # sleep 1 00:31:03.663 02:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:03.663 02:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:03.663 02:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:03.663 02:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:03.663 02:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:03.663 02:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:03.663 02:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:03.663 02:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:03.663 02:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:03.663 02:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:03.663 02:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:03.663 02:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:03.663 02:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:03.663 "name": "raid_bdev1", 00:31:03.663 "uuid": "3734a80c-cbf8-4d76-a910-8251a23585b8", 00:31:03.663 "strip_size_kb": 0, 00:31:03.663 "state": "online", 00:31:03.663 "raid_level": "raid1", 00:31:03.663 "superblock": true, 00:31:03.663 "num_base_bdevs": 2, 00:31:03.663 "num_base_bdevs_discovered": 1, 00:31:03.663 "num_base_bdevs_operational": 1, 00:31:03.663 "base_bdevs_list": [ 00:31:03.663 { 00:31:03.663 "name": null, 00:31:03.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:03.663 "is_configured": false, 00:31:03.663 "data_offset": 256, 00:31:03.663 "data_size": 7936 00:31:03.663 }, 00:31:03.663 { 00:31:03.663 "name": "BaseBdev2", 00:31:03.663 "uuid": "dc56e80d-5e0f-512e-a40f-bb14b3c8a12c", 00:31:03.663 "is_configured": true, 00:31:03.663 "data_offset": 256, 00:31:03.663 "data_size": 7936 
00:31:03.663 } 00:31:03.663 ] 00:31:03.663 }' 00:31:03.663 02:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:03.663 02:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:04.229 02:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:04.229 02:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:04.229 02:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:04.229 02:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:04.229 02:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:04.229 02:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:04.229 02:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:04.487 02:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:04.488 "name": "raid_bdev1", 00:31:04.488 "uuid": "3734a80c-cbf8-4d76-a910-8251a23585b8", 00:31:04.488 "strip_size_kb": 0, 00:31:04.488 "state": "online", 00:31:04.488 "raid_level": "raid1", 00:31:04.488 "superblock": true, 00:31:04.488 "num_base_bdevs": 2, 00:31:04.488 "num_base_bdevs_discovered": 1, 00:31:04.488 "num_base_bdevs_operational": 1, 00:31:04.488 "base_bdevs_list": [ 00:31:04.488 { 00:31:04.488 "name": null, 00:31:04.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:04.488 "is_configured": false, 00:31:04.488 "data_offset": 256, 00:31:04.488 "data_size": 7936 00:31:04.488 }, 00:31:04.488 { 00:31:04.488 "name": "BaseBdev2", 00:31:04.488 "uuid": "dc56e80d-5e0f-512e-a40f-bb14b3c8a12c", 00:31:04.488 "is_configured": true, 00:31:04.488 "data_offset": 256, 00:31:04.488 "data_size": 7936 00:31:04.488 } 00:31:04.488 ] 00:31:04.488 }' 00:31:04.488 02:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:04.488 02:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:04.488 02:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:04.488 02:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:04.488 02:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # killprocess 2044157 00:31:04.488 02:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 2044157 ']' 00:31:04.488 02:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 2044157 00:31:04.488 02:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:31:04.488 02:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:04.488 02:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2044157 00:31:04.746 02:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:04.746 02:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:04.746 02:36:54 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 2044157' 00:31:04.746 killing process with pid 2044157 00:31:04.746 02:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@967 -- # kill 2044157 00:31:04.746 Received shutdown signal, test time was about 60.000000 seconds 00:31:04.746 00:31:04.746 Latency(us) 00:31:04.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:04.746 =================================================================================================================== 00:31:04.746 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:04.746 [2024-07-11 02:36:54.944853] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:04.746 [2024-07-11 02:36:54.944949] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:04.746 [2024-07-11 02:36:54.944995] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:04.746 [2024-07-11 02:36:54.945006] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12a24d0 name raid_bdev1, state offline 00:31:04.746 02:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # wait 2044157 00:31:04.746 [2024-07-11 02:36:54.971692] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:04.746 02:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # return 0 00:31:04.746 00:31:04.746 real 0m32.428s 00:31:04.746 user 0m50.595s 00:31:04.746 sys 0m5.525s 00:31:04.746 02:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:04.746 02:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:04.746 ************************************ 00:31:04.746 END TEST raid_rebuild_test_sb_4k 00:31:04.746 ************************************ 00:31:05.004 02:36:55 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:31:05.004 02:36:55 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:31:05.004 02:36:55 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:31:05.004 02:36:55 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:31:05.004 02:36:55 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:05.005 02:36:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:05.005 ************************************ 00:31:05.005 START TEST raid_state_function_test_sb_md_separate 00:31:05.005 ************************************ 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:05.005 02:36:55 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=2048657 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2048657' 00:31:05.005 Process raid pid: 2048657 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 2048657 /var/tmp/spdk-raid.sock 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 2048657 ']' 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:05.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
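Each state-function test boots its own bare bdev_svc app on a private RPC socket before any raid RPCs are issued; waitforlisten then blocks until that socket answers. A hedged sketch of the launch just traced (flags copied from the trace; the polling loop is an illustrative stand-in for the harness's waitforlisten helper, not its actual implementation):

# Host the bdev RPC surface in a minimal SPDK app, logging the bdev_raid component.
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc \
    -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
# Poll until the UNIX-domain socket accepts RPCs (stand-in for waitforlisten).
until /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done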
00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:05.005 02:36:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:05.005 [2024-07-11 02:36:55.316342] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:31:05.005 [2024-07-11 02:36:55.316405] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:05.264 [2024-07-11 02:36:55.454082] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.264 [2024-07-11 02:36:55.503694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.264 [2024-07-11 02:36:55.568284] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:05.264 [2024-07-11 02:36:55.568321] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:05.832 02:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:05.832 02:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:31:05.832 02:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:06.091 [2024-07-11 02:36:56.474250] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:06.091 [2024-07-11 02:36:56.474288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:06.091 [2024-07-11 02:36:56.474299] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:06.091 [2024-07-11 02:36:56.474310] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:06.091 02:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:06.091 02:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:06.091 02:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:06.091 02:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:06.091 02:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:06.091 02:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:06.091 02:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:06.091 02:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:06.091 02:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:06.091 02:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:06.350 02:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:06.350 02:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:06.350 02:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:06.350 "name": "Existed_Raid", 00:31:06.350 "uuid": "f13ef754-d163-4592-8115-960a579e883d", 00:31:06.350 "strip_size_kb": 0, 00:31:06.350 "state": "configuring", 00:31:06.350 "raid_level": "raid1", 00:31:06.350 "superblock": true, 00:31:06.350 "num_base_bdevs": 2, 00:31:06.350 "num_base_bdevs_discovered": 0, 00:31:06.350 "num_base_bdevs_operational": 2, 00:31:06.350 "base_bdevs_list": [ 00:31:06.350 { 00:31:06.350 "name": "BaseBdev1", 00:31:06.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:06.350 "is_configured": false, 00:31:06.350 "data_offset": 0, 00:31:06.350 "data_size": 0 00:31:06.350 }, 00:31:06.350 { 00:31:06.350 "name": "BaseBdev2", 00:31:06.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:06.350 "is_configured": false, 00:31:06.350 "data_offset": 0, 00:31:06.350 "data_size": 0 00:31:06.350 } 00:31:06.350 ] 00:31:06.350 }' 00:31:06.350 02:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:06.350 02:36:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:07.287 02:36:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:07.287 [2024-07-11 02:36:57.589058] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:07.287 [2024-07-11 02:36:57.589089] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf18710 name Existed_Raid, state configuring 00:31:07.287 02:36:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:07.546 [2024-07-11 02:36:57.825697] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:07.546 [2024-07-11 02:36:57.825725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:07.546 [2024-07-11 02:36:57.825734] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:07.546 [2024-07-11 02:36:57.825746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:07.546 02:36:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:31:07.806 [2024-07-11 02:36:58.076812] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:07.806 BaseBdev1 00:31:07.806 02:36:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:31:07.806 02:36:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:31:07.806 02:36:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:07.806 
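The md_separate variant builds its base bdevs as malloc disks carrying 32 bytes of separate (non-interleaved) metadata per 4096-byte block, which is what the base_malloc_params='-m 32' set before this test injects. A sketch of that creation plus a sanity check, assuming the same rpc.py CLI the trace uses (the jq projection is illustrative):

# 32 MiB of 4096 B blocks -> 8192 blocks, each with 32 B of separate metadata.
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_malloc_create 32 4096 -m 32 -b BaseBdev1
# The new bdev should report md_size 32 with md_interleave false, matching the dump below.
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_get_bdevs -b BaseBdev1 |
    jq '.[0] | {block_size, num_blocks, md_size, md_interleave}'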
02:36:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:31:07.806 02:36:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:07.806 02:36:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:07.806 02:36:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:08.066 02:36:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:08.326 [ 00:31:08.326 { 00:31:08.326 "name": "BaseBdev1", 00:31:08.326 "aliases": [ 00:31:08.326 "32a3825d-fa72-4353-ac7c-2a9f92f8f0a8" 00:31:08.326 ], 00:31:08.326 "product_name": "Malloc disk", 00:31:08.326 "block_size": 4096, 00:31:08.326 "num_blocks": 8192, 00:31:08.326 "uuid": "32a3825d-fa72-4353-ac7c-2a9f92f8f0a8", 00:31:08.326 "md_size": 32, 00:31:08.326 "md_interleave": false, 00:31:08.326 "dif_type": 0, 00:31:08.326 "assigned_rate_limits": { 00:31:08.326 "rw_ios_per_sec": 0, 00:31:08.326 "rw_mbytes_per_sec": 0, 00:31:08.326 "r_mbytes_per_sec": 0, 00:31:08.326 "w_mbytes_per_sec": 0 00:31:08.326 }, 00:31:08.326 "claimed": true, 00:31:08.326 "claim_type": "exclusive_write", 00:31:08.326 "zoned": false, 00:31:08.326 "supported_io_types": { 00:31:08.326 "read": true, 00:31:08.326 "write": true, 00:31:08.326 "unmap": true, 00:31:08.326 "flush": true, 00:31:08.326 "reset": true, 00:31:08.326 "nvme_admin": false, 00:31:08.326 "nvme_io": false, 00:31:08.326 "nvme_io_md": false, 00:31:08.326 "write_zeroes": true, 00:31:08.326 "zcopy": true, 00:31:08.326 "get_zone_info": false, 00:31:08.326 "zone_management": false, 00:31:08.326 "zone_append": false, 00:31:08.326 "compare": false, 00:31:08.326 "compare_and_write": false, 00:31:08.326 "abort": true, 00:31:08.326 "seek_hole": false, 00:31:08.326 "seek_data": false, 00:31:08.326 "copy": true, 00:31:08.326 "nvme_iov_md": false 00:31:08.326 }, 00:31:08.326 "memory_domains": [ 00:31:08.326 { 00:31:08.326 "dma_device_id": "system", 00:31:08.326 "dma_device_type": 1 00:31:08.326 }, 00:31:08.326 { 00:31:08.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:08.326 "dma_device_type": 2 00:31:08.326 } 00:31:08.326 ], 00:31:08.326 "driver_specific": {} 00:31:08.326 } 00:31:08.326 ] 00:31:08.326 02:36:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:31:08.326 02:36:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:08.326 02:36:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:08.326 02:36:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:08.326 02:36:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:08.326 02:36:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:08.326 02:36:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:08.326 02:36:58 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:08.326 02:36:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:08.326 02:36:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:08.326 02:36:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:08.326 02:36:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:08.326 02:36:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:08.585 02:36:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:08.585 "name": "Existed_Raid", 00:31:08.585 "uuid": "bad96c80-0d57-445d-8f3f-40aa5a9fb1b1", 00:31:08.585 "strip_size_kb": 0, 00:31:08.585 "state": "configuring", 00:31:08.585 "raid_level": "raid1", 00:31:08.585 "superblock": true, 00:31:08.585 "num_base_bdevs": 2, 00:31:08.585 "num_base_bdevs_discovered": 1, 00:31:08.585 "num_base_bdevs_operational": 2, 00:31:08.585 "base_bdevs_list": [ 00:31:08.585 { 00:31:08.585 "name": "BaseBdev1", 00:31:08.585 "uuid": "32a3825d-fa72-4353-ac7c-2a9f92f8f0a8", 00:31:08.585 "is_configured": true, 00:31:08.585 "data_offset": 256, 00:31:08.585 "data_size": 7936 00:31:08.585 }, 00:31:08.585 { 00:31:08.585 "name": "BaseBdev2", 00:31:08.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:08.585 "is_configured": false, 00:31:08.585 "data_offset": 0, 00:31:08.585 "data_size": 0 00:31:08.585 } 00:31:08.585 ] 00:31:08.585 }' 00:31:08.585 02:36:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:08.585 02:36:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:09.153 02:36:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:09.412 [2024-07-11 02:36:59.665033] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:09.412 [2024-07-11 02:36:59.665074] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf18040 name Existed_Raid, state configuring 00:31:09.412 02:36:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:09.671 [2024-07-11 02:36:59.909715] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:09.671 [2024-07-11 02:36:59.911095] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:09.671 [2024-07-11 02:36:59.911126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:09.671 02:36:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:31:09.671 02:36:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:09.671 02:36:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:09.671 02:36:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:09.671 02:36:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:09.671 02:36:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:09.671 02:36:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:09.671 02:36:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:09.671 02:36:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:09.671 02:36:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:09.671 02:36:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:09.671 02:36:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:09.671 02:36:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:09.671 02:36:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:09.930 02:37:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:09.930 "name": "Existed_Raid", 00:31:09.930 "uuid": "3b71247e-ba6c-403e-b5e9-f06020ac87ac", 00:31:09.930 "strip_size_kb": 0, 00:31:09.930 "state": "configuring", 00:31:09.930 "raid_level": "raid1", 00:31:09.930 "superblock": true, 00:31:09.930 "num_base_bdevs": 2, 00:31:09.930 "num_base_bdevs_discovered": 1, 00:31:09.930 "num_base_bdevs_operational": 2, 00:31:09.930 "base_bdevs_list": [ 00:31:09.930 { 00:31:09.930 "name": "BaseBdev1", 00:31:09.930 "uuid": "32a3825d-fa72-4353-ac7c-2a9f92f8f0a8", 00:31:09.930 "is_configured": true, 00:31:09.930 "data_offset": 256, 00:31:09.930 "data_size": 7936 00:31:09.930 }, 00:31:09.930 { 00:31:09.930 "name": "BaseBdev2", 00:31:09.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:09.930 "is_configured": false, 00:31:09.930 "data_offset": 0, 00:31:09.930 "data_size": 0 00:31:09.930 } 00:31:09.930 ] 00:31:09.930 }' 00:31:09.930 02:37:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:09.930 02:37:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:10.498 02:37:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:31:10.757 [2024-07-11 02:37:01.028743] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:10.757 [2024-07-11 02:37:01.028895] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x10b4170 00:31:10.757 [2024-07-11 02:37:01.028910] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:10.757 [2024-07-11 02:37:01.028972] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10b4720 00:31:10.757 [2024-07-11 
02:37:01.029068] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10b4170 00:31:10.757 [2024-07-11 02:37:01.029078] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x10b4170 00:31:10.757 [2024-07-11 02:37:01.029143] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:10.757 BaseBdev2 00:31:10.757 02:37:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:31:10.757 02:37:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:31:10.757 02:37:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:10.757 02:37:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:31:10.757 02:37:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:10.757 02:37:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:10.757 02:37:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:11.015 02:37:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:11.289 [ 00:31:11.289 { 00:31:11.289 "name": "BaseBdev2", 00:31:11.289 "aliases": [ 00:31:11.289 "d00ed681-016b-47c5-a2c2-a84ee6c8f134" 00:31:11.289 ], 00:31:11.289 "product_name": "Malloc disk", 00:31:11.289 "block_size": 4096, 00:31:11.289 "num_blocks": 8192, 00:31:11.289 "uuid": "d00ed681-016b-47c5-a2c2-a84ee6c8f134", 00:31:11.289 "md_size": 32, 00:31:11.289 "md_interleave": false, 00:31:11.289 "dif_type": 0, 00:31:11.289 "assigned_rate_limits": { 00:31:11.289 "rw_ios_per_sec": 0, 00:31:11.289 "rw_mbytes_per_sec": 0, 00:31:11.289 "r_mbytes_per_sec": 0, 00:31:11.289 "w_mbytes_per_sec": 0 00:31:11.289 }, 00:31:11.289 "claimed": true, 00:31:11.289 "claim_type": "exclusive_write", 00:31:11.289 "zoned": false, 00:31:11.289 "supported_io_types": { 00:31:11.289 "read": true, 00:31:11.289 "write": true, 00:31:11.289 "unmap": true, 00:31:11.289 "flush": true, 00:31:11.289 "reset": true, 00:31:11.289 "nvme_admin": false, 00:31:11.289 "nvme_io": false, 00:31:11.289 "nvme_io_md": false, 00:31:11.289 "write_zeroes": true, 00:31:11.289 "zcopy": true, 00:31:11.289 "get_zone_info": false, 00:31:11.289 "zone_management": false, 00:31:11.289 "zone_append": false, 00:31:11.289 "compare": false, 00:31:11.289 "compare_and_write": false, 00:31:11.289 "abort": true, 00:31:11.289 "seek_hole": false, 00:31:11.289 "seek_data": false, 00:31:11.289 "copy": true, 00:31:11.289 "nvme_iov_md": false 00:31:11.289 }, 00:31:11.289 "memory_domains": [ 00:31:11.289 { 00:31:11.289 "dma_device_id": "system", 00:31:11.289 "dma_device_type": 1 00:31:11.289 }, 00:31:11.289 { 00:31:11.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:11.289 "dma_device_type": 2 00:31:11.289 } 00:31:11.289 ], 00:31:11.289 "driver_specific": {} 00:31:11.289 } 00:31:11.289 ] 00:31:11.289 02:37:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:31:11.289 02:37:01 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@265 -- # (( i++ )) 00:31:11.289 02:37:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:11.289 02:37:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:31:11.289 02:37:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:11.289 02:37:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:11.289 02:37:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:11.289 02:37:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:11.289 02:37:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:11.289 02:37:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:11.289 02:37:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:11.289 02:37:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:11.289 02:37:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:11.289 02:37:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:11.289 02:37:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:11.587 02:37:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:11.587 "name": "Existed_Raid", 00:31:11.587 "uuid": "3b71247e-ba6c-403e-b5e9-f06020ac87ac", 00:31:11.587 "strip_size_kb": 0, 00:31:11.587 "state": "online", 00:31:11.587 "raid_level": "raid1", 00:31:11.587 "superblock": true, 00:31:11.587 "num_base_bdevs": 2, 00:31:11.587 "num_base_bdevs_discovered": 2, 00:31:11.587 "num_base_bdevs_operational": 2, 00:31:11.587 "base_bdevs_list": [ 00:31:11.587 { 00:31:11.587 "name": "BaseBdev1", 00:31:11.587 "uuid": "32a3825d-fa72-4353-ac7c-2a9f92f8f0a8", 00:31:11.587 "is_configured": true, 00:31:11.587 "data_offset": 256, 00:31:11.587 "data_size": 7936 00:31:11.587 }, 00:31:11.587 { 00:31:11.587 "name": "BaseBdev2", 00:31:11.587 "uuid": "d00ed681-016b-47c5-a2c2-a84ee6c8f134", 00:31:11.587 "is_configured": true, 00:31:11.587 "data_offset": 256, 00:31:11.587 "data_size": 7936 00:31:11.587 } 00:31:11.587 ] 00:31:11.587 }' 00:31:11.587 02:37:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:11.587 02:37:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:12.184 02:37:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:31:12.184 02:37:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:31:12.184 02:37:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:12.184 02:37:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # 
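With both base bdevs claimed, Existed_Raid flips from configuring to online, which the @270 verification above asserts and the dump just below confirms. A compact way to watch that transition, reusing the RPC and jq filter from the trace (the output format string is illustrative):

/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_get_bdevs all |
    jq -r '.[] | select(.name == "Existed_Raid")
           | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
# Expected at this point in the test: online 2/2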
local base_bdev_info 00:31:12.184 02:37:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:12.184 02:37:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:31:12.184 02:37:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:31:12.184 02:37:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:12.442 [2024-07-11 02:37:02.633325] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:12.442 02:37:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:12.442 "name": "Existed_Raid", 00:31:12.442 "aliases": [ 00:31:12.442 "3b71247e-ba6c-403e-b5e9-f06020ac87ac" 00:31:12.442 ], 00:31:12.442 "product_name": "Raid Volume", 00:31:12.442 "block_size": 4096, 00:31:12.442 "num_blocks": 7936, 00:31:12.442 "uuid": "3b71247e-ba6c-403e-b5e9-f06020ac87ac", 00:31:12.442 "md_size": 32, 00:31:12.442 "md_interleave": false, 00:31:12.442 "dif_type": 0, 00:31:12.442 "assigned_rate_limits": { 00:31:12.442 "rw_ios_per_sec": 0, 00:31:12.442 "rw_mbytes_per_sec": 0, 00:31:12.442 "r_mbytes_per_sec": 0, 00:31:12.442 "w_mbytes_per_sec": 0 00:31:12.442 }, 00:31:12.442 "claimed": false, 00:31:12.442 "zoned": false, 00:31:12.442 "supported_io_types": { 00:31:12.442 "read": true, 00:31:12.442 "write": true, 00:31:12.442 "unmap": false, 00:31:12.442 "flush": false, 00:31:12.442 "reset": true, 00:31:12.442 "nvme_admin": false, 00:31:12.442 "nvme_io": false, 00:31:12.442 "nvme_io_md": false, 00:31:12.442 "write_zeroes": true, 00:31:12.442 "zcopy": false, 00:31:12.442 "get_zone_info": false, 00:31:12.442 "zone_management": false, 00:31:12.442 "zone_append": false, 00:31:12.442 "compare": false, 00:31:12.442 "compare_and_write": false, 00:31:12.442 "abort": false, 00:31:12.442 "seek_hole": false, 00:31:12.442 "seek_data": false, 00:31:12.442 "copy": false, 00:31:12.442 "nvme_iov_md": false 00:31:12.442 }, 00:31:12.443 "memory_domains": [ 00:31:12.443 { 00:31:12.443 "dma_device_id": "system", 00:31:12.443 "dma_device_type": 1 00:31:12.443 }, 00:31:12.443 { 00:31:12.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:12.443 "dma_device_type": 2 00:31:12.443 }, 00:31:12.443 { 00:31:12.443 "dma_device_id": "system", 00:31:12.443 "dma_device_type": 1 00:31:12.443 }, 00:31:12.443 { 00:31:12.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:12.443 "dma_device_type": 2 00:31:12.443 } 00:31:12.443 ], 00:31:12.443 "driver_specific": { 00:31:12.443 "raid": { 00:31:12.443 "uuid": "3b71247e-ba6c-403e-b5e9-f06020ac87ac", 00:31:12.443 "strip_size_kb": 0, 00:31:12.443 "state": "online", 00:31:12.443 "raid_level": "raid1", 00:31:12.443 "superblock": true, 00:31:12.443 "num_base_bdevs": 2, 00:31:12.443 "num_base_bdevs_discovered": 2, 00:31:12.443 "num_base_bdevs_operational": 2, 00:31:12.443 "base_bdevs_list": [ 00:31:12.443 { 00:31:12.443 "name": "BaseBdev1", 00:31:12.443 "uuid": "32a3825d-fa72-4353-ac7c-2a9f92f8f0a8", 00:31:12.443 "is_configured": true, 00:31:12.443 "data_offset": 256, 00:31:12.443 "data_size": 7936 00:31:12.443 }, 00:31:12.443 { 00:31:12.443 "name": "BaseBdev2", 00:31:12.443 "uuid": "d00ed681-016b-47c5-a2c2-a84ee6c8f134", 00:31:12.443 "is_configured": true, 00:31:12.443 "data_offset": 256, 00:31:12.443 "data_size": 7936 00:31:12.443 } 00:31:12.443 
] 00:31:12.443 } 00:31:12.443 } 00:31:12.443 }' 00:31:12.443 02:37:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:12.443 02:37:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:31:12.443 BaseBdev2' 00:31:12.443 02:37:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:12.443 02:37:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:31:12.443 02:37:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:12.702 02:37:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:12.702 "name": "BaseBdev1", 00:31:12.702 "aliases": [ 00:31:12.702 "32a3825d-fa72-4353-ac7c-2a9f92f8f0a8" 00:31:12.702 ], 00:31:12.702 "product_name": "Malloc disk", 00:31:12.702 "block_size": 4096, 00:31:12.702 "num_blocks": 8192, 00:31:12.702 "uuid": "32a3825d-fa72-4353-ac7c-2a9f92f8f0a8", 00:31:12.702 "md_size": 32, 00:31:12.702 "md_interleave": false, 00:31:12.702 "dif_type": 0, 00:31:12.702 "assigned_rate_limits": { 00:31:12.702 "rw_ios_per_sec": 0, 00:31:12.702 "rw_mbytes_per_sec": 0, 00:31:12.702 "r_mbytes_per_sec": 0, 00:31:12.702 "w_mbytes_per_sec": 0 00:31:12.702 }, 00:31:12.702 "claimed": true, 00:31:12.702 "claim_type": "exclusive_write", 00:31:12.702 "zoned": false, 00:31:12.702 "supported_io_types": { 00:31:12.702 "read": true, 00:31:12.702 "write": true, 00:31:12.702 "unmap": true, 00:31:12.702 "flush": true, 00:31:12.702 "reset": true, 00:31:12.702 "nvme_admin": false, 00:31:12.702 "nvme_io": false, 00:31:12.702 "nvme_io_md": false, 00:31:12.702 "write_zeroes": true, 00:31:12.702 "zcopy": true, 00:31:12.702 "get_zone_info": false, 00:31:12.702 "zone_management": false, 00:31:12.702 "zone_append": false, 00:31:12.702 "compare": false, 00:31:12.702 "compare_and_write": false, 00:31:12.702 "abort": true, 00:31:12.702 "seek_hole": false, 00:31:12.702 "seek_data": false, 00:31:12.702 "copy": true, 00:31:12.702 "nvme_iov_md": false 00:31:12.702 }, 00:31:12.702 "memory_domains": [ 00:31:12.702 { 00:31:12.702 "dma_device_id": "system", 00:31:12.702 "dma_device_type": 1 00:31:12.702 }, 00:31:12.702 { 00:31:12.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:12.702 "dma_device_type": 2 00:31:12.702 } 00:31:12.702 ], 00:31:12.702 "driver_specific": {} 00:31:12.702 }' 00:31:12.702 02:37:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:12.702 02:37:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:12.702 02:37:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:12.702 02:37:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:12.702 02:37:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:12.702 02:37:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:31:12.702 02:37:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:12.962 02:37:03 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:12.962 02:37:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:31:12.962 02:37:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:12.962 02:37:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:12.962 02:37:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:31:12.962 02:37:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:12.962 02:37:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:31:12.962 02:37:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:13.221 02:37:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:13.221 "name": "BaseBdev2", 00:31:13.221 "aliases": [ 00:31:13.221 "d00ed681-016b-47c5-a2c2-a84ee6c8f134" 00:31:13.221 ], 00:31:13.221 "product_name": "Malloc disk", 00:31:13.221 "block_size": 4096, 00:31:13.221 "num_blocks": 8192, 00:31:13.221 "uuid": "d00ed681-016b-47c5-a2c2-a84ee6c8f134", 00:31:13.221 "md_size": 32, 00:31:13.221 "md_interleave": false, 00:31:13.221 "dif_type": 0, 00:31:13.221 "assigned_rate_limits": { 00:31:13.221 "rw_ios_per_sec": 0, 00:31:13.221 "rw_mbytes_per_sec": 0, 00:31:13.221 "r_mbytes_per_sec": 0, 00:31:13.221 "w_mbytes_per_sec": 0 00:31:13.221 }, 00:31:13.221 "claimed": true, 00:31:13.221 "claim_type": "exclusive_write", 00:31:13.221 "zoned": false, 00:31:13.221 "supported_io_types": { 00:31:13.221 "read": true, 00:31:13.221 "write": true, 00:31:13.221 "unmap": true, 00:31:13.221 "flush": true, 00:31:13.221 "reset": true, 00:31:13.221 "nvme_admin": false, 00:31:13.221 "nvme_io": false, 00:31:13.221 "nvme_io_md": false, 00:31:13.221 "write_zeroes": true, 00:31:13.221 "zcopy": true, 00:31:13.221 "get_zone_info": false, 00:31:13.221 "zone_management": false, 00:31:13.221 "zone_append": false, 00:31:13.221 "compare": false, 00:31:13.221 "compare_and_write": false, 00:31:13.221 "abort": true, 00:31:13.221 "seek_hole": false, 00:31:13.221 "seek_data": false, 00:31:13.221 "copy": true, 00:31:13.221 "nvme_iov_md": false 00:31:13.221 }, 00:31:13.221 "memory_domains": [ 00:31:13.221 { 00:31:13.221 "dma_device_id": "system", 00:31:13.221 "dma_device_type": 1 00:31:13.221 }, 00:31:13.221 { 00:31:13.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:13.221 "dma_device_type": 2 00:31:13.221 } 00:31:13.221 ], 00:31:13.221 "driver_specific": {} 00:31:13.221 }' 00:31:13.221 02:37:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:13.221 02:37:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:13.221 02:37:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:13.221 02:37:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:13.480 02:37:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:13.480 02:37:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 
]] 00:31:13.480 02:37:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:13.480 02:37:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:13.480 02:37:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:31:13.480 02:37:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:13.480 02:37:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:13.738 02:37:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:31:13.738 02:37:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:31:13.738 [2024-07-11 02:37:04.133078] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:13.997 02:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:31:13.997 02:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:31:13.997 02:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:31:13.997 02:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:31:13.997 02:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:31:13.997 02:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:31:13.997 02:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:13.997 02:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:13.997 02:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:13.997 02:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:13.997 02:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:13.997 02:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:13.997 02:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:13.997 02:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:13.997 02:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:13.997 02:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:13.997 02:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:13.997 02:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:13.997 "name": "Existed_Raid", 00:31:13.997 "uuid": "3b71247e-ba6c-403e-b5e9-f06020ac87ac", 00:31:13.997 "strip_size_kb": 0, 
00:31:13.997 "state": "online", 00:31:13.997 "raid_level": "raid1", 00:31:13.997 "superblock": true, 00:31:13.997 "num_base_bdevs": 2, 00:31:13.997 "num_base_bdevs_discovered": 1, 00:31:13.997 "num_base_bdevs_operational": 1, 00:31:13.997 "base_bdevs_list": [ 00:31:13.997 { 00:31:13.997 "name": null, 00:31:13.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:13.997 "is_configured": false, 00:31:13.997 "data_offset": 256, 00:31:13.997 "data_size": 7936 00:31:13.997 }, 00:31:13.997 { 00:31:13.997 "name": "BaseBdev2", 00:31:13.997 "uuid": "d00ed681-016b-47c5-a2c2-a84ee6c8f134", 00:31:13.997 "is_configured": true, 00:31:13.997 "data_offset": 256, 00:31:13.997 "data_size": 7936 00:31:13.997 } 00:31:13.997 ] 00:31:13.997 }' 00:31:13.997 02:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:13.997 02:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:14.934 02:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:31:14.934 02:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:14.934 02:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:14.934 02:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:31:14.934 02:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:31:14.934 02:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:14.934 02:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:31:14.934 [2024-07-11 02:37:05.338961] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:14.934 [2024-07-11 02:37:05.339048] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:14.934 [2024-07-11 02:37:05.350582] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:14.934 [2024-07-11 02:37:05.350614] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:14.934 [2024-07-11 02:37:05.350625] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10b4170 name Existed_Raid, state offline 00:31:15.193 02:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:31:15.193 02:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:15.193 02:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:15.193 02:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:31:15.452 02:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:31:15.452 02:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:31:15.452 02:37:05 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:31:15.452 02:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 2048657 00:31:15.452 02:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 2048657 ']' 00:31:15.452 02:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 2048657 00:31:15.452 02:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:31:15.452 02:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:15.452 02:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2048657 00:31:15.452 02:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:15.452 02:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:15.452 02:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2048657' 00:31:15.452 killing process with pid 2048657 00:31:15.452 02:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 2048657 00:31:15.452 [2024-07-11 02:37:05.676436] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:15.452 02:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 2048657 00:31:15.452 [2024-07-11 02:37:05.677315] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:15.712 02:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:31:15.712 00:31:15.712 real 0m10.630s 00:31:15.712 user 0m18.783s 00:31:15.712 sys 0m2.128s 00:31:15.712 02:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:15.712 02:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:15.712 ************************************ 00:31:15.712 END TEST raid_state_function_test_sb_md_separate 00:31:15.712 ************************************ 00:31:15.712 02:37:05 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:31:15.712 02:37:05 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:31:15.712 02:37:05 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:31:15.712 02:37:05 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:15.712 02:37:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:15.712 ************************************ 00:31:15.712 START TEST raid_superblock_test_md_separate 00:31:15.712 ************************************ 00:31:15.712 02:37:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:31:15.712 02:37:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:31:15.712 02:37:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:31:15.712 02:37:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:31:15.712 02:37:05 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:31:15.712 02:37:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:31:15.712 02:37:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:31:15.712 02:37:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:31:15.712 02:37:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:31:15.712 02:37:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:31:15.712 02:37:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:31:15.712 02:37:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:31:15.712 02:37:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:31:15.712 02:37:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:31:15.712 02:37:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:31:15.712 02:37:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:31:15.712 02:37:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=2050284 00:31:15.712 02:37:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 2050284 /var/tmp/spdk-raid.sock 00:31:15.712 02:37:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:31:15.712 02:37:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@829 -- # '[' -z 2050284 ']' 00:31:15.712 02:37:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:15.712 02:37:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:15.712 02:37:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:15.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:15.712 02:37:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:15.712 02:37:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:15.712 [2024-07-11 02:37:06.029942] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
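The startup visible here follows SPDK's stock raid-test harness: a bare bdev_svc app is launched on a private RPC socket, then the base devices are built over it. Condensed as a hand-runnable sketch of the setup sequence that follows in this trace (the rpc() helper and the sleep stand-in for waitforlisten are illustrative, not part of the test):

    SPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk   # checkout path, as used throughout this log
    SOCK=/var/tmp/spdk-raid.sock                               # private RPC socket from the trace
    rpc() { "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" "$@"; }      # illustrative shorthand

    # Bare bdev service with raid debug logging enabled (-L bdev_raid).
    "$SPDK_DIR/test/app/bdev_svc/bdev_svc" -r "$SOCK" -L bdev_raid &
    sleep 1   # the harness instead polls the socket via waitforlisten

    # Two 32 MiB malloc bdevs with 4096-byte blocks and 32 bytes of separate
    # metadata per block (-m 32), each wrapped in a passthru bdev so the raid
    # module has a claimable base bdev with a fixed UUID.
    rpc bdev_malloc_create 32 4096 -m 32 -b malloc1
    rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    rpc bdev_malloc_create 32 4096 -m 32 -b malloc2
    rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

    # RAID1 over the two passthru bdevs, with an on-disk superblock (-s).
    rpc bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s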
00:31:15.712 [2024-07-11 02:37:06.030019] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2050284 ] 00:31:15.972 [2024-07-11 02:37:06.167443] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:15.972 [2024-07-11 02:37:06.216643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.972 [2024-07-11 02:37:06.278019] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:15.972 [2024-07-11 02:37:06.278044] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:16.540 02:37:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:16.540 02:37:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # return 0 00:31:16.540 02:37:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:31:16.540 02:37:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:31:16.540 02:37:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:31:16.540 02:37:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:31:16.540 02:37:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:31:16.540 02:37:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:16.540 02:37:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:31:16.540 02:37:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:16.540 02:37:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:31:17.108 malloc1 00:31:17.108 02:37:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:17.675 [2024-07-11 02:37:07.948630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:17.675 [2024-07-11 02:37:07.948688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:17.675 [2024-07-11 02:37:07.948712] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ebd230 00:31:17.675 [2024-07-11 02:37:07.948725] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:17.675 [2024-07-11 02:37:07.950261] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:17.675 [2024-07-11 02:37:07.950292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:17.675 pt1 00:31:17.675 02:37:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:31:17.675 02:37:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:31:17.675 02:37:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local 
bdev_malloc=malloc2 00:31:17.675 02:37:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:31:17.675 02:37:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:31:17.675 02:37:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:17.675 02:37:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:31:17.675 02:37:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:17.675 02:37:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:31:18.241 malloc2 00:31:18.241 02:37:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:18.807 [2024-07-11 02:37:08.985821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:18.807 [2024-07-11 02:37:08.985871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:18.807 [2024-07-11 02:37:08.985890] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e77780 00:31:18.807 [2024-07-11 02:37:08.985902] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:18.807 [2024-07-11 02:37:08.987263] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:18.807 [2024-07-11 02:37:08.987291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:18.807 pt2 00:31:18.807 02:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:31:18.807 02:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:31:18.807 02:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:31:19.373 [2024-07-11 02:37:09.503190] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:19.373 [2024-07-11 02:37:09.504538] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:19.373 [2024-07-11 02:37:09.504693] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1e57700 00:31:19.373 [2024-07-11 02:37:09.504708] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:19.373 [2024-07-11 02:37:09.504800] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d1fc20 00:31:19.373 [2024-07-11 02:37:09.504917] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1e57700 00:31:19.373 [2024-07-11 02:37:09.504927] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1e57700 00:31:19.373 [2024-07-11 02:37:09.505005] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:19.373 02:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:19.373 02:37:09 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:19.373 02:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:19.373 02:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:19.373 02:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:19.373 02:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:19.373 02:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:19.373 02:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:19.373 02:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:19.373 02:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:19.373 02:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:19.373 02:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:19.373 02:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:19.373 "name": "raid_bdev1", 00:31:19.373 "uuid": "aad22d5e-446d-4aa1-acfc-702783419334", 00:31:19.373 "strip_size_kb": 0, 00:31:19.373 "state": "online", 00:31:19.373 "raid_level": "raid1", 00:31:19.373 "superblock": true, 00:31:19.373 "num_base_bdevs": 2, 00:31:19.373 "num_base_bdevs_discovered": 2, 00:31:19.373 "num_base_bdevs_operational": 2, 00:31:19.373 "base_bdevs_list": [ 00:31:19.373 { 00:31:19.373 "name": "pt1", 00:31:19.373 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:19.373 "is_configured": true, 00:31:19.373 "data_offset": 256, 00:31:19.373 "data_size": 7936 00:31:19.373 }, 00:31:19.373 { 00:31:19.373 "name": "pt2", 00:31:19.373 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:19.373 "is_configured": true, 00:31:19.373 "data_offset": 256, 00:31:19.373 "data_size": 7936 00:31:19.373 } 00:31:19.373 ] 00:31:19.373 }' 00:31:19.373 02:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:19.373 02:37:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:20.096 02:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:31:20.096 02:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:31:20.096 02:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:20.096 02:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:20.096 02:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:20.096 02:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:31:20.096 02:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:20.096 02:37:10 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:20.354 [2024-07-11 02:37:10.610366] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:20.354 02:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:20.354 "name": "raid_bdev1", 00:31:20.354 "aliases": [ 00:31:20.354 "aad22d5e-446d-4aa1-acfc-702783419334" 00:31:20.354 ], 00:31:20.354 "product_name": "Raid Volume", 00:31:20.354 "block_size": 4096, 00:31:20.354 "num_blocks": 7936, 00:31:20.354 "uuid": "aad22d5e-446d-4aa1-acfc-702783419334", 00:31:20.354 "md_size": 32, 00:31:20.354 "md_interleave": false, 00:31:20.354 "dif_type": 0, 00:31:20.354 "assigned_rate_limits": { 00:31:20.354 "rw_ios_per_sec": 0, 00:31:20.354 "rw_mbytes_per_sec": 0, 00:31:20.354 "r_mbytes_per_sec": 0, 00:31:20.354 "w_mbytes_per_sec": 0 00:31:20.354 }, 00:31:20.354 "claimed": false, 00:31:20.354 "zoned": false, 00:31:20.354 "supported_io_types": { 00:31:20.354 "read": true, 00:31:20.354 "write": true, 00:31:20.354 "unmap": false, 00:31:20.354 "flush": false, 00:31:20.354 "reset": true, 00:31:20.354 "nvme_admin": false, 00:31:20.354 "nvme_io": false, 00:31:20.354 "nvme_io_md": false, 00:31:20.354 "write_zeroes": true, 00:31:20.354 "zcopy": false, 00:31:20.354 "get_zone_info": false, 00:31:20.354 "zone_management": false, 00:31:20.354 "zone_append": false, 00:31:20.354 "compare": false, 00:31:20.354 "compare_and_write": false, 00:31:20.354 "abort": false, 00:31:20.354 "seek_hole": false, 00:31:20.354 "seek_data": false, 00:31:20.354 "copy": false, 00:31:20.354 "nvme_iov_md": false 00:31:20.354 }, 00:31:20.354 "memory_domains": [ 00:31:20.354 { 00:31:20.354 "dma_device_id": "system", 00:31:20.354 "dma_device_type": 1 00:31:20.354 }, 00:31:20.354 { 00:31:20.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:20.354 "dma_device_type": 2 00:31:20.354 }, 00:31:20.354 { 00:31:20.354 "dma_device_id": "system", 00:31:20.354 "dma_device_type": 1 00:31:20.354 }, 00:31:20.354 { 00:31:20.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:20.354 "dma_device_type": 2 00:31:20.354 } 00:31:20.354 ], 00:31:20.354 "driver_specific": { 00:31:20.354 "raid": { 00:31:20.354 "uuid": "aad22d5e-446d-4aa1-acfc-702783419334", 00:31:20.354 "strip_size_kb": 0, 00:31:20.354 "state": "online", 00:31:20.354 "raid_level": "raid1", 00:31:20.354 "superblock": true, 00:31:20.354 "num_base_bdevs": 2, 00:31:20.354 "num_base_bdevs_discovered": 2, 00:31:20.354 "num_base_bdevs_operational": 2, 00:31:20.354 "base_bdevs_list": [ 00:31:20.354 { 00:31:20.354 "name": "pt1", 00:31:20.354 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:20.354 "is_configured": true, 00:31:20.354 "data_offset": 256, 00:31:20.354 "data_size": 7936 00:31:20.354 }, 00:31:20.354 { 00:31:20.354 "name": "pt2", 00:31:20.354 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:20.354 "is_configured": true, 00:31:20.354 "data_offset": 256, 00:31:20.354 "data_size": 7936 00:31:20.354 } 00:31:20.354 ] 00:31:20.354 } 00:31:20.354 } 00:31:20.354 }' 00:31:20.354 02:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:20.354 02:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:31:20.354 pt2' 00:31:20.354 02:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:20.354 02:37:10 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:31:20.354 02:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:20.613 02:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:20.613 "name": "pt1", 00:31:20.613 "aliases": [ 00:31:20.613 "00000000-0000-0000-0000-000000000001" 00:31:20.613 ], 00:31:20.613 "product_name": "passthru", 00:31:20.613 "block_size": 4096, 00:31:20.613 "num_blocks": 8192, 00:31:20.613 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:20.613 "md_size": 32, 00:31:20.613 "md_interleave": false, 00:31:20.613 "dif_type": 0, 00:31:20.613 "assigned_rate_limits": { 00:31:20.613 "rw_ios_per_sec": 0, 00:31:20.613 "rw_mbytes_per_sec": 0, 00:31:20.613 "r_mbytes_per_sec": 0, 00:31:20.613 "w_mbytes_per_sec": 0 00:31:20.613 }, 00:31:20.613 "claimed": true, 00:31:20.613 "claim_type": "exclusive_write", 00:31:20.613 "zoned": false, 00:31:20.613 "supported_io_types": { 00:31:20.613 "read": true, 00:31:20.613 "write": true, 00:31:20.613 "unmap": true, 00:31:20.613 "flush": true, 00:31:20.613 "reset": true, 00:31:20.613 "nvme_admin": false, 00:31:20.613 "nvme_io": false, 00:31:20.613 "nvme_io_md": false, 00:31:20.613 "write_zeroes": true, 00:31:20.613 "zcopy": true, 00:31:20.613 "get_zone_info": false, 00:31:20.613 "zone_management": false, 00:31:20.613 "zone_append": false, 00:31:20.613 "compare": false, 00:31:20.613 "compare_and_write": false, 00:31:20.613 "abort": true, 00:31:20.613 "seek_hole": false, 00:31:20.613 "seek_data": false, 00:31:20.613 "copy": true, 00:31:20.613 "nvme_iov_md": false 00:31:20.613 }, 00:31:20.613 "memory_domains": [ 00:31:20.613 { 00:31:20.613 "dma_device_id": "system", 00:31:20.613 "dma_device_type": 1 00:31:20.613 }, 00:31:20.613 { 00:31:20.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:20.613 "dma_device_type": 2 00:31:20.613 } 00:31:20.613 ], 00:31:20.613 "driver_specific": { 00:31:20.613 "passthru": { 00:31:20.613 "name": "pt1", 00:31:20.613 "base_bdev_name": "malloc1" 00:31:20.613 } 00:31:20.613 } 00:31:20.613 }' 00:31:20.613 02:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:20.613 02:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:20.613 02:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:20.613 02:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:20.872 02:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:20.872 02:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:31:20.872 02:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:20.872 02:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:20.872 02:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:31:20.872 02:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:20.872 02:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:20.872 02:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 
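The four [[ … ]] checks just completed for pt1 are bdev_raid.sh's per-base-bdev property verification; the same loop now repeats for pt2. Reduced to its essentials, the pattern looks roughly like this (expected values are exactly the ones this trace asserts; SPDK_DIR as in the sketch above):

    # For each configured base bdev, pull its descriptor once and assert the
    # md_separate geometry: 4 KiB blocks, 32-byte separate metadata, no DIF.
    for name in pt1 pt2; do
      info=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b "$name" | jq '.[]')
      [[ $(jq .block_size    <<<"$info") == 4096  ]]   # data block size
      [[ $(jq .md_size       <<<"$info") == 32    ]]   # metadata bytes per block
      [[ $(jq .md_interleave <<<"$info") == false ]]   # metadata kept separate, not interleaved
      [[ $(jq .dif_type      <<<"$info") == 0     ]]   # no DIF protection
    done
    # Under the harness's errexit shell, any mismatched comparison aborts the test.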
00:31:20.872 02:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:20.872 02:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:31:20.872 02:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:21.130 02:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:21.130 "name": "pt2", 00:31:21.130 "aliases": [ 00:31:21.130 "00000000-0000-0000-0000-000000000002" 00:31:21.130 ], 00:31:21.130 "product_name": "passthru", 00:31:21.130 "block_size": 4096, 00:31:21.130 "num_blocks": 8192, 00:31:21.130 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:21.130 "md_size": 32, 00:31:21.130 "md_interleave": false, 00:31:21.130 "dif_type": 0, 00:31:21.130 "assigned_rate_limits": { 00:31:21.130 "rw_ios_per_sec": 0, 00:31:21.130 "rw_mbytes_per_sec": 0, 00:31:21.130 "r_mbytes_per_sec": 0, 00:31:21.130 "w_mbytes_per_sec": 0 00:31:21.130 }, 00:31:21.130 "claimed": true, 00:31:21.130 "claim_type": "exclusive_write", 00:31:21.130 "zoned": false, 00:31:21.130 "supported_io_types": { 00:31:21.130 "read": true, 00:31:21.130 "write": true, 00:31:21.130 "unmap": true, 00:31:21.130 "flush": true, 00:31:21.130 "reset": true, 00:31:21.130 "nvme_admin": false, 00:31:21.130 "nvme_io": false, 00:31:21.130 "nvme_io_md": false, 00:31:21.130 "write_zeroes": true, 00:31:21.130 "zcopy": true, 00:31:21.130 "get_zone_info": false, 00:31:21.130 "zone_management": false, 00:31:21.130 "zone_append": false, 00:31:21.130 "compare": false, 00:31:21.130 "compare_and_write": false, 00:31:21.130 "abort": true, 00:31:21.130 "seek_hole": false, 00:31:21.130 "seek_data": false, 00:31:21.130 "copy": true, 00:31:21.130 "nvme_iov_md": false 00:31:21.130 }, 00:31:21.130 "memory_domains": [ 00:31:21.130 { 00:31:21.130 "dma_device_id": "system", 00:31:21.130 "dma_device_type": 1 00:31:21.130 }, 00:31:21.130 { 00:31:21.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:21.130 "dma_device_type": 2 00:31:21.130 } 00:31:21.130 ], 00:31:21.130 "driver_specific": { 00:31:21.130 "passthru": { 00:31:21.130 "name": "pt2", 00:31:21.130 "base_bdev_name": "malloc2" 00:31:21.130 } 00:31:21.130 } 00:31:21.130 }' 00:31:21.130 02:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:21.388 02:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:21.388 02:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:21.388 02:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:21.388 02:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:21.388 02:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:31:21.388 02:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:21.388 02:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:21.388 02:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:31:21.388 02:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:21.646 02:37:11 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:21.646 02:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:31:21.646 02:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:21.646 02:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:31:21.904 [2024-07-11 02:37:12.106344] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:21.904 02:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=aad22d5e-446d-4aa1-acfc-702783419334 00:31:21.904 02:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z aad22d5e-446d-4aa1-acfc-702783419334 ']' 00:31:21.904 02:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:22.161 [2024-07-11 02:37:12.354724] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:22.161 [2024-07-11 02:37:12.354751] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:22.161 [2024-07-11 02:37:12.354823] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:22.161 [2024-07-11 02:37:12.354878] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:22.161 [2024-07-11 02:37:12.354890] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e57700 name raid_bdev1, state offline 00:31:22.161 02:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:22.161 02:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:31:22.420 02:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:31:22.420 02:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:31:22.420 02:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:31:22.420 02:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:31:22.678 02:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:31:22.678 02:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:22.678 02:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:31:22.678 02:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:31:22.937 02:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:31:22.937 02:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:31:22.937 02:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:31:22.937 02:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:31:22.937 02:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:22.937 02:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:22.937 02:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:22.937 02:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:22.937 02:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:22.937 02:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:22.937 02:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:22.937 02:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:31:22.937 02:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:31:23.196 [2024-07-11 02:37:13.557854] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:31:23.196 [2024-07-11 02:37:13.559166] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:31:23.196 [2024-07-11 02:37:13.559222] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:31:23.196 [2024-07-11 02:37:13.559263] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:31:23.196 [2024-07-11 02:37:13.559282] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:23.196 [2024-07-11 02:37:13.559292] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d20630 name raid_bdev1, state configuring 00:31:23.196 request: 00:31:23.196 { 00:31:23.196 "name": "raid_bdev1", 00:31:23.196 "raid_level": "raid1", 00:31:23.196 "base_bdevs": [ 00:31:23.196 "malloc1", 00:31:23.196 "malloc2" 00:31:23.196 ], 00:31:23.196 "superblock": false, 00:31:23.196 "method": "bdev_raid_create", 00:31:23.196 "req_id": 1 00:31:23.196 } 00:31:23.196 Got JSON-RPC error response 00:31:23.196 response: 00:31:23.196 { 00:31:23.196 "code": -17, 00:31:23.196 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:31:23.196 } 00:31:23.196 02:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:31:23.196 02:37:13 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:23.196 02:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:23.196 02:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:23.196 02:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:23.196 02:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:31:23.454 02:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:31:23.454 02:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:31:23.454 02:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:23.713 [2024-07-11 02:37:14.051089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:23.713 [2024-07-11 02:37:14.051139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:23.713 [2024-07-11 02:37:14.051157] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e78860 00:31:23.713 [2024-07-11 02:37:14.051169] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:23.713 [2024-07-11 02:37:14.052631] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:23.713 [2024-07-11 02:37:14.052660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:23.713 [2024-07-11 02:37:14.052708] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:23.713 [2024-07-11 02:37:14.052735] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:23.713 pt1 00:31:23.713 02:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:31:23.713 02:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:23.713 02:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:23.713 02:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:23.713 02:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:23.713 02:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:23.713 02:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:23.713 02:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:23.713 02:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:23.713 02:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:23.713 02:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
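verify_raid_bdev_state (bdev_raid.sh@116-@128, traced repeatedly in this log) boils down to one RPC plus jq assertions: fetch all raid bdevs, select the one under test, and compare its fields. At this point only pt1 has been re-registered, so the expected picture is a "configuring" array with one of its two base bdevs discovered. A rough sketch of that check, using the same field names as the JSON dumps in this log:

    # Fetch the raid bdev under test and assert the state the trace expects here:
    # configuring, raid1, 1 of 2 base bdevs discovered.
    info=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
           | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r .state                      <<<"$info") == configuring ]]
    [[ $(jq -r .raid_level                 <<<"$info") == raid1       ]]
    [[ $(jq    .num_base_bdevs_discovered  <<<"$info") == 1           ]]
    [[ $(jq    .num_base_bdevs_operational <<<"$info") == 2           ]]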
00:31:23.713 02:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:23.973 02:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:23.973 "name": "raid_bdev1", 00:31:23.973 "uuid": "aad22d5e-446d-4aa1-acfc-702783419334", 00:31:23.973 "strip_size_kb": 0, 00:31:23.973 "state": "configuring", 00:31:23.973 "raid_level": "raid1", 00:31:23.973 "superblock": true, 00:31:23.973 "num_base_bdevs": 2, 00:31:23.973 "num_base_bdevs_discovered": 1, 00:31:23.973 "num_base_bdevs_operational": 2, 00:31:23.973 "base_bdevs_list": [ 00:31:23.973 { 00:31:23.973 "name": "pt1", 00:31:23.973 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:23.973 "is_configured": true, 00:31:23.973 "data_offset": 256, 00:31:23.973 "data_size": 7936 00:31:23.973 }, 00:31:23.973 { 00:31:23.973 "name": null, 00:31:23.973 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:23.973 "is_configured": false, 00:31:23.973 "data_offset": 256, 00:31:23.973 "data_size": 7936 00:31:23.973 } 00:31:23.973 ] 00:31:23.973 }' 00:31:23.973 02:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:23.973 02:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:24.541 02:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:31:24.541 02:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:31:24.541 02:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:31:24.541 02:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:24.800 [2024-07-11 02:37:15.182112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:24.800 [2024-07-11 02:37:15.182167] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:24.800 [2024-07-11 02:37:15.182187] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e59370 00:31:24.800 [2024-07-11 02:37:15.182199] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:24.800 [2024-07-11 02:37:15.182394] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:24.800 [2024-07-11 02:37:15.182411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:24.800 [2024-07-11 02:37:15.182459] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:24.800 [2024-07-11 02:37:15.182478] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:24.800 [2024-07-11 02:37:15.182570] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1e59a90 00:31:24.800 [2024-07-11 02:37:15.182580] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:24.800 [2024-07-11 02:37:15.182638] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e5b4f0 00:31:24.800 [2024-07-11 02:37:15.182737] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1e59a90 00:31:24.800 [2024-07-11 02:37:15.182747] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1e59a90 
00:31:24.800 [2024-07-11 02:37:15.182831] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:24.800 pt2 00:31:24.800 02:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:31:24.800 02:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:31:24.800 02:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:24.800 02:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:24.800 02:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:24.800 02:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:24.800 02:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:24.800 02:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:24.800 02:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:24.800 02:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:24.800 02:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:24.800 02:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:24.800 02:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:24.800 02:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:25.059 02:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:25.059 "name": "raid_bdev1", 00:31:25.059 "uuid": "aad22d5e-446d-4aa1-acfc-702783419334", 00:31:25.059 "strip_size_kb": 0, 00:31:25.059 "state": "online", 00:31:25.059 "raid_level": "raid1", 00:31:25.059 "superblock": true, 00:31:25.059 "num_base_bdevs": 2, 00:31:25.059 "num_base_bdevs_discovered": 2, 00:31:25.059 "num_base_bdevs_operational": 2, 00:31:25.059 "base_bdevs_list": [ 00:31:25.059 { 00:31:25.059 "name": "pt1", 00:31:25.059 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:25.059 "is_configured": true, 00:31:25.059 "data_offset": 256, 00:31:25.059 "data_size": 7936 00:31:25.059 }, 00:31:25.059 { 00:31:25.059 "name": "pt2", 00:31:25.059 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:25.059 "is_configured": true, 00:31:25.059 "data_offset": 256, 00:31:25.059 "data_size": 7936 00:31:25.059 } 00:31:25.059 ] 00:31:25.059 }' 00:31:25.059 02:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:25.059 02:37:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:25.997 02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:31:25.997 02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:31:25.997 02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:25.997 02:37:16 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:25.997 02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:25.997 02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:31:25.997 02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:25.997 02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:25.997 [2024-07-11 02:37:16.301350] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:25.997 02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:25.997 "name": "raid_bdev1", 00:31:25.997 "aliases": [ 00:31:25.997 "aad22d5e-446d-4aa1-acfc-702783419334" 00:31:25.997 ], 00:31:25.997 "product_name": "Raid Volume", 00:31:25.997 "block_size": 4096, 00:31:25.997 "num_blocks": 7936, 00:31:25.997 "uuid": "aad22d5e-446d-4aa1-acfc-702783419334", 00:31:25.997 "md_size": 32, 00:31:25.997 "md_interleave": false, 00:31:25.997 "dif_type": 0, 00:31:25.997 "assigned_rate_limits": { 00:31:25.997 "rw_ios_per_sec": 0, 00:31:25.997 "rw_mbytes_per_sec": 0, 00:31:25.997 "r_mbytes_per_sec": 0, 00:31:25.997 "w_mbytes_per_sec": 0 00:31:25.997 }, 00:31:25.997 "claimed": false, 00:31:25.997 "zoned": false, 00:31:25.997 "supported_io_types": { 00:31:25.997 "read": true, 00:31:25.997 "write": true, 00:31:25.997 "unmap": false, 00:31:25.997 "flush": false, 00:31:25.997 "reset": true, 00:31:25.997 "nvme_admin": false, 00:31:25.997 "nvme_io": false, 00:31:25.997 "nvme_io_md": false, 00:31:25.997 "write_zeroes": true, 00:31:25.997 "zcopy": false, 00:31:25.997 "get_zone_info": false, 00:31:25.997 "zone_management": false, 00:31:25.997 "zone_append": false, 00:31:25.997 "compare": false, 00:31:25.997 "compare_and_write": false, 00:31:25.997 "abort": false, 00:31:25.997 "seek_hole": false, 00:31:25.997 "seek_data": false, 00:31:25.997 "copy": false, 00:31:25.997 "nvme_iov_md": false 00:31:25.997 }, 00:31:25.997 "memory_domains": [ 00:31:25.997 { 00:31:25.997 "dma_device_id": "system", 00:31:25.997 "dma_device_type": 1 00:31:25.997 }, 00:31:25.997 { 00:31:25.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:25.997 "dma_device_type": 2 00:31:25.997 }, 00:31:25.997 { 00:31:25.997 "dma_device_id": "system", 00:31:25.997 "dma_device_type": 1 00:31:25.997 }, 00:31:25.997 { 00:31:25.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:25.997 "dma_device_type": 2 00:31:25.997 } 00:31:25.997 ], 00:31:25.997 "driver_specific": { 00:31:25.997 "raid": { 00:31:25.997 "uuid": "aad22d5e-446d-4aa1-acfc-702783419334", 00:31:25.997 "strip_size_kb": 0, 00:31:25.997 "state": "online", 00:31:25.997 "raid_level": "raid1", 00:31:25.997 "superblock": true, 00:31:25.997 "num_base_bdevs": 2, 00:31:25.997 "num_base_bdevs_discovered": 2, 00:31:25.997 "num_base_bdevs_operational": 2, 00:31:25.997 "base_bdevs_list": [ 00:31:25.997 { 00:31:25.997 "name": "pt1", 00:31:25.997 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:25.997 "is_configured": true, 00:31:25.997 "data_offset": 256, 00:31:25.997 "data_size": 7936 00:31:25.997 }, 00:31:25.997 { 00:31:25.997 "name": "pt2", 00:31:25.997 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:25.997 "is_configured": true, 00:31:25.997 "data_offset": 256, 00:31:25.997 "data_size": 7936 
00:31:25.997 } 00:31:25.997 ] 00:31:25.997 } 00:31:25.997 } 00:31:25.997 }' 00:31:25.997 02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:25.997 02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:31:25.997 pt2' 00:31:25.997 02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:25.997 02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:31:25.997 02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:26.256 02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:26.256 "name": "pt1", 00:31:26.256 "aliases": [ 00:31:26.256 "00000000-0000-0000-0000-000000000001" 00:31:26.256 ], 00:31:26.256 "product_name": "passthru", 00:31:26.256 "block_size": 4096, 00:31:26.256 "num_blocks": 8192, 00:31:26.256 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:26.256 "md_size": 32, 00:31:26.256 "md_interleave": false, 00:31:26.256 "dif_type": 0, 00:31:26.256 "assigned_rate_limits": { 00:31:26.256 "rw_ios_per_sec": 0, 00:31:26.256 "rw_mbytes_per_sec": 0, 00:31:26.256 "r_mbytes_per_sec": 0, 00:31:26.256 "w_mbytes_per_sec": 0 00:31:26.256 }, 00:31:26.256 "claimed": true, 00:31:26.256 "claim_type": "exclusive_write", 00:31:26.256 "zoned": false, 00:31:26.256 "supported_io_types": { 00:31:26.256 "read": true, 00:31:26.256 "write": true, 00:31:26.256 "unmap": true, 00:31:26.256 "flush": true, 00:31:26.256 "reset": true, 00:31:26.256 "nvme_admin": false, 00:31:26.256 "nvme_io": false, 00:31:26.256 "nvme_io_md": false, 00:31:26.256 "write_zeroes": true, 00:31:26.257 "zcopy": true, 00:31:26.257 "get_zone_info": false, 00:31:26.257 "zone_management": false, 00:31:26.257 "zone_append": false, 00:31:26.257 "compare": false, 00:31:26.257 "compare_and_write": false, 00:31:26.257 "abort": true, 00:31:26.257 "seek_hole": false, 00:31:26.257 "seek_data": false, 00:31:26.257 "copy": true, 00:31:26.257 "nvme_iov_md": false 00:31:26.257 }, 00:31:26.257 "memory_domains": [ 00:31:26.257 { 00:31:26.257 "dma_device_id": "system", 00:31:26.257 "dma_device_type": 1 00:31:26.257 }, 00:31:26.257 { 00:31:26.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:26.257 "dma_device_type": 2 00:31:26.257 } 00:31:26.257 ], 00:31:26.257 "driver_specific": { 00:31:26.257 "passthru": { 00:31:26.257 "name": "pt1", 00:31:26.257 "base_bdev_name": "malloc1" 00:31:26.257 } 00:31:26.257 } 00:31:26.257 }' 00:31:26.257 02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:26.257 02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:26.516 02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:26.516 02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:26.516 02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:26.516 02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:31:26.516 02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:26.516 
02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:26.516 02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:31:26.516 02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:26.516 02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:26.807 02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:31:26.807 02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:26.807 02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:31:26.807 02:37:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:26.807 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:26.807 "name": "pt2", 00:31:26.807 "aliases": [ 00:31:26.807 "00000000-0000-0000-0000-000000000002" 00:31:26.807 ], 00:31:26.807 "product_name": "passthru", 00:31:26.807 "block_size": 4096, 00:31:26.807 "num_blocks": 8192, 00:31:26.807 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:26.807 "md_size": 32, 00:31:26.807 "md_interleave": false, 00:31:26.807 "dif_type": 0, 00:31:26.807 "assigned_rate_limits": { 00:31:26.807 "rw_ios_per_sec": 0, 00:31:26.807 "rw_mbytes_per_sec": 0, 00:31:26.807 "r_mbytes_per_sec": 0, 00:31:26.807 "w_mbytes_per_sec": 0 00:31:26.807 }, 00:31:26.807 "claimed": true, 00:31:26.807 "claim_type": "exclusive_write", 00:31:26.807 "zoned": false, 00:31:26.807 "supported_io_types": { 00:31:26.807 "read": true, 00:31:26.807 "write": true, 00:31:26.807 "unmap": true, 00:31:26.807 "flush": true, 00:31:26.807 "reset": true, 00:31:26.807 "nvme_admin": false, 00:31:26.807 "nvme_io": false, 00:31:26.807 "nvme_io_md": false, 00:31:26.807 "write_zeroes": true, 00:31:26.807 "zcopy": true, 00:31:26.807 "get_zone_info": false, 00:31:26.807 "zone_management": false, 00:31:26.807 "zone_append": false, 00:31:26.807 "compare": false, 00:31:26.807 "compare_and_write": false, 00:31:26.807 "abort": true, 00:31:26.807 "seek_hole": false, 00:31:26.807 "seek_data": false, 00:31:26.807 "copy": true, 00:31:26.807 "nvme_iov_md": false 00:31:26.807 }, 00:31:26.807 "memory_domains": [ 00:31:26.807 { 00:31:26.807 "dma_device_id": "system", 00:31:26.807 "dma_device_type": 1 00:31:26.807 }, 00:31:26.807 { 00:31:26.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:26.807 "dma_device_type": 2 00:31:26.807 } 00:31:26.807 ], 00:31:26.807 "driver_specific": { 00:31:26.807 "passthru": { 00:31:26.807 "name": "pt2", 00:31:26.807 "base_bdev_name": "malloc2" 00:31:26.807 } 00:31:26.807 } 00:31:26.807 }' 00:31:26.807 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:26.807 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:27.066 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:27.066 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:27.066 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:27.066 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- 
# [[ 32 == 32 ]] 00:31:27.066 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:27.066 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:27.066 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:31:27.066 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:27.066 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:27.066 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:31:27.326 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:27.326 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:31:27.326 [2024-07-11 02:37:17.717081] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:27.326 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' aad22d5e-446d-4aa1-acfc-702783419334 '!=' aad22d5e-446d-4aa1-acfc-702783419334 ']' 00:31:27.326 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:31:27.326 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:31:27.326 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:31:27.326 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:31:27.585 [2024-07-11 02:37:17.969516] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:31:27.585 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:27.585 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:27.585 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:27.585 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:27.585 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:27.585 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:27.585 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:27.585 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:27.585 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:27.585 02:37:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:27.585 02:37:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:27.585 02:37:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:27.844 02:37:18 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:27.844 "name": "raid_bdev1", 00:31:27.844 "uuid": "aad22d5e-446d-4aa1-acfc-702783419334", 00:31:27.844 "strip_size_kb": 0, 00:31:27.844 "state": "online", 00:31:27.844 "raid_level": "raid1", 00:31:27.844 "superblock": true, 00:31:27.844 "num_base_bdevs": 2, 00:31:27.844 "num_base_bdevs_discovered": 1, 00:31:27.844 "num_base_bdevs_operational": 1, 00:31:27.844 "base_bdevs_list": [ 00:31:27.844 { 00:31:27.844 "name": null, 00:31:27.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:27.845 "is_configured": false, 00:31:27.845 "data_offset": 256, 00:31:27.845 "data_size": 7936 00:31:27.845 }, 00:31:27.845 { 00:31:27.845 "name": "pt2", 00:31:27.845 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:27.845 "is_configured": true, 00:31:27.845 "data_offset": 256, 00:31:27.845 "data_size": 7936 00:31:27.845 } 00:31:27.845 ] 00:31:27.845 }' 00:31:27.845 02:37:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:27.845 02:37:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:28.782 02:37:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:29.042 [2024-07-11 02:37:19.337113] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:29.042 [2024-07-11 02:37:19.337142] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:29.042 [2024-07-11 02:37:19.337198] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:29.042 [2024-07-11 02:37:19.337244] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:29.042 [2024-07-11 02:37:19.337256] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e59a90 name raid_bdev1, state offline 00:31:29.042 02:37:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:29.042 02:37:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:31:29.301 02:37:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:31:29.301 02:37:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:31:29.301 02:37:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:31:29.301 02:37:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:31:29.301 02:37:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:29.560 02:37:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:31:29.560 02:37:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:31:29.560 02:37:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:31:29.560 02:37:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:31:29.560 02:37:19 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@518 -- # i=1 00:31:29.561 02:37:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:29.820 [2024-07-11 02:37:20.091088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:29.820 [2024-07-11 02:37:20.091136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:29.820 [2024-07-11 02:37:20.091154] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e58bc0 00:31:29.820 [2024-07-11 02:37:20.091167] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:29.820 [2024-07-11 02:37:20.092592] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:29.820 [2024-07-11 02:37:20.092619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:29.820 [2024-07-11 02:37:20.092667] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:29.820 [2024-07-11 02:37:20.092699] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:29.820 [2024-07-11 02:37:20.092785] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1e5a3a0 00:31:29.820 [2024-07-11 02:37:20.092795] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:29.820 [2024-07-11 02:37:20.092856] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e5a9f0 00:31:29.820 [2024-07-11 02:37:20.092954] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1e5a3a0 00:31:29.820 [2024-07-11 02:37:20.092964] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1e5a3a0 00:31:29.820 [2024-07-11 02:37:20.093031] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:29.820 pt2 00:31:29.820 02:37:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:29.820 02:37:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:29.820 02:37:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:29.820 02:37:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:29.820 02:37:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:29.820 02:37:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:29.820 02:37:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:29.820 02:37:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:29.820 02:37:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:29.820 02:37:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:29.820 02:37:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:29.820 02:37:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:30.388 02:37:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:30.388 "name": "raid_bdev1", 00:31:30.388 "uuid": "aad22d5e-446d-4aa1-acfc-702783419334", 00:31:30.388 "strip_size_kb": 0, 00:31:30.388 "state": "online", 00:31:30.388 "raid_level": "raid1", 00:31:30.388 "superblock": true, 00:31:30.388 "num_base_bdevs": 2, 00:31:30.388 "num_base_bdevs_discovered": 1, 00:31:30.388 "num_base_bdevs_operational": 1, 00:31:30.388 "base_bdevs_list": [ 00:31:30.388 { 00:31:30.388 "name": null, 00:31:30.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:30.388 "is_configured": false, 00:31:30.388 "data_offset": 256, 00:31:30.388 "data_size": 7936 00:31:30.388 }, 00:31:30.388 { 00:31:30.388 "name": "pt2", 00:31:30.388 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:30.388 "is_configured": true, 00:31:30.388 "data_offset": 256, 00:31:30.388 "data_size": 7936 00:31:30.388 } 00:31:30.388 ] 00:31:30.388 }' 00:31:30.388 02:37:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:30.388 02:37:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:30.955 02:37:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:31.522 [2024-07-11 02:37:21.715415] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:31.522 [2024-07-11 02:37:21.715446] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:31.522 [2024-07-11 02:37:21.715510] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:31.522 [2024-07-11 02:37:21.715557] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:31.522 [2024-07-11 02:37:21.715569] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e5a3a0 name raid_bdev1, state offline 00:31:31.522 02:37:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:31.522 02:37:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:31:31.780 02:37:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:31:31.780 02:37:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:31:31.780 02:37:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:31:31.780 02:37:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:32.349 [2024-07-11 02:37:22.485417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:32.349 [2024-07-11 02:37:22.485475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:32.349 [2024-07-11 02:37:22.485495] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e78a90 00:31:32.349 [2024-07-11 02:37:22.485508] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:32.349 [2024-07-11 02:37:22.486986] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:32.349 [2024-07-11 02:37:22.487015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:32.349 [2024-07-11 02:37:22.487068] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:32.349 [2024-07-11 02:37:22.487096] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:32.349 [2024-07-11 02:37:22.487191] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:31:32.349 [2024-07-11 02:37:22.487204] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:32.349 [2024-07-11 02:37:22.487221] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e5bc40 name raid_bdev1, state configuring 00:31:32.349 [2024-07-11 02:37:22.487245] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:32.349 [2024-07-11 02:37:22.487302] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d1fc20 00:31:32.349 [2024-07-11 02:37:22.487312] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:32.349 [2024-07-11 02:37:22.487371] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e5b900 00:31:32.349 [2024-07-11 02:37:22.487467] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d1fc20 00:31:32.349 [2024-07-11 02:37:22.487477] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1d1fc20 00:31:32.349 [2024-07-11 02:37:22.487547] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:32.349 pt1 00:31:32.349 02:37:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:31:32.349 02:37:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:32.349 02:37:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:32.349 02:37:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:32.349 02:37:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:32.349 02:37:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:32.349 02:37:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:32.349 02:37:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:32.349 02:37:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:32.349 02:37:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:32.349 02:37:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:32.349 02:37:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:32.349 02:37:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:32.607 02:37:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 
-- # raid_bdev_info='{ 00:31:32.607 "name": "raid_bdev1", 00:31:32.607 "uuid": "aad22d5e-446d-4aa1-acfc-702783419334", 00:31:32.607 "strip_size_kb": 0, 00:31:32.607 "state": "online", 00:31:32.607 "raid_level": "raid1", 00:31:32.607 "superblock": true, 00:31:32.607 "num_base_bdevs": 2, 00:31:32.607 "num_base_bdevs_discovered": 1, 00:31:32.607 "num_base_bdevs_operational": 1, 00:31:32.607 "base_bdevs_list": [ 00:31:32.607 { 00:31:32.607 "name": null, 00:31:32.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:32.607 "is_configured": false, 00:31:32.607 "data_offset": 256, 00:31:32.607 "data_size": 7936 00:31:32.607 }, 00:31:32.607 { 00:31:32.607 "name": "pt2", 00:31:32.607 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:32.607 "is_configured": true, 00:31:32.607 "data_offset": 256, 00:31:32.607 "data_size": 7936 00:31:32.607 } 00:31:32.607 ] 00:31:32.607 }' 00:31:32.607 02:37:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:32.607 02:37:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:33.212 02:37:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:31:33.212 02:37:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:31:33.471 02:37:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:31:33.471 02:37:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:33.471 02:37:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:31:33.471 [2024-07-11 02:37:23.865310] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:33.471 02:37:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' aad22d5e-446d-4aa1-acfc-702783419334 '!=' aad22d5e-446d-4aa1-acfc-702783419334 ']' 00:31:33.471 02:37:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 2050284 00:31:33.471 02:37:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@948 -- # '[' -z 2050284 ']' 00:31:33.471 02:37:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # kill -0 2050284 00:31:33.730 02:37:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # uname 00:31:33.730 02:37:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:33.730 02:37:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2050284 00:31:33.730 02:37:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:33.730 02:37:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:33.730 02:37:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2050284' 00:31:33.730 killing process with pid 2050284 00:31:33.730 02:37:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@967 -- # kill 2050284 00:31:33.730 [2024-07-11 02:37:23.940564] 
bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:33.730 [2024-07-11 02:37:23.940620] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:33.730 [2024-07-11 02:37:23.940662] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:33.730 [2024-07-11 02:37:23.940674] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d1fc20 name raid_bdev1, state offline 00:31:33.730 02:37:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # wait 2050284 00:31:33.730 [2024-07-11 02:37:23.963723] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:33.990 02:37:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:31:33.990 00:31:33.990 real 0m18.201s 00:31:33.990 user 0m33.047s 00:31:33.990 sys 0m3.279s 00:31:33.990 02:37:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:33.990 02:37:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:33.990 ************************************ 00:31:33.990 END TEST raid_superblock_test_md_separate 00:31:33.990 ************************************ 00:31:33.990 02:37:24 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:31:33.990 02:37:24 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' true = true ']' 00:31:33.990 02:37:24 bdev_raid -- bdev/bdev_raid.sh@908 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:31:33.990 02:37:24 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:31:33.990 02:37:24 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:33.990 02:37:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:33.990 ************************************ 00:31:33.990 START TEST raid_rebuild_test_sb_md_separate 00:31:33.990 ************************************ 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local verify=true 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:33.990 02:37:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local strip_size 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local create_arg 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local data_offset 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # raid_pid=2052870 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # waitforlisten 2052870 /var/tmp/spdk-raid.sock 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 2052870 ']' 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:33.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:33.990 02:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:33.990 [2024-07-11 02:37:24.324093] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:31:33.990 [2024-07-11 02:37:24.324164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2052870 ] 00:31:33.990 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:33.990 Zero copy mechanism will not be used. 
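The rebuild test drives I/O through bdevperf (launched at @595 with the options shown above), and the harness blocks in waitforlisten until pid 2052870 has its RPC socket up. A rough equivalent of that wait, under the assumption that polling for the UNIX-domain socket is a sufficient readiness signal — the real waitforlisten in autotest_common.sh is more thorough:

    # Assumption: socket existence alone signals readiness here.
    wait_for_rpc_sock() {
        local pid=$1 sock=$2 i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
            [[ -S "$sock" ]] && return 0             # UNIX-domain socket is up
            sleep 0.1
        done
        return 1                                     # gave up after ~10 s
    }

    wait_for_rpc_sock "$raid_pid" /var/tmp/spdk-raid.sock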
00:31:34.249 [2024-07-11 02:37:24.463716] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:34.249 [2024-07-11 02:37:24.515754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:34.249 [2024-07-11 02:37:24.573197] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:34.249 [2024-07-11 02:37:24.573224] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:35.186 02:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:35.186 02:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:31:35.186 02:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:35.186 02:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:31:35.186 BaseBdev1_malloc 00:31:35.186 02:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:35.446 [2024-07-11 02:37:25.733467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:35.446 [2024-07-11 02:37:25.733514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:35.446 [2024-07-11 02:37:25.733536] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa54330 00:31:35.446 [2024-07-11 02:37:25.733549] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:35.446 [2024-07-11 02:37:25.734884] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:35.446 [2024-07-11 02:37:25.734913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:35.446 BaseBdev1 00:31:35.446 02:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:35.446 02:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:31:35.706 BaseBdev2_malloc 00:31:35.706 02:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:35.965 [2024-07-11 02:37:26.240205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:35.965 [2024-07-11 02:37:26.240251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:35.965 [2024-07-11 02:37:26.240274] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa0efa0 00:31:35.965 [2024-07-11 02:37:26.240286] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:35.965 [2024-07-11 02:37:26.241482] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:35.965 [2024-07-11 02:37:26.241510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:35.965 BaseBdev2 00:31:35.965 02:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@606 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:31:36.225 spare_malloc 00:31:36.225 02:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:36.484 spare_delay 00:31:36.484 02:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:36.743 [2024-07-11 02:37:26.995358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:36.743 [2024-07-11 02:37:26.995404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:36.743 [2024-07-11 02:37:26.995428] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9f0720 00:31:36.743 [2024-07-11 02:37:26.995440] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:36.743 [2024-07-11 02:37:26.996766] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:36.743 [2024-07-11 02:37:26.996794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:36.743 spare 00:31:36.743 02:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:31:37.003 [2024-07-11 02:37:27.244049] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:37.003 [2024-07-11 02:37:27.245271] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:37.003 [2024-07-11 02:37:27.245428] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x9f27a0 00:31:37.003 [2024-07-11 02:37:27.245441] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:37.003 [2024-07-11 02:37:27.245517] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa553e0 00:31:37.003 [2024-07-11 02:37:27.245628] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9f27a0 00:31:37.003 [2024-07-11 02:37:27.245638] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x9f27a0 00:31:37.003 [2024-07-11 02:37:27.245704] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:37.003 02:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:37.003 02:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:37.003 02:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:37.003 02:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:37.003 02:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:37.003 02:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:37.003 02:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:31:37.003 02:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:37.003 02:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:37.003 02:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:37.003 02:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:37.003 02:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:37.261 02:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:37.261 "name": "raid_bdev1", 00:31:37.261 "uuid": "43dc1156-eff4-41e4-a065-c257e4ae081c", 00:31:37.261 "strip_size_kb": 0, 00:31:37.261 "state": "online", 00:31:37.261 "raid_level": "raid1", 00:31:37.261 "superblock": true, 00:31:37.261 "num_base_bdevs": 2, 00:31:37.261 "num_base_bdevs_discovered": 2, 00:31:37.261 "num_base_bdevs_operational": 2, 00:31:37.261 "base_bdevs_list": [ 00:31:37.261 { 00:31:37.261 "name": "BaseBdev1", 00:31:37.261 "uuid": "c5f271a0-04e7-5220-93cb-ee6628e0ce17", 00:31:37.261 "is_configured": true, 00:31:37.261 "data_offset": 256, 00:31:37.261 "data_size": 7936 00:31:37.261 }, 00:31:37.261 { 00:31:37.261 "name": "BaseBdev2", 00:31:37.261 "uuid": "d2c6647b-1330-5209-85c7-07d3d4583f10", 00:31:37.261 "is_configured": true, 00:31:37.261 "data_offset": 256, 00:31:37.261 "data_size": 7936 00:31:37.261 } 00:31:37.261 ] 00:31:37.261 }' 00:31:37.261 02:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:37.261 02:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:37.829 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:31:37.829 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:38.088 [2024-07-11 02:37:28.311106] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:38.088 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:31:38.088 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:38.088 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:38.348 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:31:38.348 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:31:38.348 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:31:38.348 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:31:38.348 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:31:38.348 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:31:38.348 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:31:38.348 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:38.348 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:38.348 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:38.348 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:31:38.348 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:38.348 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:38.348 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:31:38.606 [2024-07-11 02:37:28.816233] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa553e0 00:31:38.606 /dev/nbd0 00:31:38.606 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:38.606 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:38.606 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:31:38.606 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:31:38.606 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:38.606 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:38.606 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:31:38.606 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:31:38.606 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:38.606 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:38.607 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:38.607 1+0 records in 00:31:38.607 1+0 records out 00:31:38.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258372 s, 15.9 MB/s 00:31:38.607 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:38.607 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:31:38.607 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:38.607 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:38.607 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:31:38.607 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:38.607 02:37:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:38.607 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:31:38.607 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:31:38.607 02:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:31:39.545 7936+0 records in 00:31:39.545 7936+0 records out 00:31:39.545 32505856 bytes (33 MB, 31 MiB) copied, 0.756519 s, 43.0 MB/s 00:31:39.545 02:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:31:39.545 02:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:39.545 02:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:39.545 02:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:39.545 02:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:31:39.545 02:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:39.545 02:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:39.545 02:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:39.545 [2024-07-11 02:37:29.914715] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:39.545 02:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:39.545 02:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:39.545 02:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:39.545 02:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:39.545 02:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:39.545 02:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:31:39.545 02:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:31:39.545 02:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:31:39.804 [2024-07-11 02:37:30.175475] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:39.804 02:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:39.804 02:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:39.804 02:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:39.804 02:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:39.804 02:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:39.804 02:37:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:39.804 02:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:39.804 02:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:39.804 02:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:39.804 02:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:39.804 02:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:39.804 02:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:40.063 02:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:40.063 "name": "raid_bdev1", 00:31:40.063 "uuid": "43dc1156-eff4-41e4-a065-c257e4ae081c", 00:31:40.063 "strip_size_kb": 0, 00:31:40.063 "state": "online", 00:31:40.063 "raid_level": "raid1", 00:31:40.063 "superblock": true, 00:31:40.063 "num_base_bdevs": 2, 00:31:40.063 "num_base_bdevs_discovered": 1, 00:31:40.063 "num_base_bdevs_operational": 1, 00:31:40.063 "base_bdevs_list": [ 00:31:40.063 { 00:31:40.063 "name": null, 00:31:40.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:40.063 "is_configured": false, 00:31:40.063 "data_offset": 256, 00:31:40.063 "data_size": 7936 00:31:40.063 }, 00:31:40.063 { 00:31:40.063 "name": "BaseBdev2", 00:31:40.063 "uuid": "d2c6647b-1330-5209-85c7-07d3d4583f10", 00:31:40.063 "is_configured": true, 00:31:40.063 "data_offset": 256, 00:31:40.063 "data_size": 7936 00:31:40.063 } 00:31:40.063 ] 00:31:40.063 }' 00:31:40.063 02:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:40.063 02:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:40.631 02:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:40.891 [2024-07-11 02:37:31.226237] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:40.891 [2024-07-11 02:37:31.228467] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x8b7720 00:31:40.891 [2024-07-11 02:37:31.230725] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:40.891 02:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # sleep 1 00:31:42.268 02:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:42.268 02:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:42.268 02:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:42.268 02:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:42.268 02:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:42.268 02:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:42.268 02:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:42.268 02:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:42.268 "name": "raid_bdev1", 00:31:42.268 "uuid": "43dc1156-eff4-41e4-a065-c257e4ae081c", 00:31:42.268 "strip_size_kb": 0, 00:31:42.268 "state": "online", 00:31:42.268 "raid_level": "raid1", 00:31:42.268 "superblock": true, 00:31:42.268 "num_base_bdevs": 2, 00:31:42.268 "num_base_bdevs_discovered": 2, 00:31:42.268 "num_base_bdevs_operational": 2, 00:31:42.268 "process": { 00:31:42.268 "type": "rebuild", 00:31:42.268 "target": "spare", 00:31:42.268 "progress": { 00:31:42.268 "blocks": 3072, 00:31:42.268 "percent": 38 00:31:42.268 } 00:31:42.268 }, 00:31:42.268 "base_bdevs_list": [ 00:31:42.268 { 00:31:42.268 "name": "spare", 00:31:42.268 "uuid": "27d723c1-e0e8-567a-bc7e-56af88badfe2", 00:31:42.268 "is_configured": true, 00:31:42.268 "data_offset": 256, 00:31:42.268 "data_size": 7936 00:31:42.268 }, 00:31:42.268 { 00:31:42.268 "name": "BaseBdev2", 00:31:42.268 "uuid": "d2c6647b-1330-5209-85c7-07d3d4583f10", 00:31:42.268 "is_configured": true, 00:31:42.268 "data_offset": 256, 00:31:42.268 "data_size": 7936 00:31:42.268 } 00:31:42.268 ] 00:31:42.268 }' 00:31:42.268 02:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:42.268 02:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:42.268 02:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:42.268 02:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:42.268 02:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:42.527 [2024-07-11 02:37:32.827906] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:42.527 [2024-07-11 02:37:32.843123] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:42.527 [2024-07-11 02:37:32.843164] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:42.527 [2024-07-11 02:37:32.843179] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:42.527 [2024-07-11 02:37:32.843187] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:42.527 02:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:42.527 02:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:42.527 02:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:42.527 02:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:42.527 02:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:42.527 02:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 
00:31:42.527 02:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:42.527 02:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:42.527 02:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:42.527 02:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:42.527 02:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:42.527 02:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:43.096 02:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:43.096 "name": "raid_bdev1", 00:31:43.096 "uuid": "43dc1156-eff4-41e4-a065-c257e4ae081c", 00:31:43.096 "strip_size_kb": 0, 00:31:43.096 "state": "online", 00:31:43.096 "raid_level": "raid1", 00:31:43.096 "superblock": true, 00:31:43.096 "num_base_bdevs": 2, 00:31:43.096 "num_base_bdevs_discovered": 1, 00:31:43.096 "num_base_bdevs_operational": 1, 00:31:43.096 "base_bdevs_list": [ 00:31:43.096 { 00:31:43.096 "name": null, 00:31:43.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:43.096 "is_configured": false, 00:31:43.096 "data_offset": 256, 00:31:43.096 "data_size": 7936 00:31:43.096 }, 00:31:43.096 { 00:31:43.096 "name": "BaseBdev2", 00:31:43.096 "uuid": "d2c6647b-1330-5209-85c7-07d3d4583f10", 00:31:43.096 "is_configured": true, 00:31:43.096 "data_offset": 256, 00:31:43.096 "data_size": 7936 00:31:43.096 } 00:31:43.096 ] 00:31:43.096 }' 00:31:43.096 02:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:43.096 02:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:43.681 02:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:43.681 02:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:43.681 02:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:43.681 02:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:43.681 02:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:43.681 02:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:43.681 02:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:43.939 02:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:43.939 "name": "raid_bdev1", 00:31:43.939 "uuid": "43dc1156-eff4-41e4-a065-c257e4ae081c", 00:31:43.939 "strip_size_kb": 0, 00:31:43.939 "state": "online", 00:31:43.939 "raid_level": "raid1", 00:31:43.939 "superblock": true, 00:31:43.939 "num_base_bdevs": 2, 00:31:43.939 "num_base_bdevs_discovered": 1, 00:31:43.939 "num_base_bdevs_operational": 1, 00:31:43.939 "base_bdevs_list": [ 00:31:43.939 { 00:31:43.939 "name": null, 00:31:43.939 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:31:43.939 "is_configured": false, 00:31:43.939 "data_offset": 256, 00:31:43.939 "data_size": 7936 00:31:43.939 }, 00:31:43.939 { 00:31:43.939 "name": "BaseBdev2", 00:31:43.939 "uuid": "d2c6647b-1330-5209-85c7-07d3d4583f10", 00:31:43.939 "is_configured": true, 00:31:43.939 "data_offset": 256, 00:31:43.939 "data_size": 7936 00:31:43.939 } 00:31:43.939 ] 00:31:43.939 }' 00:31:43.939 02:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:43.939 02:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:43.939 02:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:43.939 02:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:43.939 02:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:44.198 [2024-07-11 02:37:34.578908] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:44.198 [2024-07-11 02:37:34.581127] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9f1b90 00:31:44.198 [2024-07-11 02:37:34.582553] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:44.198 02:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:31:45.578 02:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:45.578 02:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:45.578 02:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:45.578 02:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:45.578 02:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:45.578 02:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:45.578 02:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:45.578 02:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:45.578 "name": "raid_bdev1", 00:31:45.578 "uuid": "43dc1156-eff4-41e4-a065-c257e4ae081c", 00:31:45.578 "strip_size_kb": 0, 00:31:45.578 "state": "online", 00:31:45.578 "raid_level": "raid1", 00:31:45.578 "superblock": true, 00:31:45.578 "num_base_bdevs": 2, 00:31:45.578 "num_base_bdevs_discovered": 2, 00:31:45.578 "num_base_bdevs_operational": 2, 00:31:45.578 "process": { 00:31:45.578 "type": "rebuild", 00:31:45.578 "target": "spare", 00:31:45.578 "progress": { 00:31:45.578 "blocks": 3072, 00:31:45.578 "percent": 38 00:31:45.578 } 00:31:45.578 }, 00:31:45.578 "base_bdevs_list": [ 00:31:45.578 { 00:31:45.578 "name": "spare", 00:31:45.578 "uuid": "27d723c1-e0e8-567a-bc7e-56af88badfe2", 00:31:45.578 "is_configured": true, 00:31:45.578 "data_offset": 256, 00:31:45.578 "data_size": 7936 00:31:45.578 }, 00:31:45.578 { 00:31:45.578 "name": 
"BaseBdev2", 00:31:45.578 "uuid": "d2c6647b-1330-5209-85c7-07d3d4583f10", 00:31:45.578 "is_configured": true, 00:31:45.578 "data_offset": 256, 00:31:45.578 "data_size": 7936 00:31:45.578 } 00:31:45.578 ] 00:31:45.578 }' 00:31:45.578 02:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:45.578 02:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:45.578 02:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:45.578 02:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:45.578 02:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:31:45.578 02:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:31:45.578 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:31:45.578 02:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:31:45.578 02:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:31:45.578 02:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:31:45.578 02:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@705 -- # local timeout=1108 00:31:45.578 02:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:45.578 02:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:45.578 02:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:45.578 02:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:45.578 02:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:45.578 02:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:45.578 02:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:45.578 02:37:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:45.838 02:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:45.838 "name": "raid_bdev1", 00:31:45.838 "uuid": "43dc1156-eff4-41e4-a065-c257e4ae081c", 00:31:45.838 "strip_size_kb": 0, 00:31:45.838 "state": "online", 00:31:45.838 "raid_level": "raid1", 00:31:45.838 "superblock": true, 00:31:45.838 "num_base_bdevs": 2, 00:31:45.838 "num_base_bdevs_discovered": 2, 00:31:45.838 "num_base_bdevs_operational": 2, 00:31:45.838 "process": { 00:31:45.838 "type": "rebuild", 00:31:45.838 "target": "spare", 00:31:45.838 "progress": { 00:31:45.838 "blocks": 3840, 00:31:45.838 "percent": 48 00:31:45.838 } 00:31:45.838 }, 00:31:45.838 "base_bdevs_list": [ 00:31:45.838 { 00:31:45.838 "name": "spare", 00:31:45.838 "uuid": "27d723c1-e0e8-567a-bc7e-56af88badfe2", 00:31:45.838 "is_configured": true, 00:31:45.838 "data_offset": 256, 00:31:45.838 "data_size": 7936 
00:31:45.838 }, 00:31:45.838 { 00:31:45.838 "name": "BaseBdev2", 00:31:45.838 "uuid": "d2c6647b-1330-5209-85c7-07d3d4583f10", 00:31:45.838 "is_configured": true, 00:31:45.838 "data_offset": 256, 00:31:45.838 "data_size": 7936 00:31:45.838 } 00:31:45.838 ] 00:31:45.838 }' 00:31:45.838 02:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:45.838 02:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:46.097 02:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:46.097 02:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:46.097 02:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:47.035 02:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:47.035 02:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:47.035 02:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:47.035 02:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:47.035 02:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:47.035 02:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:47.035 02:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:47.035 02:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:47.294 02:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:47.294 "name": "raid_bdev1", 00:31:47.294 "uuid": "43dc1156-eff4-41e4-a065-c257e4ae081c", 00:31:47.294 "strip_size_kb": 0, 00:31:47.294 "state": "online", 00:31:47.294 "raid_level": "raid1", 00:31:47.294 "superblock": true, 00:31:47.294 "num_base_bdevs": 2, 00:31:47.294 "num_base_bdevs_discovered": 2, 00:31:47.294 "num_base_bdevs_operational": 2, 00:31:47.294 "process": { 00:31:47.294 "type": "rebuild", 00:31:47.294 "target": "spare", 00:31:47.294 "progress": { 00:31:47.294 "blocks": 7424, 00:31:47.294 "percent": 93 00:31:47.294 } 00:31:47.294 }, 00:31:47.294 "base_bdevs_list": [ 00:31:47.294 { 00:31:47.294 "name": "spare", 00:31:47.294 "uuid": "27d723c1-e0e8-567a-bc7e-56af88badfe2", 00:31:47.294 "is_configured": true, 00:31:47.294 "data_offset": 256, 00:31:47.294 "data_size": 7936 00:31:47.294 }, 00:31:47.294 { 00:31:47.294 "name": "BaseBdev2", 00:31:47.294 "uuid": "d2c6647b-1330-5209-85c7-07d3d4583f10", 00:31:47.294 "is_configured": true, 00:31:47.294 "data_offset": 256, 00:31:47.294 "data_size": 7936 00:31:47.294 } 00:31:47.294 ] 00:31:47.294 }' 00:31:47.294 02:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:47.294 02:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:47.294 02:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:47.294 
02:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:47.294 02:37:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:47.294 [2024-07-11 02:37:37.706611] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:47.294 [2024-07-11 02:37:37.706669] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:47.294 [2024-07-11 02:37:37.706751] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:48.230 02:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:48.230 02:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:48.230 02:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:48.230 02:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:48.230 02:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:48.230 02:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:48.230 02:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:48.230 02:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:48.489 02:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:48.489 "name": "raid_bdev1", 00:31:48.489 "uuid": "43dc1156-eff4-41e4-a065-c257e4ae081c", 00:31:48.489 "strip_size_kb": 0, 00:31:48.489 "state": "online", 00:31:48.489 "raid_level": "raid1", 00:31:48.489 "superblock": true, 00:31:48.489 "num_base_bdevs": 2, 00:31:48.489 "num_base_bdevs_discovered": 2, 00:31:48.489 "num_base_bdevs_operational": 2, 00:31:48.489 "base_bdevs_list": [ 00:31:48.489 { 00:31:48.489 "name": "spare", 00:31:48.489 "uuid": "27d723c1-e0e8-567a-bc7e-56af88badfe2", 00:31:48.489 "is_configured": true, 00:31:48.489 "data_offset": 256, 00:31:48.489 "data_size": 7936 00:31:48.489 }, 00:31:48.489 { 00:31:48.489 "name": "BaseBdev2", 00:31:48.489 "uuid": "d2c6647b-1330-5209-85c7-07d3d4583f10", 00:31:48.489 "is_configured": true, 00:31:48.489 "data_offset": 256, 00:31:48.489 "data_size": 7936 00:31:48.489 } 00:31:48.489 ] 00:31:48.489 }' 00:31:48.489 02:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:48.748 02:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:48.748 02:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:48.748 02:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:31:48.748 02:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # break 00:31:48.748 02:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:48.748 02:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:48.748 02:37:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:48.748 02:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:48.748 02:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:48.748 02:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:48.748 02:37:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:49.006 02:37:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:49.006 "name": "raid_bdev1", 00:31:49.006 "uuid": "43dc1156-eff4-41e4-a065-c257e4ae081c", 00:31:49.006 "strip_size_kb": 0, 00:31:49.006 "state": "online", 00:31:49.006 "raid_level": "raid1", 00:31:49.007 "superblock": true, 00:31:49.007 "num_base_bdevs": 2, 00:31:49.007 "num_base_bdevs_discovered": 2, 00:31:49.007 "num_base_bdevs_operational": 2, 00:31:49.007 "base_bdevs_list": [ 00:31:49.007 { 00:31:49.007 "name": "spare", 00:31:49.007 "uuid": "27d723c1-e0e8-567a-bc7e-56af88badfe2", 00:31:49.007 "is_configured": true, 00:31:49.007 "data_offset": 256, 00:31:49.007 "data_size": 7936 00:31:49.007 }, 00:31:49.007 { 00:31:49.007 "name": "BaseBdev2", 00:31:49.007 "uuid": "d2c6647b-1330-5209-85c7-07d3d4583f10", 00:31:49.007 "is_configured": true, 00:31:49.007 "data_offset": 256, 00:31:49.007 "data_size": 7936 00:31:49.007 } 00:31:49.007 ] 00:31:49.007 }' 00:31:49.007 02:37:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:49.007 02:37:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:49.007 02:37:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:49.007 02:37:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:49.007 02:37:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:49.007 02:37:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:49.007 02:37:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:49.007 02:37:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:49.007 02:37:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:49.007 02:37:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:49.007 02:37:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:49.007 02:37:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:49.007 02:37:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:49.007 02:37:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:49.007 02:37:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:31:49.007 02:37:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:49.265 02:37:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:49.265 "name": "raid_bdev1", 00:31:49.265 "uuid": "43dc1156-eff4-41e4-a065-c257e4ae081c", 00:31:49.265 "strip_size_kb": 0, 00:31:49.265 "state": "online", 00:31:49.265 "raid_level": "raid1", 00:31:49.265 "superblock": true, 00:31:49.265 "num_base_bdevs": 2, 00:31:49.265 "num_base_bdevs_discovered": 2, 00:31:49.266 "num_base_bdevs_operational": 2, 00:31:49.266 "base_bdevs_list": [ 00:31:49.266 { 00:31:49.266 "name": "spare", 00:31:49.266 "uuid": "27d723c1-e0e8-567a-bc7e-56af88badfe2", 00:31:49.266 "is_configured": true, 00:31:49.266 "data_offset": 256, 00:31:49.266 "data_size": 7936 00:31:49.266 }, 00:31:49.266 { 00:31:49.266 "name": "BaseBdev2", 00:31:49.266 "uuid": "d2c6647b-1330-5209-85c7-07d3d4583f10", 00:31:49.266 "is_configured": true, 00:31:49.266 "data_offset": 256, 00:31:49.266 "data_size": 7936 00:31:49.266 } 00:31:49.266 ] 00:31:49.266 }' 00:31:49.266 02:37:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:49.266 02:37:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:49.833 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:50.092 [2024-07-11 02:37:40.393745] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:50.092 [2024-07-11 02:37:40.393780] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:50.092 [2024-07-11 02:37:40.393840] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:50.092 [2024-07-11 02:37:40.393898] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:50.092 [2024-07-11 02:37:40.393910] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9f27a0 name raid_bdev1, state offline 00:31:50.092 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:50.092 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # jq length 00:31:50.351 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:31:50.351 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:31:50.351 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:31:50.351 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:31:50.351 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:50.351 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:31:50.351 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:50.351 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:50.351 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:50.351 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:31:50.351 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:50.351 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:50.351 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:31:50.610 /dev/nbd0 00:31:50.610 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:50.610 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:50.610 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:31:50.610 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:31:50.611 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:50.611 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:50.611 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:31:50.611 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:31:50.611 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:50.611 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:50.611 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:50.611 1+0 records in 00:31:50.611 1+0 records out 00:31:50.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213273 s, 19.2 MB/s 00:31:50.611 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:50.611 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:31:50.611 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:50.611 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:50.611 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:31:50.611 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:50.611 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:50.611 02:37:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:31:50.870 /dev/nbd1 00:31:50.870 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:50.870 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:50.870 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:31:50.870 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:31:50.870 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:50.870 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:50.870 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:31:50.870 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:31:50.870 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:50.870 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:50.870 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:50.870 1+0 records in 00:31:50.870 1+0 records out 00:31:50.870 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309976 s, 13.2 MB/s 00:31:50.870 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:50.870 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:31:50.870 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:50.870 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:50.870 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:31:50.870 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:50.870 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:50.870 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:31:51.130 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:31:51.130 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:51.130 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:51.130 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:51.130 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:31:51.130 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:51.130 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:51.389 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:51.389 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- 
# waitfornbd_exit nbd0 00:31:51.389 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:51.389 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:51.389 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:51.389 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:51.389 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:31:51.389 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:31:51.389 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:51.389 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:51.647 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:51.647 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:51.647 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:51.647 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:51.647 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:51.647 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:51.647 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:31:51.647 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:31:51.647 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:31:51.647 02:37:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:51.915 02:37:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:52.173 [2024-07-11 02:37:42.345840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:52.173 [2024-07-11 02:37:42.345887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:52.173 [2024-07-11 02:37:42.345909] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9f4eb0 00:31:52.173 [2024-07-11 02:37:42.345921] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:52.173 [2024-07-11 02:37:42.347360] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:52.173 [2024-07-11 02:37:42.347390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:52.173 [2024-07-11 02:37:42.347448] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:52.173 [2024-07-11 02:37:42.347473] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:52.173 [2024-07-11 02:37:42.347564] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:31:52.173 spare 00:31:52.173 02:37:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:52.173 02:37:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:52.173 02:37:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:52.173 02:37:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:52.173 02:37:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:52.173 02:37:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:52.173 02:37:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:52.173 02:37:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:52.173 02:37:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:52.173 02:37:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:52.173 02:37:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:52.173 02:37:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:52.173 [2024-07-11 02:37:42.447870] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x9f48d0 00:31:52.173 [2024-07-11 02:37:42.447887] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:52.173 [2024-07-11 02:37:42.447964] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x8b8600 00:31:52.173 [2024-07-11 02:37:42.448082] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9f48d0 00:31:52.173 [2024-07-11 02:37:42.448092] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x9f48d0 00:31:52.173 [2024-07-11 02:37:42.448166] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:52.432 02:37:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:52.432 "name": "raid_bdev1", 00:31:52.432 "uuid": "43dc1156-eff4-41e4-a065-c257e4ae081c", 00:31:52.432 "strip_size_kb": 0, 00:31:52.432 "state": "online", 00:31:52.432 "raid_level": "raid1", 00:31:52.432 "superblock": true, 00:31:52.432 "num_base_bdevs": 2, 00:31:52.432 "num_base_bdevs_discovered": 2, 00:31:52.432 "num_base_bdevs_operational": 2, 00:31:52.432 "base_bdevs_list": [ 00:31:52.432 { 00:31:52.432 "name": "spare", 00:31:52.432 "uuid": "27d723c1-e0e8-567a-bc7e-56af88badfe2", 00:31:52.432 "is_configured": true, 00:31:52.432 "data_offset": 256, 00:31:52.432 "data_size": 7936 00:31:52.432 }, 00:31:52.432 { 00:31:52.432 "name": "BaseBdev2", 00:31:52.432 "uuid": "d2c6647b-1330-5209-85c7-07d3d4583f10", 00:31:52.432 "is_configured": true, 00:31:52.432 "data_offset": 256, 00:31:52.432 "data_size": 7936 00:31:52.432 } 00:31:52.432 ] 00:31:52.432 }' 00:31:52.432 02:37:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:52.432 02:37:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:31:52.999 02:37:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:52.999 02:37:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:52.999 02:37:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:52.999 02:37:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:52.999 02:37:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:52.999 02:37:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:52.999 02:37:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:53.258 02:37:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:53.258 "name": "raid_bdev1", 00:31:53.258 "uuid": "43dc1156-eff4-41e4-a065-c257e4ae081c", 00:31:53.258 "strip_size_kb": 0, 00:31:53.258 "state": "online", 00:31:53.258 "raid_level": "raid1", 00:31:53.258 "superblock": true, 00:31:53.258 "num_base_bdevs": 2, 00:31:53.258 "num_base_bdevs_discovered": 2, 00:31:53.258 "num_base_bdevs_operational": 2, 00:31:53.258 "base_bdevs_list": [ 00:31:53.258 { 00:31:53.258 "name": "spare", 00:31:53.258 "uuid": "27d723c1-e0e8-567a-bc7e-56af88badfe2", 00:31:53.258 "is_configured": true, 00:31:53.258 "data_offset": 256, 00:31:53.258 "data_size": 7936 00:31:53.258 }, 00:31:53.258 { 00:31:53.258 "name": "BaseBdev2", 00:31:53.258 "uuid": "d2c6647b-1330-5209-85c7-07d3d4583f10", 00:31:53.258 "is_configured": true, 00:31:53.258 "data_offset": 256, 00:31:53.258 "data_size": 7936 00:31:53.258 } 00:31:53.258 ] 00:31:53.258 }' 00:31:53.258 02:37:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:53.258 02:37:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:53.258 02:37:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:53.258 02:37:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:53.258 02:37:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:53.258 02:37:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:31:53.516 02:37:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:31:53.516 02:37:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:54.082 [2024-07-11 02:37:44.319198] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:54.082 02:37:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:54.082 02:37:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:54.082 02:37:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:54.082 02:37:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:54.082 02:37:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:54.082 02:37:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:54.082 02:37:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:54.082 02:37:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:54.082 02:37:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:54.082 02:37:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:54.082 02:37:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:54.082 02:37:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:54.341 02:37:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:54.341 "name": "raid_bdev1", 00:31:54.341 "uuid": "43dc1156-eff4-41e4-a065-c257e4ae081c", 00:31:54.341 "strip_size_kb": 0, 00:31:54.341 "state": "online", 00:31:54.341 "raid_level": "raid1", 00:31:54.341 "superblock": true, 00:31:54.341 "num_base_bdevs": 2, 00:31:54.341 "num_base_bdevs_discovered": 1, 00:31:54.341 "num_base_bdevs_operational": 1, 00:31:54.341 "base_bdevs_list": [ 00:31:54.341 { 00:31:54.341 "name": null, 00:31:54.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:54.341 "is_configured": false, 00:31:54.341 "data_offset": 256, 00:31:54.341 "data_size": 7936 00:31:54.341 }, 00:31:54.341 { 00:31:54.341 "name": "BaseBdev2", 00:31:54.341 "uuid": "d2c6647b-1330-5209-85c7-07d3d4583f10", 00:31:54.341 "is_configured": true, 00:31:54.341 "data_offset": 256, 00:31:54.341 "data_size": 7936 00:31:54.341 } 00:31:54.341 ] 00:31:54.341 }' 00:31:54.341 02:37:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:54.341 02:37:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:54.908 02:37:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:55.541 [2024-07-11 02:37:45.674820] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:55.541 [2024-07-11 02:37:45.674972] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:31:55.541 [2024-07-11 02:37:45.674988] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:31:55.541 [2024-07-11 02:37:45.675015] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:55.541 [2024-07-11 02:37:45.677096] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x8b8600 00:31:55.541 [2024-07-11 02:37:45.678490] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:55.541 02:37:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # sleep 1 00:31:56.477 02:37:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:56.477 02:37:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:56.477 02:37:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:56.477 02:37:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:56.477 02:37:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:56.477 02:37:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:56.477 02:37:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:57.043 02:37:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:57.043 "name": "raid_bdev1", 00:31:57.043 "uuid": "43dc1156-eff4-41e4-a065-c257e4ae081c", 00:31:57.043 "strip_size_kb": 0, 00:31:57.043 "state": "online", 00:31:57.043 "raid_level": "raid1", 00:31:57.043 "superblock": true, 00:31:57.043 "num_base_bdevs": 2, 00:31:57.043 "num_base_bdevs_discovered": 2, 00:31:57.043 "num_base_bdevs_operational": 2, 00:31:57.043 "process": { 00:31:57.043 "type": "rebuild", 00:31:57.043 "target": "spare", 00:31:57.043 "progress": { 00:31:57.043 "blocks": 3840, 00:31:57.043 "percent": 48 00:31:57.043 } 00:31:57.043 }, 00:31:57.043 "base_bdevs_list": [ 00:31:57.043 { 00:31:57.043 "name": "spare", 00:31:57.043 "uuid": "27d723c1-e0e8-567a-bc7e-56af88badfe2", 00:31:57.043 "is_configured": true, 00:31:57.043 "data_offset": 256, 00:31:57.043 "data_size": 7936 00:31:57.043 }, 00:31:57.043 { 00:31:57.043 "name": "BaseBdev2", 00:31:57.043 "uuid": "d2c6647b-1330-5209-85c7-07d3d4583f10", 00:31:57.043 "is_configured": true, 00:31:57.043 "data_offset": 256, 00:31:57.043 "data_size": 7936 00:31:57.043 } 00:31:57.043 ] 00:31:57.043 }' 00:31:57.043 02:37:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:57.043 02:37:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:57.043 02:37:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:57.043 02:37:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:57.043 02:37:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:57.300 [2024-07-11 02:37:47.545340] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:57.300 [2024-07-11 02:37:47.593428] bdev_raid.c:2513:raid_bdev_process_finish_done: 
*WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:57.300 [2024-07-11 02:37:47.593489] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:57.300 [2024-07-11 02:37:47.593505] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:57.300 [2024-07-11 02:37:47.593514] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:57.300 02:37:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:57.300 02:37:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:57.300 02:37:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:57.300 02:37:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:57.300 02:37:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:57.300 02:37:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:57.300 02:37:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:57.300 02:37:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:57.300 02:37:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:57.300 02:37:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:57.300 02:37:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:57.300 02:37:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:57.558 02:37:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:57.558 "name": "raid_bdev1", 00:31:57.558 "uuid": "43dc1156-eff4-41e4-a065-c257e4ae081c", 00:31:57.558 "strip_size_kb": 0, 00:31:57.558 "state": "online", 00:31:57.558 "raid_level": "raid1", 00:31:57.558 "superblock": true, 00:31:57.558 "num_base_bdevs": 2, 00:31:57.558 "num_base_bdevs_discovered": 1, 00:31:57.558 "num_base_bdevs_operational": 1, 00:31:57.558 "base_bdevs_list": [ 00:31:57.558 { 00:31:57.558 "name": null, 00:31:57.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:57.558 "is_configured": false, 00:31:57.558 "data_offset": 256, 00:31:57.558 "data_size": 7936 00:31:57.558 }, 00:31:57.558 { 00:31:57.558 "name": "BaseBdev2", 00:31:57.558 "uuid": "d2c6647b-1330-5209-85c7-07d3d4583f10", 00:31:57.558 "is_configured": true, 00:31:57.558 "data_offset": 256, 00:31:57.558 "data_size": 7936 00:31:57.558 } 00:31:57.558 ] 00:31:57.558 }' 00:31:57.558 02:37:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:57.558 02:37:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:58.125 02:37:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:58.384 [2024-07-11 02:37:48.624473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:31:58.384 [2024-07-11 02:37:48.624525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:58.384 [2024-07-11 02:37:48.624547] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9f3e20 00:31:58.384 [2024-07-11 02:37:48.624559] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:58.384 [2024-07-11 02:37:48.624795] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:58.384 [2024-07-11 02:37:48.624812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:58.384 [2024-07-11 02:37:48.624872] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:58.384 [2024-07-11 02:37:48.624884] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:31:58.384 [2024-07-11 02:37:48.624894] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:31:58.384 [2024-07-11 02:37:48.624912] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:58.384 [2024-07-11 02:37:48.627008] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x8b8600 00:31:58.384 [2024-07-11 02:37:48.628319] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:58.384 spare 00:31:58.384 02:37:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # sleep 1 00:31:59.319 02:37:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:59.319 02:37:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:59.319 02:37:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:59.319 02:37:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:59.319 02:37:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:59.319 02:37:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:59.319 02:37:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:59.576 02:37:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:59.576 "name": "raid_bdev1", 00:31:59.576 "uuid": "43dc1156-eff4-41e4-a065-c257e4ae081c", 00:31:59.576 "strip_size_kb": 0, 00:31:59.576 "state": "online", 00:31:59.576 "raid_level": "raid1", 00:31:59.576 "superblock": true, 00:31:59.576 "num_base_bdevs": 2, 00:31:59.576 "num_base_bdevs_discovered": 2, 00:31:59.576 "num_base_bdevs_operational": 2, 00:31:59.576 "process": { 00:31:59.576 "type": "rebuild", 00:31:59.576 "target": "spare", 00:31:59.576 "progress": { 00:31:59.576 "blocks": 3072, 00:31:59.576 "percent": 38 00:31:59.576 } 00:31:59.576 }, 00:31:59.576 "base_bdevs_list": [ 00:31:59.576 { 00:31:59.576 "name": "spare", 00:31:59.576 "uuid": "27d723c1-e0e8-567a-bc7e-56af88badfe2", 00:31:59.576 "is_configured": true, 00:31:59.576 "data_offset": 256, 00:31:59.576 "data_size": 7936 00:31:59.576 }, 00:31:59.576 { 00:31:59.576 "name": "BaseBdev2", 00:31:59.576 "uuid": 
"d2c6647b-1330-5209-85c7-07d3d4583f10", 00:31:59.576 "is_configured": true, 00:31:59.576 "data_offset": 256, 00:31:59.576 "data_size": 7936 00:31:59.576 } 00:31:59.576 ] 00:31:59.576 }' 00:31:59.576 02:37:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:59.576 02:37:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:59.576 02:37:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:59.576 02:37:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:59.576 02:37:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:59.907 [2024-07-11 02:37:50.221682] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:59.907 [2024-07-11 02:37:50.240803] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:59.907 [2024-07-11 02:37:50.240845] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:59.907 [2024-07-11 02:37:50.240861] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:59.907 [2024-07-11 02:37:50.240869] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:59.907 02:37:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:59.907 02:37:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:59.907 02:37:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:59.907 02:37:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:59.907 02:37:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:59.907 02:37:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:59.907 02:37:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:59.907 02:37:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:59.907 02:37:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:59.907 02:37:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:59.907 02:37:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:59.907 02:37:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:00.166 02:37:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:00.166 "name": "raid_bdev1", 00:32:00.166 "uuid": "43dc1156-eff4-41e4-a065-c257e4ae081c", 00:32:00.166 "strip_size_kb": 0, 00:32:00.166 "state": "online", 00:32:00.166 "raid_level": "raid1", 00:32:00.166 "superblock": true, 00:32:00.166 "num_base_bdevs": 2, 00:32:00.166 "num_base_bdevs_discovered": 1, 00:32:00.166 
"num_base_bdevs_operational": 1, 00:32:00.166 "base_bdevs_list": [ 00:32:00.166 { 00:32:00.166 "name": null, 00:32:00.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:00.166 "is_configured": false, 00:32:00.166 "data_offset": 256, 00:32:00.166 "data_size": 7936 00:32:00.166 }, 00:32:00.166 { 00:32:00.166 "name": "BaseBdev2", 00:32:00.166 "uuid": "d2c6647b-1330-5209-85c7-07d3d4583f10", 00:32:00.166 "is_configured": true, 00:32:00.166 "data_offset": 256, 00:32:00.166 "data_size": 7936 00:32:00.166 } 00:32:00.166 ] 00:32:00.166 }' 00:32:00.166 02:37:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:00.166 02:37:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:00.735 02:37:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:00.735 02:37:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:00.735 02:37:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:00.735 02:37:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:00.735 02:37:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:00.735 02:37:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:00.735 02:37:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:00.994 02:37:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:00.994 "name": "raid_bdev1", 00:32:00.994 "uuid": "43dc1156-eff4-41e4-a065-c257e4ae081c", 00:32:00.994 "strip_size_kb": 0, 00:32:00.994 "state": "online", 00:32:00.994 "raid_level": "raid1", 00:32:00.994 "superblock": true, 00:32:00.994 "num_base_bdevs": 2, 00:32:00.994 "num_base_bdevs_discovered": 1, 00:32:00.994 "num_base_bdevs_operational": 1, 00:32:00.994 "base_bdevs_list": [ 00:32:00.994 { 00:32:00.994 "name": null, 00:32:00.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:00.994 "is_configured": false, 00:32:00.994 "data_offset": 256, 00:32:00.994 "data_size": 7936 00:32:00.994 }, 00:32:00.994 { 00:32:00.994 "name": "BaseBdev2", 00:32:00.994 "uuid": "d2c6647b-1330-5209-85c7-07d3d4583f10", 00:32:00.994 "is_configured": true, 00:32:00.994 "data_offset": 256, 00:32:00.994 "data_size": 7936 00:32:00.994 } 00:32:00.994 ] 00:32:00.994 }' 00:32:00.994 02:37:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:01.253 02:37:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:01.253 02:37:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:01.253 02:37:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:01.254 02:37:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:32:01.513 02:37:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:01.513 [2024-07-11 02:37:51.928427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:01.513 [2024-07-11 02:37:51.928476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:01.513 [2024-07-11 02:37:51.928497] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9f4670 00:32:01.513 [2024-07-11 02:37:51.928509] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:01.513 [2024-07-11 02:37:51.928704] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:01.513 [2024-07-11 02:37:51.928720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:01.513 [2024-07-11 02:37:51.928776] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:32:01.513 [2024-07-11 02:37:51.928789] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:01.513 [2024-07-11 02:37:51.928799] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:01.513 BaseBdev1 00:32:01.772 02:37:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # sleep 1 00:32:02.708 02:37:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:02.708 02:37:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:02.708 02:37:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:02.708 02:37:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:02.708 02:37:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:02.708 02:37:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:02.708 02:37:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:02.708 02:37:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:02.708 02:37:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:02.708 02:37:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:02.708 02:37:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:02.708 02:37:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:02.967 02:37:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:02.967 "name": "raid_bdev1", 00:32:02.967 "uuid": "43dc1156-eff4-41e4-a065-c257e4ae081c", 00:32:02.967 "strip_size_kb": 0, 00:32:02.968 "state": "online", 00:32:02.968 "raid_level": "raid1", 00:32:02.968 "superblock": true, 00:32:02.968 "num_base_bdevs": 2, 00:32:02.968 "num_base_bdevs_discovered": 1, 00:32:02.968 "num_base_bdevs_operational": 1, 00:32:02.968 "base_bdevs_list": [ 00:32:02.968 { 
00:32:02.968 "name": null, 00:32:02.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:02.968 "is_configured": false, 00:32:02.968 "data_offset": 256, 00:32:02.968 "data_size": 7936 00:32:02.968 }, 00:32:02.968 { 00:32:02.968 "name": "BaseBdev2", 00:32:02.968 "uuid": "d2c6647b-1330-5209-85c7-07d3d4583f10", 00:32:02.968 "is_configured": true, 00:32:02.968 "data_offset": 256, 00:32:02.968 "data_size": 7936 00:32:02.968 } 00:32:02.968 ] 00:32:02.968 }' 00:32:02.968 02:37:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:02.968 02:37:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:03.535 02:37:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:03.535 02:37:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:03.535 02:37:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:03.535 02:37:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:03.535 02:37:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:03.535 02:37:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:03.535 02:37:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:03.794 02:37:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:03.794 "name": "raid_bdev1", 00:32:03.794 "uuid": "43dc1156-eff4-41e4-a065-c257e4ae081c", 00:32:03.794 "strip_size_kb": 0, 00:32:03.794 "state": "online", 00:32:03.794 "raid_level": "raid1", 00:32:03.794 "superblock": true, 00:32:03.794 "num_base_bdevs": 2, 00:32:03.794 "num_base_bdevs_discovered": 1, 00:32:03.794 "num_base_bdevs_operational": 1, 00:32:03.794 "base_bdevs_list": [ 00:32:03.794 { 00:32:03.794 "name": null, 00:32:03.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:03.794 "is_configured": false, 00:32:03.794 "data_offset": 256, 00:32:03.794 "data_size": 7936 00:32:03.794 }, 00:32:03.794 { 00:32:03.794 "name": "BaseBdev2", 00:32:03.794 "uuid": "d2c6647b-1330-5209-85c7-07d3d4583f10", 00:32:03.794 "is_configured": true, 00:32:03.794 "data_offset": 256, 00:32:03.794 "data_size": 7936 00:32:03.794 } 00:32:03.794 ] 00:32:03.794 }' 00:32:03.794 02:37:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:03.794 02:37:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:03.794 02:37:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:03.794 02:37:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:03.794 02:37:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:03.794 02:37:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:32:03.794 02:37:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:03.794 02:37:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:03.794 02:37:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:03.794 02:37:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:03.794 02:37:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:03.794 02:37:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:03.794 02:37:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:03.794 02:37:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:03.794 02:37:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:32:03.794 02:37:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:04.053 [2024-07-11 02:37:54.318843] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:04.053 [2024-07-11 02:37:54.318966] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:04.053 [2024-07-11 02:37:54.318982] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:04.053 request: 00:32:04.053 { 00:32:04.053 "base_bdev": "BaseBdev1", 00:32:04.053 "raid_bdev": "raid_bdev1", 00:32:04.053 "method": "bdev_raid_add_base_bdev", 00:32:04.053 "req_id": 1 00:32:04.053 } 00:32:04.053 Got JSON-RPC error response 00:32:04.053 response: 00:32:04.053 { 00:32:04.053 "code": -22, 00:32:04.053 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:32:04.053 } 00:32:04.053 02:37:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # es=1 00:32:04.053 02:37:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:04.053 02:37:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:04.053 02:37:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:04.053 02:37:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # sleep 1 00:32:04.989 02:37:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:04.989 02:37:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:04.989 02:37:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:04.989 02:37:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:04.989 02:37:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:04.989 02:37:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:04.989 02:37:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:04.989 02:37:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:04.990 02:37:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:04.990 02:37:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:04.990 02:37:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:04.990 02:37:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:05.249 02:37:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:05.249 "name": "raid_bdev1", 00:32:05.249 "uuid": "43dc1156-eff4-41e4-a065-c257e4ae081c", 00:32:05.249 "strip_size_kb": 0, 00:32:05.249 "state": "online", 00:32:05.249 "raid_level": "raid1", 00:32:05.249 "superblock": true, 00:32:05.249 "num_base_bdevs": 2, 00:32:05.249 "num_base_bdevs_discovered": 1, 00:32:05.249 "num_base_bdevs_operational": 1, 00:32:05.249 "base_bdevs_list": [ 00:32:05.249 { 00:32:05.249 "name": null, 00:32:05.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:05.249 "is_configured": false, 00:32:05.249 "data_offset": 256, 00:32:05.249 "data_size": 7936 00:32:05.249 }, 00:32:05.249 { 00:32:05.249 "name": "BaseBdev2", 00:32:05.249 "uuid": "d2c6647b-1330-5209-85c7-07d3d4583f10", 00:32:05.249 "is_configured": true, 00:32:05.249 "data_offset": 256, 00:32:05.249 "data_size": 7936 00:32:05.249 } 00:32:05.249 ] 00:32:05.249 }' 00:32:05.249 02:37:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:05.249 02:37:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:05.817 02:37:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:05.817 02:37:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:05.817 02:37:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:05.817 02:37:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:05.817 02:37:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:05.817 02:37:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:05.817 02:37:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:06.076 02:37:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:06.076 "name": "raid_bdev1", 00:32:06.076 "uuid": "43dc1156-eff4-41e4-a065-c257e4ae081c", 00:32:06.076 "strip_size_kb": 0, 
00:32:06.076 "state": "online", 00:32:06.076 "raid_level": "raid1", 00:32:06.076 "superblock": true, 00:32:06.076 "num_base_bdevs": 2, 00:32:06.076 "num_base_bdevs_discovered": 1, 00:32:06.076 "num_base_bdevs_operational": 1, 00:32:06.076 "base_bdevs_list": [ 00:32:06.076 { 00:32:06.076 "name": null, 00:32:06.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:06.076 "is_configured": false, 00:32:06.076 "data_offset": 256, 00:32:06.076 "data_size": 7936 00:32:06.076 }, 00:32:06.076 { 00:32:06.076 "name": "BaseBdev2", 00:32:06.076 "uuid": "d2c6647b-1330-5209-85c7-07d3d4583f10", 00:32:06.076 "is_configured": true, 00:32:06.076 "data_offset": 256, 00:32:06.076 "data_size": 7936 00:32:06.076 } 00:32:06.076 ] 00:32:06.076 }' 00:32:06.076 02:37:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:06.076 02:37:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:06.076 02:37:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:06.076 02:37:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:06.076 02:37:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # killprocess 2052870 00:32:06.076 02:37:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 2052870 ']' 00:32:06.076 02:37:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 2052870 00:32:06.076 02:37:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:32:06.076 02:37:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:06.076 02:37:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2052870 00:32:06.335 02:37:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:06.335 02:37:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:06.335 02:37:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2052870' 00:32:06.335 killing process with pid 2052870 00:32:06.335 02:37:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 2052870 00:32:06.336 Received shutdown signal, test time was about 60.000000 seconds 00:32:06.336 00:32:06.336 Latency(us) 00:32:06.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:06.336 =================================================================================================================== 00:32:06.336 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:06.336 [2024-07-11 02:37:56.510251] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:06.336 [2024-07-11 02:37:56.510342] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:06.336 [2024-07-11 02:37:56.510383] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:06.336 [2024-07-11 02:37:56.510396] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9f48d0 name raid_bdev1, state offline 00:32:06.336 02:37:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 
2052870 00:32:06.336 [2024-07-11 02:37:56.545454] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:06.336 02:37:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # return 0 00:32:06.336 00:32:06.336 real 0m32.499s 00:32:06.336 user 0m50.735s 00:32:06.336 sys 0m5.531s 00:32:06.336 02:37:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:06.336 02:37:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:06.336 ************************************ 00:32:06.336 END TEST raid_rebuild_test_sb_md_separate 00:32:06.336 ************************************ 00:32:06.595 02:37:56 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:32:06.595 02:37:56 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:32:06.595 02:37:56 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:32:06.595 02:37:56 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:32:06.596 02:37:56 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:06.596 02:37:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:06.596 ************************************ 00:32:06.596 START TEST raid_state_function_test_sb_md_interleaved 00:32:06.596 ************************************ 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@226 -- # local strip_size 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=2057547 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2057547' 00:32:06.596 Process raid pid: 2057547 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 2057547 /var/tmp/spdk-raid.sock 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 2057547 ']' 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:06.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:06.596 02:37:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:06.596 [2024-07-11 02:37:56.905491] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
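The state-function test starting here drives a bare bdev_svc app rather than a full SPDK target: the harness launches it with a private RPC socket, a shm id, and the -L bdev_raid debug log flag, then blocks in waitforlisten until the socket answers. The following is a rough sketch of that startup handshake with the paths taken from this run; the retry loop is a simplified stand-in for the real waitforlisten in autotest_common.sh, which probes the socket in much the same way.

# Sketch: start bdev_svc on a dedicated RPC socket and wait for readiness.
svc=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc
rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

$svc -r $sock -i 0 -L bdev_raid &   # -L enables the bdev_raid debug log component
raid_pid=$!

# Poll until the app's RPC server responds; rpc_get_methods is a cheap
# liveness probe. Bail out if the app dies before the socket comes up.
until $rpc -s $sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 $raid_pid 2>/dev/null || { echo "bdev_svc died during startup" >&2; exit 1; }
    sleep 0.1
done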
00:32:06.596 [2024-07-11 02:37:56.905553] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:06.855 [2024-07-11 02:37:57.040903] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.855 [2024-07-11 02:37:57.090644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.855 [2024-07-11 02:37:57.148967] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:06.855 [2024-07-11 02:37:57.149000] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:07.422 02:37:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:07.422 02:37:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:32:07.422 02:37:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:32:07.682 [2024-07-11 02:37:58.066156] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:07.682 [2024-07-11 02:37:58.066199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:07.682 [2024-07-11 02:37:58.066211] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:07.682 [2024-07-11 02:37:58.066223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:07.682 02:37:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:07.682 02:37:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:07.682 02:37:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:07.682 02:37:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:07.682 02:37:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:07.682 02:37:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:07.682 02:37:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:07.682 02:37:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:07.682 02:37:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:07.682 02:37:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:07.682 02:37:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:07.682 02:37:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:07.941 02:37:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:07.941 "name": "Existed_Raid", 00:32:07.941 "uuid": "86e15eb8-80a9-4658-9eaa-d9d6252fc015", 00:32:07.941 "strip_size_kb": 0, 00:32:07.941 "state": "configuring", 00:32:07.941 "raid_level": "raid1", 00:32:07.941 "superblock": true, 00:32:07.941 "num_base_bdevs": 2, 00:32:07.941 "num_base_bdevs_discovered": 0, 00:32:07.941 "num_base_bdevs_operational": 2, 00:32:07.941 "base_bdevs_list": [ 00:32:07.941 { 00:32:07.941 "name": "BaseBdev1", 00:32:07.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:07.941 "is_configured": false, 00:32:07.941 "data_offset": 0, 00:32:07.941 "data_size": 0 00:32:07.941 }, 00:32:07.941 { 00:32:07.941 "name": "BaseBdev2", 00:32:07.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:07.941 "is_configured": false, 00:32:07.941 "data_offset": 0, 00:32:07.941 "data_size": 0 00:32:07.941 } 00:32:07.941 ] 00:32:07.941 }' 00:32:07.941 02:37:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:07.941 02:37:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:08.510 02:37:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:08.770 [2024-07-11 02:37:59.157036] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:08.770 [2024-07-11 02:37:59.157067] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1694710 name Existed_Raid, state configuring 00:32:08.770 02:37:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:32:09.029 [2024-07-11 02:37:59.337533] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:09.029 [2024-07-11 02:37:59.337562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:09.029 [2024-07-11 02:37:59.337572] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:09.029 [2024-07-11 02:37:59.337584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:09.029 02:37:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:32:09.289 [2024-07-11 02:37:59.527882] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:09.289 BaseBdev1 00:32:09.289 02:37:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:32:09.289 02:37:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:32:09.289 02:37:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:09.289 02:37:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:32:09.289 02:37:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:09.289 02:37:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:09.289 02:37:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:09.548 02:37:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:09.548 [ 00:32:09.548 { 00:32:09.548 "name": "BaseBdev1", 00:32:09.548 "aliases": [ 00:32:09.548 "b40974ac-5784-4369-9d59-adfed8d42104" 00:32:09.548 ], 00:32:09.548 "product_name": "Malloc disk", 00:32:09.548 "block_size": 4128, 00:32:09.548 "num_blocks": 8192, 00:32:09.548 "uuid": "b40974ac-5784-4369-9d59-adfed8d42104", 00:32:09.548 "md_size": 32, 00:32:09.548 "md_interleave": true, 00:32:09.548 "dif_type": 0, 00:32:09.548 "assigned_rate_limits": { 00:32:09.548 "rw_ios_per_sec": 0, 00:32:09.548 "rw_mbytes_per_sec": 0, 00:32:09.548 "r_mbytes_per_sec": 0, 00:32:09.548 "w_mbytes_per_sec": 0 00:32:09.548 }, 00:32:09.548 "claimed": true, 00:32:09.548 "claim_type": "exclusive_write", 00:32:09.548 "zoned": false, 00:32:09.548 "supported_io_types": { 00:32:09.548 "read": true, 00:32:09.548 "write": true, 00:32:09.548 "unmap": true, 00:32:09.548 "flush": true, 00:32:09.548 "reset": true, 00:32:09.548 "nvme_admin": false, 00:32:09.548 "nvme_io": false, 00:32:09.548 "nvme_io_md": false, 00:32:09.548 "write_zeroes": true, 00:32:09.548 "zcopy": true, 00:32:09.548 "get_zone_info": false, 00:32:09.548 "zone_management": false, 00:32:09.548 "zone_append": false, 00:32:09.548 "compare": false, 00:32:09.548 "compare_and_write": false, 00:32:09.548 "abort": true, 00:32:09.548 "seek_hole": false, 00:32:09.548 "seek_data": false, 00:32:09.548 "copy": true, 00:32:09.548 "nvme_iov_md": false 00:32:09.548 }, 00:32:09.548 "memory_domains": [ 00:32:09.548 { 00:32:09.548 "dma_device_id": "system", 00:32:09.548 "dma_device_type": 1 00:32:09.548 }, 00:32:09.548 { 00:32:09.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:09.548 "dma_device_type": 2 00:32:09.548 } 00:32:09.548 ], 00:32:09.548 "driver_specific": {} 00:32:09.548 } 00:32:09.548 ] 00:32:09.548 02:37:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:32:09.548 02:37:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:09.548 02:37:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:09.548 02:37:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:09.548 02:37:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:09.548 02:37:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:09.548 02:37:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:09.548 02:37:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:09.548 02:37:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:09.548 02:37:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:09.548 02:37:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:09.548 02:37:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:09.548 02:37:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:09.808 02:38:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:09.808 "name": "Existed_Raid", 00:32:09.808 "uuid": "0bb8edcb-dd44-40a5-a90c-f47ebdc8ec7e", 00:32:09.808 "strip_size_kb": 0, 00:32:09.808 "state": "configuring", 00:32:09.808 "raid_level": "raid1", 00:32:09.808 "superblock": true, 00:32:09.808 "num_base_bdevs": 2, 00:32:09.808 "num_base_bdevs_discovered": 1, 00:32:09.808 "num_base_bdevs_operational": 2, 00:32:09.808 "base_bdevs_list": [ 00:32:09.808 { 00:32:09.808 "name": "BaseBdev1", 00:32:09.808 "uuid": "b40974ac-5784-4369-9d59-adfed8d42104", 00:32:09.808 "is_configured": true, 00:32:09.808 "data_offset": 256, 00:32:09.808 "data_size": 7936 00:32:09.808 }, 00:32:09.808 { 00:32:09.808 "name": "BaseBdev2", 00:32:09.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:09.808 "is_configured": false, 00:32:09.808 "data_offset": 0, 00:32:09.808 "data_size": 0 00:32:09.808 } 00:32:09.808 ] 00:32:09.808 }' 00:32:09.808 02:38:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:09.808 02:38:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:10.376 02:38:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:10.635 [2024-07-11 02:38:01.003841] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:10.635 [2024-07-11 02:38:01.003881] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1694040 name Existed_Raid, state configuring 00:32:10.635 02:38:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:32:10.894 [2024-07-11 02:38:01.252532] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:10.894 [2024-07-11 02:38:01.253903] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:10.894 [2024-07-11 02:38:01.253934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:10.894 02:38:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:32:10.894 02:38:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:10.894 02:38:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:10.894 02:38:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:10.894 02:38:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:10.894 02:38:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:10.894 02:38:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:10.894 02:38:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:10.894 02:38:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:10.894 02:38:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:10.894 02:38:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:10.894 02:38:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:10.894 02:38:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:10.894 02:38:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:11.153 02:38:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:11.153 "name": "Existed_Raid", 00:32:11.153 "uuid": "32a95a2d-6750-4446-90e5-6ec1f59aa000", 00:32:11.153 "strip_size_kb": 0, 00:32:11.153 "state": "configuring", 00:32:11.153 "raid_level": "raid1", 00:32:11.153 "superblock": true, 00:32:11.153 "num_base_bdevs": 2, 00:32:11.153 "num_base_bdevs_discovered": 1, 00:32:11.153 "num_base_bdevs_operational": 2, 00:32:11.153 "base_bdevs_list": [ 00:32:11.153 { 00:32:11.153 "name": "BaseBdev1", 00:32:11.153 "uuid": "b40974ac-5784-4369-9d59-adfed8d42104", 00:32:11.153 "is_configured": true, 00:32:11.153 "data_offset": 256, 00:32:11.153 "data_size": 7936 00:32:11.153 }, 00:32:11.153 { 00:32:11.153 "name": "BaseBdev2", 00:32:11.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:11.153 "is_configured": false, 00:32:11.153 "data_offset": 0, 00:32:11.153 "data_size": 0 00:32:11.153 } 00:32:11.153 ] 00:32:11.153 }' 00:32:11.153 02:38:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:11.153 02:38:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:12.091 02:38:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:32:12.091 [2024-07-11 02:38:02.487196] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:12.091 [2024-07-11 02:38:02.487323] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1830290 00:32:12.091 [2024-07-11 02:38:02.487336] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:32:12.091 [2024-07-11 02:38:02.487397] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18316c0 00:32:12.091 [2024-07-11 02:38:02.487475] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1830290 00:32:12.091 [2024-07-11 02:38:02.487485] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x1830290 00:32:12.091 [2024-07-11 02:38:02.487538] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:12.091 BaseBdev2 00:32:12.091 02:38:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:32:12.091 02:38:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:32:12.091 02:38:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:12.091 02:38:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:32:12.091 02:38:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:12.091 02:38:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:12.091 02:38:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:12.350 02:38:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:12.609 [ 00:32:12.609 { 00:32:12.609 "name": "BaseBdev2", 00:32:12.609 "aliases": [ 00:32:12.609 "dbda86f1-a0ea-4123-a9d9-e978325d3760" 00:32:12.609 ], 00:32:12.609 "product_name": "Malloc disk", 00:32:12.609 "block_size": 4128, 00:32:12.609 "num_blocks": 8192, 00:32:12.609 "uuid": "dbda86f1-a0ea-4123-a9d9-e978325d3760", 00:32:12.609 "md_size": 32, 00:32:12.609 "md_interleave": true, 00:32:12.609 "dif_type": 0, 00:32:12.609 "assigned_rate_limits": { 00:32:12.609 "rw_ios_per_sec": 0, 00:32:12.609 "rw_mbytes_per_sec": 0, 00:32:12.609 "r_mbytes_per_sec": 0, 00:32:12.609 "w_mbytes_per_sec": 0 00:32:12.609 }, 00:32:12.609 "claimed": true, 00:32:12.609 "claim_type": "exclusive_write", 00:32:12.609 "zoned": false, 00:32:12.609 "supported_io_types": { 00:32:12.609 "read": true, 00:32:12.609 "write": true, 00:32:12.609 "unmap": true, 00:32:12.609 "flush": true, 00:32:12.609 "reset": true, 00:32:12.609 "nvme_admin": false, 00:32:12.609 "nvme_io": false, 00:32:12.609 "nvme_io_md": false, 00:32:12.609 "write_zeroes": true, 00:32:12.609 "zcopy": true, 00:32:12.609 "get_zone_info": false, 00:32:12.609 "zone_management": false, 00:32:12.609 "zone_append": false, 00:32:12.609 "compare": false, 00:32:12.609 "compare_and_write": false, 00:32:12.609 "abort": true, 00:32:12.609 "seek_hole": false, 00:32:12.609 "seek_data": false, 00:32:12.609 "copy": true, 00:32:12.609 "nvme_iov_md": false 00:32:12.609 }, 00:32:12.609 "memory_domains": [ 00:32:12.609 { 00:32:12.609 "dma_device_id": "system", 00:32:12.609 "dma_device_type": 1 00:32:12.609 }, 00:32:12.609 { 00:32:12.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:12.609 "dma_device_type": 2 00:32:12.609 } 00:32:12.609 ], 00:32:12.609 "driver_specific": {} 00:32:12.609 } 00:32:12.609 ] 00:32:12.609 02:38:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:32:12.609 02:38:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:32:12.609 02:38:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:12.609 02:38:03 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:32:12.609 02:38:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:12.609 02:38:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:12.609 02:38:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:12.609 02:38:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:12.609 02:38:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:12.609 02:38:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:12.609 02:38:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:12.609 02:38:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:12.609 02:38:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:12.609 02:38:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:12.609 02:38:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:12.869 02:38:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:12.869 "name": "Existed_Raid", 00:32:12.869 "uuid": "32a95a2d-6750-4446-90e5-6ec1f59aa000", 00:32:12.869 "strip_size_kb": 0, 00:32:12.869 "state": "online", 00:32:12.869 "raid_level": "raid1", 00:32:12.869 "superblock": true, 00:32:12.869 "num_base_bdevs": 2, 00:32:12.869 "num_base_bdevs_discovered": 2, 00:32:12.869 "num_base_bdevs_operational": 2, 00:32:12.869 "base_bdevs_list": [ 00:32:12.869 { 00:32:12.869 "name": "BaseBdev1", 00:32:12.869 "uuid": "b40974ac-5784-4369-9d59-adfed8d42104", 00:32:12.869 "is_configured": true, 00:32:12.869 "data_offset": 256, 00:32:12.869 "data_size": 7936 00:32:12.869 }, 00:32:12.869 { 00:32:12.869 "name": "BaseBdev2", 00:32:12.869 "uuid": "dbda86f1-a0ea-4123-a9d9-e978325d3760", 00:32:12.869 "is_configured": true, 00:32:12.869 "data_offset": 256, 00:32:12.869 "data_size": 7936 00:32:12.869 } 00:32:12.869 ] 00:32:12.869 }' 00:32:12.869 02:38:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:12.869 02:38:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:14.247 02:38:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:32:14.247 02:38:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:32:14.247 02:38:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:14.247 02:38:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:14.247 02:38:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # 
local base_bdev_names 00:32:14.247 02:38:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:32:14.247 02:38:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:32:14.247 02:38:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:14.247 [2024-07-11 02:38:04.480823] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:14.247 02:38:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:14.247 "name": "Existed_Raid", 00:32:14.247 "aliases": [ 00:32:14.247 "32a95a2d-6750-4446-90e5-6ec1f59aa000" 00:32:14.247 ], 00:32:14.247 "product_name": "Raid Volume", 00:32:14.247 "block_size": 4128, 00:32:14.247 "num_blocks": 7936, 00:32:14.247 "uuid": "32a95a2d-6750-4446-90e5-6ec1f59aa000", 00:32:14.247 "md_size": 32, 00:32:14.247 "md_interleave": true, 00:32:14.247 "dif_type": 0, 00:32:14.247 "assigned_rate_limits": { 00:32:14.247 "rw_ios_per_sec": 0, 00:32:14.247 "rw_mbytes_per_sec": 0, 00:32:14.247 "r_mbytes_per_sec": 0, 00:32:14.247 "w_mbytes_per_sec": 0 00:32:14.247 }, 00:32:14.247 "claimed": false, 00:32:14.247 "zoned": false, 00:32:14.247 "supported_io_types": { 00:32:14.247 "read": true, 00:32:14.247 "write": true, 00:32:14.247 "unmap": false, 00:32:14.247 "flush": false, 00:32:14.247 "reset": true, 00:32:14.247 "nvme_admin": false, 00:32:14.247 "nvme_io": false, 00:32:14.247 "nvme_io_md": false, 00:32:14.247 "write_zeroes": true, 00:32:14.247 "zcopy": false, 00:32:14.247 "get_zone_info": false, 00:32:14.247 "zone_management": false, 00:32:14.247 "zone_append": false, 00:32:14.247 "compare": false, 00:32:14.247 "compare_and_write": false, 00:32:14.247 "abort": false, 00:32:14.247 "seek_hole": false, 00:32:14.247 "seek_data": false, 00:32:14.247 "copy": false, 00:32:14.247 "nvme_iov_md": false 00:32:14.247 }, 00:32:14.247 "memory_domains": [ 00:32:14.247 { 00:32:14.247 "dma_device_id": "system", 00:32:14.247 "dma_device_type": 1 00:32:14.247 }, 00:32:14.247 { 00:32:14.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:14.247 "dma_device_type": 2 00:32:14.247 }, 00:32:14.247 { 00:32:14.247 "dma_device_id": "system", 00:32:14.247 "dma_device_type": 1 00:32:14.247 }, 00:32:14.247 { 00:32:14.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:14.247 "dma_device_type": 2 00:32:14.247 } 00:32:14.247 ], 00:32:14.247 "driver_specific": { 00:32:14.247 "raid": { 00:32:14.247 "uuid": "32a95a2d-6750-4446-90e5-6ec1f59aa000", 00:32:14.247 "strip_size_kb": 0, 00:32:14.247 "state": "online", 00:32:14.247 "raid_level": "raid1", 00:32:14.247 "superblock": true, 00:32:14.247 "num_base_bdevs": 2, 00:32:14.247 "num_base_bdevs_discovered": 2, 00:32:14.247 "num_base_bdevs_operational": 2, 00:32:14.247 "base_bdevs_list": [ 00:32:14.247 { 00:32:14.247 "name": "BaseBdev1", 00:32:14.247 "uuid": "b40974ac-5784-4369-9d59-adfed8d42104", 00:32:14.247 "is_configured": true, 00:32:14.247 "data_offset": 256, 00:32:14.247 "data_size": 7936 00:32:14.247 }, 00:32:14.247 { 00:32:14.247 "name": "BaseBdev2", 00:32:14.247 "uuid": "dbda86f1-a0ea-4123-a9d9-e978325d3760", 00:32:14.247 "is_configured": true, 00:32:14.247 "data_offset": 256, 00:32:14.247 "data_size": 7936 00:32:14.247 } 00:32:14.247 ] 00:32:14.247 } 00:32:14.247 } 00:32:14.247 }' 00:32:14.247 02:38:04 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:14.247 02:38:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:32:14.247 BaseBdev2' 00:32:14.247 02:38:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:14.247 02:38:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:32:14.247 02:38:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:14.506 02:38:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:14.506 "name": "BaseBdev1", 00:32:14.506 "aliases": [ 00:32:14.506 "b40974ac-5784-4369-9d59-adfed8d42104" 00:32:14.506 ], 00:32:14.506 "product_name": "Malloc disk", 00:32:14.506 "block_size": 4128, 00:32:14.506 "num_blocks": 8192, 00:32:14.506 "uuid": "b40974ac-5784-4369-9d59-adfed8d42104", 00:32:14.506 "md_size": 32, 00:32:14.506 "md_interleave": true, 00:32:14.506 "dif_type": 0, 00:32:14.506 "assigned_rate_limits": { 00:32:14.506 "rw_ios_per_sec": 0, 00:32:14.506 "rw_mbytes_per_sec": 0, 00:32:14.506 "r_mbytes_per_sec": 0, 00:32:14.506 "w_mbytes_per_sec": 0 00:32:14.506 }, 00:32:14.506 "claimed": true, 00:32:14.506 "claim_type": "exclusive_write", 00:32:14.506 "zoned": false, 00:32:14.506 "supported_io_types": { 00:32:14.506 "read": true, 00:32:14.506 "write": true, 00:32:14.506 "unmap": true, 00:32:14.506 "flush": true, 00:32:14.506 "reset": true, 00:32:14.506 "nvme_admin": false, 00:32:14.506 "nvme_io": false, 00:32:14.506 "nvme_io_md": false, 00:32:14.506 "write_zeroes": true, 00:32:14.506 "zcopy": true, 00:32:14.506 "get_zone_info": false, 00:32:14.506 "zone_management": false, 00:32:14.506 "zone_append": false, 00:32:14.506 "compare": false, 00:32:14.506 "compare_and_write": false, 00:32:14.506 "abort": true, 00:32:14.506 "seek_hole": false, 00:32:14.506 "seek_data": false, 00:32:14.506 "copy": true, 00:32:14.506 "nvme_iov_md": false 00:32:14.506 }, 00:32:14.506 "memory_domains": [ 00:32:14.506 { 00:32:14.506 "dma_device_id": "system", 00:32:14.506 "dma_device_type": 1 00:32:14.506 }, 00:32:14.506 { 00:32:14.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:14.506 "dma_device_type": 2 00:32:14.506 } 00:32:14.506 ], 00:32:14.506 "driver_specific": {} 00:32:14.506 }' 00:32:14.506 02:38:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:14.506 02:38:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:14.765 02:38:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:32:14.765 02:38:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:14.765 02:38:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:14.765 02:38:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:14.765 02:38:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:14.765 02:38:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:14.765 02:38:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:32:14.765 02:38:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:15.024 02:38:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:15.024 02:38:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:15.024 02:38:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:15.024 02:38:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:32:15.024 02:38:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:15.284 02:38:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:15.284 "name": "BaseBdev2", 00:32:15.284 "aliases": [ 00:32:15.284 "dbda86f1-a0ea-4123-a9d9-e978325d3760" 00:32:15.284 ], 00:32:15.284 "product_name": "Malloc disk", 00:32:15.284 "block_size": 4128, 00:32:15.284 "num_blocks": 8192, 00:32:15.284 "uuid": "dbda86f1-a0ea-4123-a9d9-e978325d3760", 00:32:15.284 "md_size": 32, 00:32:15.284 "md_interleave": true, 00:32:15.284 "dif_type": 0, 00:32:15.284 "assigned_rate_limits": { 00:32:15.284 "rw_ios_per_sec": 0, 00:32:15.284 "rw_mbytes_per_sec": 0, 00:32:15.284 "r_mbytes_per_sec": 0, 00:32:15.284 "w_mbytes_per_sec": 0 00:32:15.284 }, 00:32:15.284 "claimed": true, 00:32:15.284 "claim_type": "exclusive_write", 00:32:15.284 "zoned": false, 00:32:15.284 "supported_io_types": { 00:32:15.284 "read": true, 00:32:15.284 "write": true, 00:32:15.284 "unmap": true, 00:32:15.284 "flush": true, 00:32:15.284 "reset": true, 00:32:15.284 "nvme_admin": false, 00:32:15.284 "nvme_io": false, 00:32:15.284 "nvme_io_md": false, 00:32:15.284 "write_zeroes": true, 00:32:15.284 "zcopy": true, 00:32:15.284 "get_zone_info": false, 00:32:15.284 "zone_management": false, 00:32:15.284 "zone_append": false, 00:32:15.284 "compare": false, 00:32:15.284 "compare_and_write": false, 00:32:15.284 "abort": true, 00:32:15.284 "seek_hole": false, 00:32:15.284 "seek_data": false, 00:32:15.284 "copy": true, 00:32:15.284 "nvme_iov_md": false 00:32:15.284 }, 00:32:15.284 "memory_domains": [ 00:32:15.284 { 00:32:15.284 "dma_device_id": "system", 00:32:15.284 "dma_device_type": 1 00:32:15.284 }, 00:32:15.284 { 00:32:15.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:15.284 "dma_device_type": 2 00:32:15.284 } 00:32:15.284 ], 00:32:15.284 "driver_specific": {} 00:32:15.284 }' 00:32:15.284 02:38:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:15.284 02:38:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:15.284 02:38:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:32:15.284 02:38:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:15.543 02:38:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:15.543 02:38:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 
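
verify_raid_bdev_properties then checks each base bdev field-by-field with jq: block_size, md_size, md_interleave and dif_type for BaseBdev1 above, and the same four for BaseBdev2 continuing just below. The loop at bdev_raid.sh@203-208 is essentially (condensed sketch):

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for name in BaseBdev1 BaseBdev2; do
        base_bdev_info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        [[ $(jq .block_size <<< "$base_bdev_info") == 4128 ]]    # 4096 data + 32 md
        [[ $(jq .md_size <<< "$base_bdev_info") == 32 ]]
        [[ $(jq .md_interleave <<< "$base_bdev_info") == true ]]
        [[ $(jq .dif_type <<< "$base_bdev_info") == 0 ]]
    done
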
00:32:15.543 02:38:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:15.543 02:38:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:15.543 02:38:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:32:15.543 02:38:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:15.801 02:38:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:15.801 02:38:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:15.801 02:38:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:16.061 [2024-07-11 02:38:06.257351] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:16.061 02:38:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:32:16.061 02:38:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:32:16.061 02:38:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:32:16.061 02:38:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:32:16.061 02:38:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:32:16.061 02:38:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:32:16.061 02:38:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:16.061 02:38:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:16.061 02:38:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:16.061 02:38:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:16.061 02:38:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:16.061 02:38:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:16.061 02:38:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:16.061 02:38:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:16.061 02:38:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:16.061 02:38:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:16.061 02:38:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:16.320 02:38:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:16.320 "name": "Existed_Raid", 00:32:16.320 "uuid": 
"32a95a2d-6750-4446-90e5-6ec1f59aa000", 00:32:16.320 "strip_size_kb": 0, 00:32:16.320 "state": "online", 00:32:16.320 "raid_level": "raid1", 00:32:16.320 "superblock": true, 00:32:16.320 "num_base_bdevs": 2, 00:32:16.320 "num_base_bdevs_discovered": 1, 00:32:16.320 "num_base_bdevs_operational": 1, 00:32:16.320 "base_bdevs_list": [ 00:32:16.320 { 00:32:16.320 "name": null, 00:32:16.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:16.320 "is_configured": false, 00:32:16.320 "data_offset": 256, 00:32:16.320 "data_size": 7936 00:32:16.320 }, 00:32:16.320 { 00:32:16.320 "name": "BaseBdev2", 00:32:16.320 "uuid": "dbda86f1-a0ea-4123-a9d9-e978325d3760", 00:32:16.320 "is_configured": true, 00:32:16.320 "data_offset": 256, 00:32:16.320 "data_size": 7936 00:32:16.320 } 00:32:16.320 ] 00:32:16.320 }' 00:32:16.320 02:38:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:16.320 02:38:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:17.258 02:38:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:32:17.258 02:38:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:17.258 02:38:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:17.258 02:38:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:32:17.258 02:38:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:32:17.258 02:38:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:17.258 02:38:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:32:17.826 [2024-07-11 02:38:08.071406] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:17.826 [2024-07-11 02:38:08.071494] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:17.826 [2024-07-11 02:38:08.084552] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:17.826 [2024-07-11 02:38:08.084588] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:17.826 [2024-07-11 02:38:08.084601] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1830290 name Existed_Raid, state offline 00:32:17.826 02:38:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:32:17.826 02:38:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:17.826 02:38:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:17.826 02:38:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:32:18.085 02:38:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:32:18.086 02:38:08 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:32:18.086 02:38:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:32:18.086 02:38:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 2057547 00:32:18.086 02:38:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 2057547 ']' 00:32:18.086 02:38:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 2057547 00:32:18.086 02:38:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:32:18.086 02:38:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:18.086 02:38:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2057547 00:32:18.086 02:38:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:18.086 02:38:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:18.086 02:38:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2057547' 00:32:18.086 killing process with pid 2057547 00:32:18.086 02:38:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 2057547 00:32:18.086 [2024-07-11 02:38:08.412046] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:18.086 02:38:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 2057547 00:32:18.086 [2024-07-11 02:38:08.412900] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:18.346 02:38:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:32:18.346 00:32:18.346 real 0m11.756s 00:32:18.346 user 0m21.133s 00:32:18.346 sys 0m2.099s 00:32:18.346 02:38:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:18.346 02:38:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:18.346 ************************************ 00:32:18.346 END TEST raid_state_function_test_sb_md_interleaved 00:32:18.346 ************************************ 00:32:18.346 02:38:08 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:32:18.346 02:38:08 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:32:18.346 02:38:08 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:32:18.346 02:38:08 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:18.346 02:38:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:18.346 ************************************ 00:32:18.346 START TEST raid_superblock_test_md_interleaved 00:32:18.346 ************************************ 00:32:18.346 02:38:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:32:18.346 02:38:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:32:18.346 02:38:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local 
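
The killprocess call traced above tears down the first test's app: it asserts the pid is still alive, refuses to signal anything running as sudo, then kills and reaps it so the exit status propagates. Simplified from the autotest_common.sh flow in the trace (error handling omitted):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                                    # still alive?
        [[ $(ps --no-headers -o comm= "$pid") != sudo ]]  # never kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                       # reap, propagate status
    }
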
num_base_bdevs=2 00:32:18.346 02:38:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:32:18.346 02:38:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:32:18.346 02:38:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:32:18.346 02:38:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:32:18.346 02:38:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:32:18.346 02:38:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:32:18.346 02:38:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:32:18.346 02:38:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:32:18.346 02:38:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:32:18.346 02:38:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:32:18.346 02:38:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:32:18.346 02:38:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:32:18.346 02:38:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:32:18.346 02:38:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=2059254 00:32:18.346 02:38:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 2059254 /var/tmp/spdk-raid.sock 00:32:18.346 02:38:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:32:18.346 02:38:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 2059254 ']' 00:32:18.346 02:38:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:18.346 02:38:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:18.346 02:38:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:18.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:18.346 02:38:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:18.346 02:38:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:18.346 [2024-07-11 02:38:08.755519] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
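
raid_superblock_test runs against its own bdev_svc instance: the app is launched on a dedicated RPC socket with bdev_raid debug logging, and waitforlisten blocks until the socket accepts RPCs. The launch pattern, condensed from the trace:

    /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -L bdev_raid &     # -L bdev_raid enables the *DEBUG* lines
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
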
00:32:18.346 [2024-07-11 02:38:08.755583] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2059254 ] 00:32:18.605 [2024-07-11 02:38:08.890774] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.605 [2024-07-11 02:38:08.940124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.605 [2024-07-11 02:38:09.003851] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:18.605 [2024-07-11 02:38:09.003882] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:19.549 02:38:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:19.549 02:38:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:32:19.549 02:38:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:32:19.549 02:38:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:32:19.549 02:38:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:32:19.549 02:38:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:32:19.549 02:38:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:32:19.549 02:38:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:19.549 02:38:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:32:19.549 02:38:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:19.549 02:38:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:32:19.808 malloc1 00:32:19.808 02:38:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:20.066 [2024-07-11 02:38:10.449875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:20.066 [2024-07-11 02:38:10.449925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:20.066 [2024-07-11 02:38:10.449946] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1834010 00:32:20.066 [2024-07-11 02:38:10.449965] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:20.066 [2024-07-11 02:38:10.451620] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:20.066 [2024-07-11 02:38:10.451646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:20.066 pt1 00:32:20.066 02:38:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:32:20.066 02:38:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:32:20.066 02:38:10 bdev_raid.raid_superblock_test_md_interleaved -- 
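
Each leg of the array is a malloc bdev wrapped in a passthru bdev. The create arguments line up with the geometry reported later: 32 MiB of 4096-byte blocks gives 32 * 1024 * 1024 / 4096 = 8192 blocks, and -m 32 -i asks for 32 bytes of metadata carried interleaved, which is why the bdevs report block_size 4096 + 32 = 4128. Condensed:

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create 32 4096 -m 32 -i -b malloc1   # 32 MiB => 8192 blocks of 4096+32
    $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
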
bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:32:20.066 02:38:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:32:20.066 02:38:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:32:20.066 02:38:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:20.066 02:38:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:32:20.066 02:38:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:20.066 02:38:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:32:20.325 malloc2 00:32:20.325 02:38:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:20.584 [2024-07-11 02:38:10.900089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:20.584 [2024-07-11 02:38:10.900133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:20.584 [2024-07-11 02:38:10.900151] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x182a580 00:32:20.584 [2024-07-11 02:38:10.900163] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:20.584 [2024-07-11 02:38:10.901500] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:20.584 [2024-07-11 02:38:10.901526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:20.584 pt2 00:32:20.584 02:38:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:32:20.585 02:38:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:32:20.585 02:38:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:32:20.844 [2024-07-11 02:38:11.088600] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:20.844 [2024-07-11 02:38:11.089895] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:20.844 [2024-07-11 02:38:11.090033] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x16914a0 00:32:20.844 [2024-07-11 02:38:11.090046] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:32:20.844 [2024-07-11 02:38:11.090110] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x168ec10 00:32:20.844 [2024-07-11 02:38:11.090192] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x16914a0 00:32:20.844 [2024-07-11 02:38:11.090201] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x16914a0 00:32:20.844 [2024-07-11 02:38:11.090257] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:20.844 02:38:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # 
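
The -s flag makes bdev_raid_create write an on-disk superblock to every base bdev, which is also why the assembled volume is smaller than its members: data_offset 256 reserves the superblock region on each leg, leaving 8192 - 256 = 7936 usable blocks, exactly the "blockcnt 7936, blocklen 4128" logged at configure time. Condensed:

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
    # -> raid_bdev1 comes up online with 7936 blocks (= 8192 - 256) of 4128 bytes
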
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:20.844 02:38:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:20.844 02:38:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:20.844 02:38:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:20.844 02:38:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:20.844 02:38:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:20.844 02:38:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:20.844 02:38:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:20.844 02:38:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:20.844 02:38:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:20.844 02:38:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:20.844 02:38:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:21.103 02:38:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:21.103 "name": "raid_bdev1", 00:32:21.103 "uuid": "5b2a9f18-81d0-440d-a761-ea97e5bed624", 00:32:21.103 "strip_size_kb": 0, 00:32:21.103 "state": "online", 00:32:21.103 "raid_level": "raid1", 00:32:21.103 "superblock": true, 00:32:21.103 "num_base_bdevs": 2, 00:32:21.103 "num_base_bdevs_discovered": 2, 00:32:21.103 "num_base_bdevs_operational": 2, 00:32:21.103 "base_bdevs_list": [ 00:32:21.103 { 00:32:21.103 "name": "pt1", 00:32:21.103 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:21.103 "is_configured": true, 00:32:21.103 "data_offset": 256, 00:32:21.103 "data_size": 7936 00:32:21.103 }, 00:32:21.103 { 00:32:21.103 "name": "pt2", 00:32:21.103 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:21.103 "is_configured": true, 00:32:21.103 "data_offset": 256, 00:32:21.103 "data_size": 7936 00:32:21.103 } 00:32:21.103 ] 00:32:21.103 }' 00:32:21.103 02:38:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:21.103 02:38:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:21.671 02:38:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:32:21.671 02:38:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:32:21.671 02:38:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:21.671 02:38:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:21.671 02:38:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:21.671 02:38:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:32:21.671 02:38:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:21.671 02:38:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:21.930 [2024-07-11 02:38:12.223848] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:21.930 02:38:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:21.930 "name": "raid_bdev1", 00:32:21.930 "aliases": [ 00:32:21.930 "5b2a9f18-81d0-440d-a761-ea97e5bed624" 00:32:21.930 ], 00:32:21.930 "product_name": "Raid Volume", 00:32:21.930 "block_size": 4128, 00:32:21.930 "num_blocks": 7936, 00:32:21.930 "uuid": "5b2a9f18-81d0-440d-a761-ea97e5bed624", 00:32:21.930 "md_size": 32, 00:32:21.930 "md_interleave": true, 00:32:21.930 "dif_type": 0, 00:32:21.930 "assigned_rate_limits": { 00:32:21.930 "rw_ios_per_sec": 0, 00:32:21.930 "rw_mbytes_per_sec": 0, 00:32:21.930 "r_mbytes_per_sec": 0, 00:32:21.930 "w_mbytes_per_sec": 0 00:32:21.930 }, 00:32:21.930 "claimed": false, 00:32:21.930 "zoned": false, 00:32:21.930 "supported_io_types": { 00:32:21.930 "read": true, 00:32:21.930 "write": true, 00:32:21.930 "unmap": false, 00:32:21.930 "flush": false, 00:32:21.930 "reset": true, 00:32:21.930 "nvme_admin": false, 00:32:21.930 "nvme_io": false, 00:32:21.930 "nvme_io_md": false, 00:32:21.930 "write_zeroes": true, 00:32:21.930 "zcopy": false, 00:32:21.930 "get_zone_info": false, 00:32:21.930 "zone_management": false, 00:32:21.930 "zone_append": false, 00:32:21.930 "compare": false, 00:32:21.930 "compare_and_write": false, 00:32:21.930 "abort": false, 00:32:21.930 "seek_hole": false, 00:32:21.930 "seek_data": false, 00:32:21.930 "copy": false, 00:32:21.930 "nvme_iov_md": false 00:32:21.930 }, 00:32:21.930 "memory_domains": [ 00:32:21.930 { 00:32:21.930 "dma_device_id": "system", 00:32:21.930 "dma_device_type": 1 00:32:21.930 }, 00:32:21.930 { 00:32:21.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:21.930 "dma_device_type": 2 00:32:21.930 }, 00:32:21.930 { 00:32:21.930 "dma_device_id": "system", 00:32:21.930 "dma_device_type": 1 00:32:21.930 }, 00:32:21.930 { 00:32:21.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:21.930 "dma_device_type": 2 00:32:21.930 } 00:32:21.930 ], 00:32:21.930 "driver_specific": { 00:32:21.930 "raid": { 00:32:21.930 "uuid": "5b2a9f18-81d0-440d-a761-ea97e5bed624", 00:32:21.930 "strip_size_kb": 0, 00:32:21.930 "state": "online", 00:32:21.930 "raid_level": "raid1", 00:32:21.930 "superblock": true, 00:32:21.930 "num_base_bdevs": 2, 00:32:21.930 "num_base_bdevs_discovered": 2, 00:32:21.930 "num_base_bdevs_operational": 2, 00:32:21.930 "base_bdevs_list": [ 00:32:21.930 { 00:32:21.930 "name": "pt1", 00:32:21.930 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:21.930 "is_configured": true, 00:32:21.930 "data_offset": 256, 00:32:21.930 "data_size": 7936 00:32:21.930 }, 00:32:21.930 { 00:32:21.930 "name": "pt2", 00:32:21.930 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:21.930 "is_configured": true, 00:32:21.930 "data_offset": 256, 00:32:21.930 "data_size": 7936 00:32:21.930 } 00:32:21.930 ] 00:32:21.930 } 00:32:21.930 } 00:32:21.930 }' 00:32:21.930 02:38:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:21.930 02:38:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:32:21.930 pt2' 00:32:21.930 
02:38:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:21.930 02:38:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:32:21.930 02:38:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:22.189 02:38:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:22.189 "name": "pt1", 00:32:22.189 "aliases": [ 00:32:22.189 "00000000-0000-0000-0000-000000000001" 00:32:22.189 ], 00:32:22.189 "product_name": "passthru", 00:32:22.189 "block_size": 4128, 00:32:22.189 "num_blocks": 8192, 00:32:22.189 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:22.189 "md_size": 32, 00:32:22.189 "md_interleave": true, 00:32:22.189 "dif_type": 0, 00:32:22.189 "assigned_rate_limits": { 00:32:22.189 "rw_ios_per_sec": 0, 00:32:22.189 "rw_mbytes_per_sec": 0, 00:32:22.189 "r_mbytes_per_sec": 0, 00:32:22.189 "w_mbytes_per_sec": 0 00:32:22.189 }, 00:32:22.189 "claimed": true, 00:32:22.189 "claim_type": "exclusive_write", 00:32:22.189 "zoned": false, 00:32:22.189 "supported_io_types": { 00:32:22.189 "read": true, 00:32:22.189 "write": true, 00:32:22.189 "unmap": true, 00:32:22.189 "flush": true, 00:32:22.189 "reset": true, 00:32:22.189 "nvme_admin": false, 00:32:22.189 "nvme_io": false, 00:32:22.189 "nvme_io_md": false, 00:32:22.189 "write_zeroes": true, 00:32:22.189 "zcopy": true, 00:32:22.189 "get_zone_info": false, 00:32:22.189 "zone_management": false, 00:32:22.189 "zone_append": false, 00:32:22.189 "compare": false, 00:32:22.189 "compare_and_write": false, 00:32:22.189 "abort": true, 00:32:22.189 "seek_hole": false, 00:32:22.189 "seek_data": false, 00:32:22.189 "copy": true, 00:32:22.189 "nvme_iov_md": false 00:32:22.189 }, 00:32:22.189 "memory_domains": [ 00:32:22.189 { 00:32:22.189 "dma_device_id": "system", 00:32:22.189 "dma_device_type": 1 00:32:22.189 }, 00:32:22.190 { 00:32:22.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:22.190 "dma_device_type": 2 00:32:22.190 } 00:32:22.190 ], 00:32:22.190 "driver_specific": { 00:32:22.190 "passthru": { 00:32:22.190 "name": "pt1", 00:32:22.190 "base_bdev_name": "malloc1" 00:32:22.190 } 00:32:22.190 } 00:32:22.190 }' 00:32:22.190 02:38:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:22.190 02:38:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:22.449 02:38:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:32:22.449 02:38:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:22.449 02:38:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:22.449 02:38:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:22.449 02:38:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:22.449 02:38:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:22.449 02:38:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:32:22.449 02:38:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:22.449 02:38:12 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:22.708 02:38:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:22.708 02:38:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:22.708 02:38:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:32:22.708 02:38:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:22.967 02:38:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:22.967 "name": "pt2", 00:32:22.967 "aliases": [ 00:32:22.967 "00000000-0000-0000-0000-000000000002" 00:32:22.967 ], 00:32:22.967 "product_name": "passthru", 00:32:22.967 "block_size": 4128, 00:32:22.967 "num_blocks": 8192, 00:32:22.967 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:22.967 "md_size": 32, 00:32:22.967 "md_interleave": true, 00:32:22.967 "dif_type": 0, 00:32:22.967 "assigned_rate_limits": { 00:32:22.967 "rw_ios_per_sec": 0, 00:32:22.967 "rw_mbytes_per_sec": 0, 00:32:22.967 "r_mbytes_per_sec": 0, 00:32:22.967 "w_mbytes_per_sec": 0 00:32:22.967 }, 00:32:22.967 "claimed": true, 00:32:22.967 "claim_type": "exclusive_write", 00:32:22.967 "zoned": false, 00:32:22.967 "supported_io_types": { 00:32:22.967 "read": true, 00:32:22.967 "write": true, 00:32:22.967 "unmap": true, 00:32:22.967 "flush": true, 00:32:22.967 "reset": true, 00:32:22.967 "nvme_admin": false, 00:32:22.967 "nvme_io": false, 00:32:22.967 "nvme_io_md": false, 00:32:22.967 "write_zeroes": true, 00:32:22.967 "zcopy": true, 00:32:22.967 "get_zone_info": false, 00:32:22.967 "zone_management": false, 00:32:22.967 "zone_append": false, 00:32:22.967 "compare": false, 00:32:22.967 "compare_and_write": false, 00:32:22.967 "abort": true, 00:32:22.967 "seek_hole": false, 00:32:22.967 "seek_data": false, 00:32:22.967 "copy": true, 00:32:22.967 "nvme_iov_md": false 00:32:22.967 }, 00:32:22.967 "memory_domains": [ 00:32:22.967 { 00:32:22.967 "dma_device_id": "system", 00:32:22.967 "dma_device_type": 1 00:32:22.967 }, 00:32:22.967 { 00:32:22.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:22.967 "dma_device_type": 2 00:32:22.967 } 00:32:22.967 ], 00:32:22.967 "driver_specific": { 00:32:22.967 "passthru": { 00:32:22.967 "name": "pt2", 00:32:22.967 "base_bdev_name": "malloc2" 00:32:22.967 } 00:32:22.968 } 00:32:22.968 }' 00:32:22.968 02:38:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:22.968 02:38:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:22.968 02:38:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:32:22.968 02:38:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:22.968 02:38:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:22.968 02:38:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:22.968 02:38:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:22.968 02:38:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:23.227 02:38:13 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:32:23.227 02:38:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:23.227 02:38:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:23.227 02:38:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:23.227 02:38:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:23.227 02:38:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:32:23.485 [2024-07-11 02:38:13.703837] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:23.485 02:38:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=5b2a9f18-81d0-440d-a761-ea97e5bed624 00:32:23.485 02:38:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z 5b2a9f18-81d0-440d-a761-ea97e5bed624 ']' 00:32:23.485 02:38:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:23.744 [2024-07-11 02:38:13.952219] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:23.744 [2024-07-11 02:38:13.952244] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:23.744 [2024-07-11 02:38:13.952303] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:23.744 [2024-07-11 02:38:13.952360] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:23.744 [2024-07-11 02:38:13.952372] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16914a0 name raid_bdev1, state offline 00:32:23.744 02:38:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:23.744 02:38:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:32:24.003 02:38:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:32:24.003 02:38:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:32:24.003 02:38:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:32:24.004 02:38:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:32:24.263 02:38:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:32:24.263 02:38:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:32:24.522 02:38:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:32:24.522 02:38:14 
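
Teardown mirrors setup: delete the raid bdev (its passthru members revert to unclaimed), drop both passthru bdevs, then confirm nothing passthru-flavored remains, which is what the jq predicate just below evaluates. Condensed:

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_delete raid_bdev1
    $rpc bdev_passthru_delete pt1
    $rpc bdev_passthru_delete pt2
    $rpc bdev_get_bdevs | jq -r '[.[] | select(.product_name == "passthru")] | any'   # -> false
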
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:32:24.782 02:38:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:32:24.782 02:38:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:32:24.782 02:38:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:32:24.782 02:38:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:32:24.782 02:38:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:24.782 02:38:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:24.782 02:38:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:24.782 02:38:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:24.782 02:38:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:24.782 02:38:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:24.782 02:38:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:24.782 02:38:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:32:24.782 02:38:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:32:24.782 [2024-07-11 02:38:15.187433] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:32:24.782 [2024-07-11 02:38:15.188815] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:32:24.782 [2024-07-11 02:38:15.188870] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:32:24.782 [2024-07-11 02:38:15.188909] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:32:24.782 [2024-07-11 02:38:15.188928] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:24.782 [2024-07-11 02:38:15.188937] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x168ece0 name raid_bdev1, state configuring 00:32:24.782 request: 00:32:24.782 { 00:32:24.782 "name": "raid_bdev1", 00:32:24.782 "raid_level": "raid1", 00:32:24.782 "base_bdevs": [ 00:32:24.782 "malloc1", 00:32:24.782 "malloc2" 00:32:24.782 ], 00:32:24.782 "superblock": false, 00:32:24.782 "method": 
"bdev_raid_create", 00:32:24.782 "req_id": 1 00:32:24.782 } 00:32:24.782 Got JSON-RPC error response 00:32:24.782 response: 00:32:24.782 { 00:32:24.782 "code": -17, 00:32:24.782 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:32:24.782 } 00:32:25.042 02:38:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:32:25.042 02:38:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:25.042 02:38:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:25.042 02:38:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:25.042 02:38:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:25.042 02:38:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:32:25.042 02:38:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:32:25.042 02:38:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:32:25.042 02:38:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:25.301 [2024-07-11 02:38:15.676665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:25.301 [2024-07-11 02:38:15.676712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:25.301 [2024-07-11 02:38:15.676730] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x168e8f0 00:32:25.301 [2024-07-11 02:38:15.676743] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:25.301 [2024-07-11 02:38:15.678127] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:25.301 [2024-07-11 02:38:15.678155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:25.301 [2024-07-11 02:38:15.678200] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:25.301 [2024-07-11 02:38:15.678226] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:25.301 pt1 00:32:25.301 02:38:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:32:25.301 02:38:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:25.301 02:38:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:25.301 02:38:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:25.301 02:38:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:25.301 02:38:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:25.301 02:38:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:25.301 02:38:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:32:25.301 02:38:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:25.301 02:38:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:25.301 02:38:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:25.302 02:38:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:25.561 02:38:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:25.561 "name": "raid_bdev1", 00:32:25.561 "uuid": "5b2a9f18-81d0-440d-a761-ea97e5bed624", 00:32:25.561 "strip_size_kb": 0, 00:32:25.561 "state": "configuring", 00:32:25.561 "raid_level": "raid1", 00:32:25.561 "superblock": true, 00:32:25.561 "num_base_bdevs": 2, 00:32:25.561 "num_base_bdevs_discovered": 1, 00:32:25.561 "num_base_bdevs_operational": 2, 00:32:25.561 "base_bdevs_list": [ 00:32:25.561 { 00:32:25.561 "name": "pt1", 00:32:25.561 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:25.561 "is_configured": true, 00:32:25.561 "data_offset": 256, 00:32:25.561 "data_size": 7936 00:32:25.561 }, 00:32:25.561 { 00:32:25.562 "name": null, 00:32:25.562 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:25.562 "is_configured": false, 00:32:25.562 "data_offset": 256, 00:32:25.562 "data_size": 7936 00:32:25.562 } 00:32:25.562 ] 00:32:25.562 }' 00:32:25.562 02:38:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:25.562 02:38:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:26.130 02:38:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:32:26.130 02:38:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:32:26.130 02:38:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:32:26.130 02:38:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:26.390 [2024-07-11 02:38:16.699522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:26.390 [2024-07-11 02:38:16.699570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:26.390 [2024-07-11 02:38:16.699589] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x168f4b0 00:32:26.390 [2024-07-11 02:38:16.699601] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:26.390 [2024-07-11 02:38:16.699755] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:26.390 [2024-07-11 02:38:16.699777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:26.390 [2024-07-11 02:38:16.699820] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:26.390 [2024-07-11 02:38:16.699839] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:26.390 [2024-07-11 02:38:16.699920] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1693900 00:32:26.390 [2024-07-11 02:38:16.699930] 
bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:32:26.390 [2024-07-11 02:38:16.699983] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1695110 00:32:26.390 [2024-07-11 02:38:16.700055] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1693900 00:32:26.390 [2024-07-11 02:38:16.700065] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1693900 00:32:26.390 [2024-07-11 02:38:16.700120] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:26.390 pt2 00:32:26.390 02:38:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:32:26.390 02:38:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:32:26.390 02:38:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:26.390 02:38:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:26.390 02:38:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:26.390 02:38:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:26.390 02:38:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:26.390 02:38:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:26.390 02:38:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:26.390 02:38:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:26.390 02:38:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:26.390 02:38:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:26.390 02:38:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:26.390 02:38:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:26.648 02:38:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:26.648 "name": "raid_bdev1", 00:32:26.648 "uuid": "5b2a9f18-81d0-440d-a761-ea97e5bed624", 00:32:26.648 "strip_size_kb": 0, 00:32:26.648 "state": "online", 00:32:26.648 "raid_level": "raid1", 00:32:26.648 "superblock": true, 00:32:26.648 "num_base_bdevs": 2, 00:32:26.648 "num_base_bdevs_discovered": 2, 00:32:26.648 "num_base_bdevs_operational": 2, 00:32:26.648 "base_bdevs_list": [ 00:32:26.648 { 00:32:26.648 "name": "pt1", 00:32:26.648 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:26.648 "is_configured": true, 00:32:26.648 "data_offset": 256, 00:32:26.648 "data_size": 7936 00:32:26.648 }, 00:32:26.648 { 00:32:26.648 "name": "pt2", 00:32:26.648 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:26.648 "is_configured": true, 00:32:26.648 "data_offset": 256, 00:32:26.648 "data_size": 7936 00:32:26.648 } 00:32:26.648 ] 00:32:26.648 }' 00:32:26.648 02:38:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:26.648 02:38:16 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:27.214 02:38:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:32:27.214 02:38:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:32:27.214 02:38:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:27.214 02:38:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:27.214 02:38:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:27.214 02:38:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:32:27.214 02:38:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:27.214 02:38:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:27.472 [2024-07-11 02:38:17.734500] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:27.472 02:38:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:27.472 "name": "raid_bdev1", 00:32:27.472 "aliases": [ 00:32:27.472 "5b2a9f18-81d0-440d-a761-ea97e5bed624" 00:32:27.472 ], 00:32:27.472 "product_name": "Raid Volume", 00:32:27.472 "block_size": 4128, 00:32:27.472 "num_blocks": 7936, 00:32:27.472 "uuid": "5b2a9f18-81d0-440d-a761-ea97e5bed624", 00:32:27.472 "md_size": 32, 00:32:27.472 "md_interleave": true, 00:32:27.472 "dif_type": 0, 00:32:27.472 "assigned_rate_limits": { 00:32:27.472 "rw_ios_per_sec": 0, 00:32:27.472 "rw_mbytes_per_sec": 0, 00:32:27.472 "r_mbytes_per_sec": 0, 00:32:27.472 "w_mbytes_per_sec": 0 00:32:27.472 }, 00:32:27.472 "claimed": false, 00:32:27.472 "zoned": false, 00:32:27.472 "supported_io_types": { 00:32:27.472 "read": true, 00:32:27.472 "write": true, 00:32:27.472 "unmap": false, 00:32:27.472 "flush": false, 00:32:27.472 "reset": true, 00:32:27.472 "nvme_admin": false, 00:32:27.472 "nvme_io": false, 00:32:27.472 "nvme_io_md": false, 00:32:27.472 "write_zeroes": true, 00:32:27.472 "zcopy": false, 00:32:27.472 "get_zone_info": false, 00:32:27.472 "zone_management": false, 00:32:27.472 "zone_append": false, 00:32:27.472 "compare": false, 00:32:27.472 "compare_and_write": false, 00:32:27.472 "abort": false, 00:32:27.472 "seek_hole": false, 00:32:27.472 "seek_data": false, 00:32:27.472 "copy": false, 00:32:27.472 "nvme_iov_md": false 00:32:27.472 }, 00:32:27.472 "memory_domains": [ 00:32:27.472 { 00:32:27.472 "dma_device_id": "system", 00:32:27.472 "dma_device_type": 1 00:32:27.472 }, 00:32:27.472 { 00:32:27.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:27.472 "dma_device_type": 2 00:32:27.472 }, 00:32:27.472 { 00:32:27.472 "dma_device_id": "system", 00:32:27.472 "dma_device_type": 1 00:32:27.472 }, 00:32:27.472 { 00:32:27.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:27.472 "dma_device_type": 2 00:32:27.472 } 00:32:27.472 ], 00:32:27.472 "driver_specific": { 00:32:27.472 "raid": { 00:32:27.472 "uuid": "5b2a9f18-81d0-440d-a761-ea97e5bed624", 00:32:27.472 "strip_size_kb": 0, 00:32:27.472 "state": "online", 00:32:27.472 "raid_level": "raid1", 00:32:27.472 "superblock": true, 00:32:27.472 "num_base_bdevs": 2, 00:32:27.472 
"num_base_bdevs_discovered": 2, 00:32:27.472 "num_base_bdevs_operational": 2, 00:32:27.472 "base_bdevs_list": [ 00:32:27.472 { 00:32:27.472 "name": "pt1", 00:32:27.472 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:27.472 "is_configured": true, 00:32:27.472 "data_offset": 256, 00:32:27.472 "data_size": 7936 00:32:27.472 }, 00:32:27.472 { 00:32:27.472 "name": "pt2", 00:32:27.472 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:27.472 "is_configured": true, 00:32:27.473 "data_offset": 256, 00:32:27.473 "data_size": 7936 00:32:27.473 } 00:32:27.473 ] 00:32:27.473 } 00:32:27.473 } 00:32:27.473 }' 00:32:27.473 02:38:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:27.473 02:38:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:32:27.473 pt2' 00:32:27.473 02:38:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:27.473 02:38:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:32:27.473 02:38:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:27.731 02:38:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:27.731 "name": "pt1", 00:32:27.731 "aliases": [ 00:32:27.731 "00000000-0000-0000-0000-000000000001" 00:32:27.731 ], 00:32:27.731 "product_name": "passthru", 00:32:27.731 "block_size": 4128, 00:32:27.731 "num_blocks": 8192, 00:32:27.731 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:27.731 "md_size": 32, 00:32:27.731 "md_interleave": true, 00:32:27.731 "dif_type": 0, 00:32:27.731 "assigned_rate_limits": { 00:32:27.731 "rw_ios_per_sec": 0, 00:32:27.731 "rw_mbytes_per_sec": 0, 00:32:27.731 "r_mbytes_per_sec": 0, 00:32:27.731 "w_mbytes_per_sec": 0 00:32:27.731 }, 00:32:27.731 "claimed": true, 00:32:27.731 "claim_type": "exclusive_write", 00:32:27.731 "zoned": false, 00:32:27.731 "supported_io_types": { 00:32:27.731 "read": true, 00:32:27.731 "write": true, 00:32:27.731 "unmap": true, 00:32:27.731 "flush": true, 00:32:27.731 "reset": true, 00:32:27.731 "nvme_admin": false, 00:32:27.731 "nvme_io": false, 00:32:27.731 "nvme_io_md": false, 00:32:27.731 "write_zeroes": true, 00:32:27.731 "zcopy": true, 00:32:27.731 "get_zone_info": false, 00:32:27.731 "zone_management": false, 00:32:27.731 "zone_append": false, 00:32:27.731 "compare": false, 00:32:27.731 "compare_and_write": false, 00:32:27.731 "abort": true, 00:32:27.731 "seek_hole": false, 00:32:27.731 "seek_data": false, 00:32:27.731 "copy": true, 00:32:27.731 "nvme_iov_md": false 00:32:27.731 }, 00:32:27.731 "memory_domains": [ 00:32:27.731 { 00:32:27.731 "dma_device_id": "system", 00:32:27.731 "dma_device_type": 1 00:32:27.731 }, 00:32:27.731 { 00:32:27.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:27.731 "dma_device_type": 2 00:32:27.731 } 00:32:27.731 ], 00:32:27.731 "driver_specific": { 00:32:27.731 "passthru": { 00:32:27.731 "name": "pt1", 00:32:27.731 "base_bdev_name": "malloc1" 00:32:27.731 } 00:32:27.731 } 00:32:27.731 }' 00:32:27.731 02:38:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:27.731 02:38:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:32:27.731 02:38:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:32:27.731 02:38:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:27.990 02:38:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:27.990 02:38:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:27.990 02:38:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:27.990 02:38:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:27.990 02:38:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:32:27.990 02:38:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:27.990 02:38:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:27.990 02:38:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:27.990 02:38:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:27.990 02:38:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:32:27.990 02:38:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:28.248 02:38:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:28.248 "name": "pt2", 00:32:28.248 "aliases": [ 00:32:28.248 "00000000-0000-0000-0000-000000000002" 00:32:28.248 ], 00:32:28.248 "product_name": "passthru", 00:32:28.248 "block_size": 4128, 00:32:28.248 "num_blocks": 8192, 00:32:28.248 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:28.248 "md_size": 32, 00:32:28.248 "md_interleave": true, 00:32:28.248 "dif_type": 0, 00:32:28.248 "assigned_rate_limits": { 00:32:28.248 "rw_ios_per_sec": 0, 00:32:28.248 "rw_mbytes_per_sec": 0, 00:32:28.248 "r_mbytes_per_sec": 0, 00:32:28.248 "w_mbytes_per_sec": 0 00:32:28.248 }, 00:32:28.248 "claimed": true, 00:32:28.248 "claim_type": "exclusive_write", 00:32:28.248 "zoned": false, 00:32:28.248 "supported_io_types": { 00:32:28.248 "read": true, 00:32:28.248 "write": true, 00:32:28.248 "unmap": true, 00:32:28.248 "flush": true, 00:32:28.248 "reset": true, 00:32:28.249 "nvme_admin": false, 00:32:28.249 "nvme_io": false, 00:32:28.249 "nvme_io_md": false, 00:32:28.249 "write_zeroes": true, 00:32:28.249 "zcopy": true, 00:32:28.249 "get_zone_info": false, 00:32:28.249 "zone_management": false, 00:32:28.249 "zone_append": false, 00:32:28.249 "compare": false, 00:32:28.249 "compare_and_write": false, 00:32:28.249 "abort": true, 00:32:28.249 "seek_hole": false, 00:32:28.249 "seek_data": false, 00:32:28.249 "copy": true, 00:32:28.249 "nvme_iov_md": false 00:32:28.249 }, 00:32:28.249 "memory_domains": [ 00:32:28.249 { 00:32:28.249 "dma_device_id": "system", 00:32:28.249 "dma_device_type": 1 00:32:28.249 }, 00:32:28.249 { 00:32:28.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:28.249 "dma_device_type": 2 00:32:28.249 } 00:32:28.249 ], 00:32:28.249 "driver_specific": { 00:32:28.249 "passthru": { 00:32:28.249 "name": "pt2", 00:32:28.249 "base_bdev_name": "malloc2" 00:32:28.249 } 00:32:28.249 } 00:32:28.249 }' 00:32:28.249 02:38:18 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:28.507 02:38:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:28.507 02:38:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:32:28.507 02:38:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:28.507 02:38:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:28.507 02:38:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:28.507 02:38:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:28.507 02:38:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:28.507 02:38:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:32:28.507 02:38:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:28.765 02:38:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:28.766 02:38:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:28.766 02:38:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:28.766 02:38:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:32:29.024 [2024-07-11 02:38:19.230486] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:29.024 02:38:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' 5b2a9f18-81d0-440d-a761-ea97e5bed624 '!=' 5b2a9f18-81d0-440d-a761-ea97e5bed624 ']' 00:32:29.024 02:38:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:32:29.024 02:38:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:32:29.024 02:38:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:32:29.024 02:38:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:32:29.284 [2024-07-11 02:38:19.486928] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:32:29.284 02:38:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:29.284 02:38:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:29.284 02:38:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:29.284 02:38:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:29.284 02:38:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:29.284 02:38:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:29.284 02:38:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:29.284 02:38:19 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:29.284 02:38:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:29.284 02:38:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:29.284 02:38:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:29.284 02:38:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:29.543 02:38:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:29.543 "name": "raid_bdev1", 00:32:29.543 "uuid": "5b2a9f18-81d0-440d-a761-ea97e5bed624", 00:32:29.543 "strip_size_kb": 0, 00:32:29.543 "state": "online", 00:32:29.543 "raid_level": "raid1", 00:32:29.543 "superblock": true, 00:32:29.543 "num_base_bdevs": 2, 00:32:29.543 "num_base_bdevs_discovered": 1, 00:32:29.543 "num_base_bdevs_operational": 1, 00:32:29.543 "base_bdevs_list": [ 00:32:29.543 { 00:32:29.543 "name": null, 00:32:29.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:29.543 "is_configured": false, 00:32:29.543 "data_offset": 256, 00:32:29.543 "data_size": 7936 00:32:29.543 }, 00:32:29.543 { 00:32:29.543 "name": "pt2", 00:32:29.543 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:29.543 "is_configured": true, 00:32:29.543 "data_offset": 256, 00:32:29.543 "data_size": 7936 00:32:29.543 } 00:32:29.543 ] 00:32:29.543 }' 00:32:29.543 02:38:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:29.543 02:38:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:30.111 02:38:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:30.370 [2024-07-11 02:38:20.537700] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:30.370 [2024-07-11 02:38:20.537726] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:30.370 [2024-07-11 02:38:20.537787] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:30.370 [2024-07-11 02:38:20.537834] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:30.370 [2024-07-11 02:38:20.537847] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1693900 name raid_bdev1, state offline 00:32:30.370 02:38:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:30.370 02:38:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:32:30.370 02:38:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:32:30.370 02:38:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:32:30.370 02:38:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:32:30.370 02:38:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 
00:32:30.370 02:38:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:32:30.630 02:38:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:32:30.630 02:38:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:32:30.630 02:38:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:32:30.630 02:38:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:32:30.630 02:38:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@518 -- # i=1 00:32:30.630 02:38:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:30.890 [2024-07-11 02:38:21.075086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:30.890 [2024-07-11 02:38:21.075128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:30.890 [2024-07-11 02:38:21.075144] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1692650 00:32:30.890 [2024-07-11 02:38:21.075156] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:30.890 [2024-07-11 02:38:21.076507] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:30.890 [2024-07-11 02:38:21.076532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:30.890 [2024-07-11 02:38:21.076576] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:30.890 [2024-07-11 02:38:21.076600] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:30.890 [2024-07-11 02:38:21.076661] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x16945d0 00:32:30.890 [2024-07-11 02:38:21.076671] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:32:30.890 [2024-07-11 02:38:21.076725] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x168f980 00:32:30.890 [2024-07-11 02:38:21.076807] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x16945d0 00:32:30.890 [2024-07-11 02:38:21.076817] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x16945d0 00:32:30.890 [2024-07-11 02:38:21.076870] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:30.890 pt2 00:32:30.890 02:38:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:30.890 02:38:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:30.890 02:38:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:30.890 02:38:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:30.890 02:38:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:30.890 02:38:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=1 00:32:30.890 02:38:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:30.890 02:38:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:30.890 02:38:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:30.890 02:38:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:30.890 02:38:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:30.890 02:38:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:31.459 02:38:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:31.459 "name": "raid_bdev1", 00:32:31.459 "uuid": "5b2a9f18-81d0-440d-a761-ea97e5bed624", 00:32:31.459 "strip_size_kb": 0, 00:32:31.459 "state": "online", 00:32:31.459 "raid_level": "raid1", 00:32:31.459 "superblock": true, 00:32:31.459 "num_base_bdevs": 2, 00:32:31.459 "num_base_bdevs_discovered": 1, 00:32:31.459 "num_base_bdevs_operational": 1, 00:32:31.459 "base_bdevs_list": [ 00:32:31.459 { 00:32:31.459 "name": null, 00:32:31.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:31.459 "is_configured": false, 00:32:31.459 "data_offset": 256, 00:32:31.459 "data_size": 7936 00:32:31.459 }, 00:32:31.459 { 00:32:31.459 "name": "pt2", 00:32:31.459 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:31.459 "is_configured": true, 00:32:31.459 "data_offset": 256, 00:32:31.459 "data_size": 7936 00:32:31.459 } 00:32:31.459 ] 00:32:31.459 }' 00:32:31.459 02:38:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:31.459 02:38:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:32.027 02:38:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:32.027 [2024-07-11 02:38:22.438881] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:32.027 [2024-07-11 02:38:22.438907] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:32.027 [2024-07-11 02:38:22.438960] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:32.027 [2024-07-11 02:38:22.439004] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:32.027 [2024-07-11 02:38:22.439016] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16945d0 name raid_bdev1, state offline 00:32:32.287 02:38:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:32.287 02:38:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:32:32.287 02:38:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:32:32.287 02:38:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:32:32.287 02:38:22 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:32:32.287 02:38:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:32.546 [2024-07-11 02:38:22.932172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:32.546 [2024-07-11 02:38:22.932216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:32.546 [2024-07-11 02:38:22.932233] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1690c00 00:32:32.546 [2024-07-11 02:38:22.932245] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:32.546 [2024-07-11 02:38:22.933609] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:32.546 [2024-07-11 02:38:22.933636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:32.546 [2024-07-11 02:38:22.933679] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:32.546 [2024-07-11 02:38:22.933705] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:32.546 [2024-07-11 02:38:22.933788] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:32:32.546 [2024-07-11 02:38:22.933802] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:32.546 [2024-07-11 02:38:22.933816] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16934f0 name raid_bdev1, state configuring 00:32:32.546 [2024-07-11 02:38:22.933843] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:32.546 [2024-07-11 02:38:22.933893] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1697330 00:32:32.546 [2024-07-11 02:38:22.933903] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:32:32.546 [2024-07-11 02:38:22.933959] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1693300 00:32:32.546 [2024-07-11 02:38:22.934031] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1697330 00:32:32.546 [2024-07-11 02:38:22.934040] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1697330 00:32:32.546 [2024-07-11 02:38:22.934100] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:32.546 pt1 00:32:32.546 02:38:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:32:32.546 02:38:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:32.546 02:38:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:32.546 02:38:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:32.546 02:38:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:32.546 02:38:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:32.546 02:38:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:32.546 02:38:22 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:32.546 02:38:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:32.805 02:38:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:32.805 02:38:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:32.805 02:38:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:32.805 02:38:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:32.805 02:38:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:32.805 "name": "raid_bdev1", 00:32:32.805 "uuid": "5b2a9f18-81d0-440d-a761-ea97e5bed624", 00:32:32.805 "strip_size_kb": 0, 00:32:32.805 "state": "online", 00:32:32.805 "raid_level": "raid1", 00:32:32.805 "superblock": true, 00:32:32.805 "num_base_bdevs": 2, 00:32:32.805 "num_base_bdevs_discovered": 1, 00:32:32.805 "num_base_bdevs_operational": 1, 00:32:32.805 "base_bdevs_list": [ 00:32:32.805 { 00:32:32.805 "name": null, 00:32:32.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:32.805 "is_configured": false, 00:32:32.805 "data_offset": 256, 00:32:32.805 "data_size": 7936 00:32:32.805 }, 00:32:32.805 { 00:32:32.805 "name": "pt2", 00:32:32.805 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:32.805 "is_configured": true, 00:32:32.805 "data_offset": 256, 00:32:32.805 "data_size": 7936 00:32:32.805 } 00:32:32.805 ] 00:32:32.805 }' 00:32:32.805 02:38:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:32.805 02:38:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:33.741 02:38:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:32:33.741 02:38:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:32:33.741 02:38:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:32:33.741 02:38:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:33.741 02:38:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:32:34.000 [2024-07-11 02:38:24.320107] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:34.000 02:38:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 5b2a9f18-81d0-440d-a761-ea97e5bed624 '!=' 5b2a9f18-81d0-440d-a761-ea97e5bed624 ']' 00:32:34.000 02:38:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 2059254 00:32:34.000 02:38:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 2059254 ']' 00:32:34.000 02:38:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 2059254 00:32:34.000 02:38:24 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:32:34.000 02:38:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:34.000 02:38:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2059254 00:32:34.000 02:38:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:34.000 02:38:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:34.000 02:38:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2059254' 00:32:34.000 killing process with pid 2059254 00:32:34.000 02:38:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@967 -- # kill 2059254 00:32:34.000 [2024-07-11 02:38:24.391607] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:34.000 [2024-07-11 02:38:24.391659] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:34.000 [2024-07-11 02:38:24.391704] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:34.000 [2024-07-11 02:38:24.391717] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1697330 name raid_bdev1, state offline 00:32:34.000 02:38:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # wait 2059254 00:32:34.000 [2024-07-11 02:38:24.408350] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:34.259 02:38:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:32:34.259 00:32:34.259 real 0m15.899s 00:32:34.259 user 0m28.791s 00:32:34.259 sys 0m3.003s 00:32:34.259 02:38:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:34.259 02:38:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:34.259 ************************************ 00:32:34.259 END TEST raid_superblock_test_md_interleaved 00:32:34.259 ************************************ 00:32:34.259 02:38:24 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:32:34.259 02:38:24 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:32:34.259 02:38:24 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:32:34.259 02:38:24 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:34.259 02:38:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:34.259 ************************************ 00:32:34.259 START TEST raid_rebuild_test_sb_md_interleaved 00:32:34.259 ************************************ 00:32:34.259 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false false 00:32:34.259 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:32:34.259 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:32:34.259 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:32:34.259 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:32:34.259 
02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:32:34.519 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:32:34.519 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:34.519 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:32:34.519 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:32:34.519 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:34.519 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:32:34.519 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:32:34.519 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:34.519 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:34.519 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:32:34.519 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:32:34.519 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:32:34.519 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:32:34.519 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:32:34.519 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:32:34.519 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:32:34.519 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:32:34.519 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:32:34.519 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:32:34.519 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=2061596 00:32:34.519 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 2061596 /var/tmp/spdk-raid.sock 00:32:34.519 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:34.519 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 2061596 ']' 00:32:34.519 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:34.519 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:34.519 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:32:34.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:34.519 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:34.519 02:38:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:34.519 [2024-07-11 02:38:24.749714] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:32:34.519 [2024-07-11 02:38:24.749787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2061596 ] 00:32:34.519 I/O size of 3145728 is greater than zero copy threshold (65536). 00:32:34.519 Zero copy mechanism will not be used. 00:32:34.519 [2024-07-11 02:38:24.886301] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.519 [2024-07-11 02:38:24.937078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:34.779 [2024-07-11 02:38:25.004907] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:34.779 [2024-07-11 02:38:25.004936] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:35.348 02:38:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:35.348 02:38:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:32:35.348 02:38:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:35.348 02:38:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:32:35.608 BaseBdev1_malloc 00:32:35.608 02:38:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:35.869 [2024-07-11 02:38:26.152754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:35.869 [2024-07-11 02:38:26.152807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:35.869 [2024-07-11 02:38:26.152833] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x244b110 00:32:35.869 [2024-07-11 02:38:26.152845] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:35.869 [2024-07-11 02:38:26.154350] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:35.869 [2024-07-11 02:38:26.154377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:35.869 BaseBdev1 00:32:35.869 02:38:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:35.869 02:38:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:32:36.176 BaseBdev2_malloc 00:32:36.176 02:38:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:36.477 [2024-07-11 02:38:26.651056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:36.477 [2024-07-11 02:38:26.651103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:36.477 [2024-07-11 02:38:26.651124] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2441ba0 00:32:36.477 [2024-07-11 02:38:26.651137] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:36.477 [2024-07-11 02:38:26.652556] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:36.477 [2024-07-11 02:38:26.652583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:36.477 BaseBdev2 00:32:36.477 02:38:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:32:36.737 spare_malloc 00:32:36.737 02:38:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:36.737 spare_delay 00:32:36.996 02:38:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:36.996 [2024-07-11 02:38:27.393846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:36.996 [2024-07-11 02:38:27.393890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:36.996 [2024-07-11 02:38:27.393915] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22aac80 00:32:36.996 [2024-07-11 02:38:27.393928] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:36.996 [2024-07-11 02:38:27.395340] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:36.996 [2024-07-11 02:38:27.395366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:36.996 spare 00:32:36.996 02:38:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:32:37.255 [2024-07-11 02:38:27.638519] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:37.255 [2024-07-11 02:38:27.639835] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:37.255 [2024-07-11 02:38:27.639996] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x22ac3e0 00:32:37.255 [2024-07-11 02:38:27.640010] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:32:37.255 [2024-07-11 02:38:27.640085] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22ae300 00:32:37.255 [2024-07-11 02:38:27.640168] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x22ac3e0 00:32:37.255 [2024-07-11 02:38:27.640178] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x22ac3e0 00:32:37.255 [2024-07-11 02:38:27.640237] bdev_raid.c: 
331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:37.255 02:38:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:37.255 02:38:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:37.255 02:38:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:37.255 02:38:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:37.255 02:38:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:37.255 02:38:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:37.255 02:38:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:37.255 02:38:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:37.255 02:38:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:37.255 02:38:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:37.255 02:38:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:37.255 02:38:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:37.515 02:38:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:37.515 "name": "raid_bdev1", 00:32:37.515 "uuid": "081267ec-c90b-417c-b9be-761fa4f93f92", 00:32:37.515 "strip_size_kb": 0, 00:32:37.515 "state": "online", 00:32:37.515 "raid_level": "raid1", 00:32:37.515 "superblock": true, 00:32:37.515 "num_base_bdevs": 2, 00:32:37.515 "num_base_bdevs_discovered": 2, 00:32:37.515 "num_base_bdevs_operational": 2, 00:32:37.515 "base_bdevs_list": [ 00:32:37.515 { 00:32:37.515 "name": "BaseBdev1", 00:32:37.515 "uuid": "53925794-04ac-5356-b1f9-d12cea512847", 00:32:37.515 "is_configured": true, 00:32:37.515 "data_offset": 256, 00:32:37.515 "data_size": 7936 00:32:37.515 }, 00:32:37.515 { 00:32:37.515 "name": "BaseBdev2", 00:32:37.515 "uuid": "cd5f5e73-98cc-5f98-b01b-95cf167fe6a7", 00:32:37.515 "is_configured": true, 00:32:37.515 "data_offset": 256, 00:32:37.515 "data_size": 7936 00:32:37.515 } 00:32:37.515 ] 00:32:37.515 }' 00:32:37.515 02:38:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:37.515 02:38:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:38.453 02:38:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:38.453 02:38:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:32:38.453 [2024-07-11 02:38:28.669470] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:38.453 02:38:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:32:38.453 02:38:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:38.453 02:38:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:38.712 02:38:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:32:38.712 02:38:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:32:38.712 02:38:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:32:38.712 02:38:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:32:38.971 [2024-07-11 02:38:29.166530] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:38.971 02:38:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:38.971 02:38:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:38.971 02:38:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:38.971 02:38:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:38.971 02:38:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:38.971 02:38:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:38.971 02:38:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:38.971 02:38:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:38.971 02:38:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:38.971 02:38:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:38.971 02:38:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:38.971 02:38:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:39.230 02:38:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:39.230 "name": "raid_bdev1", 00:32:39.230 "uuid": "081267ec-c90b-417c-b9be-761fa4f93f92", 00:32:39.230 "strip_size_kb": 0, 00:32:39.230 "state": "online", 00:32:39.230 "raid_level": "raid1", 00:32:39.230 "superblock": true, 00:32:39.230 "num_base_bdevs": 2, 00:32:39.230 "num_base_bdevs_discovered": 1, 00:32:39.230 "num_base_bdevs_operational": 1, 00:32:39.230 "base_bdevs_list": [ 00:32:39.230 { 00:32:39.230 "name": null, 00:32:39.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:39.230 "is_configured": false, 00:32:39.230 "data_offset": 256, 00:32:39.230 "data_size": 7936 00:32:39.230 }, 00:32:39.230 { 00:32:39.230 "name": "BaseBdev2", 00:32:39.230 "uuid": "cd5f5e73-98cc-5f98-b01b-95cf167fe6a7", 00:32:39.230 "is_configured": true, 00:32:39.230 "data_offset": 256, 00:32:39.230 "data_size": 7936 00:32:39.230 } 00:32:39.230 ] 00:32:39.230 }' 00:32:39.230 02:38:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:39.230 02:38:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:39.798 02:38:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:39.798 [2024-07-11 02:38:30.177246] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:39.798 [2024-07-11 02:38:30.180742] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22acaa0 00:32:39.798 [2024-07-11 02:38:30.182941] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:39.798 02:38:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # sleep 1 00:32:41.171 02:38:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:41.171 02:38:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:41.171 02:38:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:41.171 02:38:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:41.171 02:38:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:41.171 02:38:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:41.171 02:38:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:41.171 02:38:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:41.171 "name": "raid_bdev1", 00:32:41.171 "uuid": "081267ec-c90b-417c-b9be-761fa4f93f92", 00:32:41.171 "strip_size_kb": 0, 00:32:41.171 "state": "online", 00:32:41.171 "raid_level": "raid1", 00:32:41.171 "superblock": true, 00:32:41.171 "num_base_bdevs": 2, 00:32:41.172 "num_base_bdevs_discovered": 2, 00:32:41.172 "num_base_bdevs_operational": 2, 00:32:41.172 "process": { 00:32:41.172 "type": "rebuild", 00:32:41.172 "target": "spare", 00:32:41.172 "progress": { 00:32:41.172 "blocks": 2816, 00:32:41.172 "percent": 35 00:32:41.172 } 00:32:41.172 }, 00:32:41.172 "base_bdevs_list": [ 00:32:41.172 { 00:32:41.172 "name": "spare", 00:32:41.172 "uuid": "da9090eb-f04a-5a04-8187-8381ed618f94", 00:32:41.172 "is_configured": true, 00:32:41.172 "data_offset": 256, 00:32:41.172 "data_size": 7936 00:32:41.172 }, 00:32:41.172 { 00:32:41.172 "name": "BaseBdev2", 00:32:41.172 "uuid": "cd5f5e73-98cc-5f98-b01b-95cf167fe6a7", 00:32:41.172 "is_configured": true, 00:32:41.172 "data_offset": 256, 00:32:41.172 "data_size": 7936 00:32:41.172 } 00:32:41.172 ] 00:32:41.172 }' 00:32:41.172 02:38:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:41.172 02:38:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:41.172 02:38:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:41.172 02:38:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:41.172 02:38:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:41.431 [2024-07-11 02:38:31.715774] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:41.431 [2024-07-11 02:38:31.795170] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:41.431 [2024-07-11 02:38:31.795223] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:41.431 [2024-07-11 02:38:31.795239] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:41.431 [2024-07-11 02:38:31.795248] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:41.431 02:38:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:41.431 02:38:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:41.431 02:38:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:41.431 02:38:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:41.431 02:38:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:41.431 02:38:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:41.431 02:38:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:41.431 02:38:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:41.431 02:38:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:41.431 02:38:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:41.431 02:38:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:41.431 02:38:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:41.689 02:38:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:41.689 "name": "raid_bdev1", 00:32:41.689 "uuid": "081267ec-c90b-417c-b9be-761fa4f93f92", 00:32:41.689 "strip_size_kb": 0, 00:32:41.689 "state": "online", 00:32:41.689 "raid_level": "raid1", 00:32:41.689 "superblock": true, 00:32:41.689 "num_base_bdevs": 2, 00:32:41.689 "num_base_bdevs_discovered": 1, 00:32:41.689 "num_base_bdevs_operational": 1, 00:32:41.689 "base_bdevs_list": [ 00:32:41.689 { 00:32:41.689 "name": null, 00:32:41.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:41.689 "is_configured": false, 00:32:41.689 "data_offset": 256, 00:32:41.689 "data_size": 7936 00:32:41.689 }, 00:32:41.689 { 00:32:41.689 "name": "BaseBdev2", 00:32:41.689 "uuid": "cd5f5e73-98cc-5f98-b01b-95cf167fe6a7", 00:32:41.689 "is_configured": true, 00:32:41.689 "data_offset": 256, 00:32:41.689 "data_size": 7936 00:32:41.689 } 00:32:41.689 ] 00:32:41.689 }' 00:32:41.689 
02:38:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:41.689 02:38:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:42.255 02:38:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:42.255 02:38:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:42.255 02:38:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:42.255 02:38:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:42.255 02:38:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:42.255 02:38:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:42.255 02:38:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:42.513 02:38:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:42.513 "name": "raid_bdev1", 00:32:42.513 "uuid": "081267ec-c90b-417c-b9be-761fa4f93f92", 00:32:42.513 "strip_size_kb": 0, 00:32:42.513 "state": "online", 00:32:42.513 "raid_level": "raid1", 00:32:42.513 "superblock": true, 00:32:42.513 "num_base_bdevs": 2, 00:32:42.513 "num_base_bdevs_discovered": 1, 00:32:42.513 "num_base_bdevs_operational": 1, 00:32:42.513 "base_bdevs_list": [ 00:32:42.513 { 00:32:42.513 "name": null, 00:32:42.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:42.513 "is_configured": false, 00:32:42.513 "data_offset": 256, 00:32:42.513 "data_size": 7936 00:32:42.513 }, 00:32:42.513 { 00:32:42.513 "name": "BaseBdev2", 00:32:42.513 "uuid": "cd5f5e73-98cc-5f98-b01b-95cf167fe6a7", 00:32:42.513 "is_configured": true, 00:32:42.513 "data_offset": 256, 00:32:42.513 "data_size": 7936 00:32:42.513 } 00:32:42.513 ] 00:32:42.513 }' 00:32:42.513 02:38:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:42.772 02:38:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:42.772 02:38:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:42.772 02:38:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:42.772 02:38:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:43.032 [2024-07-11 02:38:33.223273] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:43.032 [2024-07-11 02:38:33.226727] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22ac8d0 00:32:43.032 [2024-07-11 02:38:33.228178] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:43.032 02:38:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:32:43.975 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
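
The @182-@190 entries are the companion helper, verify_raid_bdev_process: it pulls the same per-bdev JSON and compares .process.type and .process.target, with jq's // operator supplying "none" when no background process is running (that is why the post-failure check above compares none against none). Unlike the state helper, the comparisons here are visible in the trace; the sketch below only repackages them into a function (the function name and assertion style are assumptions):

    verify_raid_bdev_process_sketch() {
        local name=$1 process_type=$2 target=$3 info
        info=$(/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
            jq -r --arg n "$name" '.[] | select(.name == $n)')
        # jq's // default yields "none" when the process object is absent.
        [[ $(jq -r '.process.type // "none"' <<<"$info") == "$process_type" ]] || return 1
        [[ $(jq -r '.process.target // "none"' <<<"$info") == "$target" ]] || return 1
    }

    verify_raid_bdev_process_sketch raid_bdev1 rebuild spare   # while rebuilding
    verify_raid_bdev_process_sketch raid_bdev1 none none       # after it finishes
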
00:32:43.975 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:43.975 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:43.975 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:43.975 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:43.975 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:43.975 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:44.235 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:44.235 "name": "raid_bdev1", 00:32:44.235 "uuid": "081267ec-c90b-417c-b9be-761fa4f93f92", 00:32:44.235 "strip_size_kb": 0, 00:32:44.235 "state": "online", 00:32:44.235 "raid_level": "raid1", 00:32:44.235 "superblock": true, 00:32:44.235 "num_base_bdevs": 2, 00:32:44.235 "num_base_bdevs_discovered": 2, 00:32:44.235 "num_base_bdevs_operational": 2, 00:32:44.235 "process": { 00:32:44.235 "type": "rebuild", 00:32:44.235 "target": "spare", 00:32:44.235 "progress": { 00:32:44.235 "blocks": 3072, 00:32:44.235 "percent": 38 00:32:44.235 } 00:32:44.235 }, 00:32:44.235 "base_bdevs_list": [ 00:32:44.235 { 00:32:44.235 "name": "spare", 00:32:44.235 "uuid": "da9090eb-f04a-5a04-8187-8381ed618f94", 00:32:44.235 "is_configured": true, 00:32:44.235 "data_offset": 256, 00:32:44.235 "data_size": 7936 00:32:44.235 }, 00:32:44.235 { 00:32:44.235 "name": "BaseBdev2", 00:32:44.235 "uuid": "cd5f5e73-98cc-5f98-b01b-95cf167fe6a7", 00:32:44.235 "is_configured": true, 00:32:44.235 "data_offset": 256, 00:32:44.235 "data_size": 7936 00:32:44.235 } 00:32:44.235 ] 00:32:44.235 }' 00:32:44.235 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:44.235 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:44.235 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:44.235 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:44.235 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:32:44.235 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:32:44.235 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:32:44.235 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:32:44.235 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:32:44.235 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:32:44.235 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=1167 00:32:44.235 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:44.235 02:38:34 
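
The "line 665: [: =: unary operator expected" message above is a genuine quoting bug in the traced script rather than a harness failure: an empty variable was expanded unquoted inside an old-style test, leaving '[' = false ']' with nothing on the left-hand side. The run continues because the failed [ merely takes the false branch, but the check it was meant to perform is silently skipped. A minimal reproduction and the two standard fixes (the variable name below is illustrative; the actual one at bdev_raid.sh line 665 is not visible in this trace):

    flag=""                          # optional argument that was never set
    if [ $flag = false ]; then       # expands to: [ = false ]  -> "unary operator expected"
        echo "never reached"
    fi
    if [ "$flag" = false ]; then     # quoted: compares "" to "false", no error
        echo "evaluated correctly, branch still not taken"
    fi
    if [[ $flag = false ]]; then     # [[ ]] does not word-split, no quoting needed
        echo "same, using the bash builtin"
    fi
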
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:44.235 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:44.235 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:44.235 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:44.235 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:44.235 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:44.235 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:44.495 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:44.495 "name": "raid_bdev1", 00:32:44.495 "uuid": "081267ec-c90b-417c-b9be-761fa4f93f92", 00:32:44.495 "strip_size_kb": 0, 00:32:44.495 "state": "online", 00:32:44.495 "raid_level": "raid1", 00:32:44.495 "superblock": true, 00:32:44.495 "num_base_bdevs": 2, 00:32:44.495 "num_base_bdevs_discovered": 2, 00:32:44.495 "num_base_bdevs_operational": 2, 00:32:44.495 "process": { 00:32:44.495 "type": "rebuild", 00:32:44.495 "target": "spare", 00:32:44.495 "progress": { 00:32:44.495 "blocks": 3840, 00:32:44.495 "percent": 48 00:32:44.495 } 00:32:44.495 }, 00:32:44.495 "base_bdevs_list": [ 00:32:44.495 { 00:32:44.495 "name": "spare", 00:32:44.495 "uuid": "da9090eb-f04a-5a04-8187-8381ed618f94", 00:32:44.495 "is_configured": true, 00:32:44.495 "data_offset": 256, 00:32:44.495 "data_size": 7936 00:32:44.495 }, 00:32:44.495 { 00:32:44.495 "name": "BaseBdev2", 00:32:44.495 "uuid": "cd5f5e73-98cc-5f98-b01b-95cf167fe6a7", 00:32:44.495 "is_configured": true, 00:32:44.495 "data_offset": 256, 00:32:44.495 "data_size": 7936 00:32:44.495 } 00:32:44.495 ] 00:32:44.495 }' 00:32:44.495 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:44.495 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:44.495 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:44.754 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:44.754 02:38:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:32:45.690 02:38:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:45.690 02:38:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:45.690 02:38:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:45.690 02:38:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:45.690 02:38:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:45.690 02:38:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:45.690 
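
The @705-@710 entries implement the bounded wait for rebuild completion: timeout=1167 is an absolute deadline against bash's built-in SECONDS counter, and the loop re-checks the rebuild once per second until the process object disappears from the RPC output (the // default then yields "none" and the @708 break fires) or the budget runs out. A condensed sketch of that loop; the body is paraphrased, and the deadline arithmetic is assumed to be SECONDS plus a fixed budget:

    rpc() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    ptype=rebuild
    deadline=$((SECONDS + 60))       # the trace shows the precomputed value 1167
    while (( SECONDS < deadline )) && [[ $ptype == rebuild ]]; do
        sleep 1
        ptype=$(rpc bdev_raid_get_bdevs all |
            jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
    done
    [[ $ptype == none ]]             # process object gone => rebuild finished in time
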
02:38:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:45.690 02:38:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:45.949 02:38:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:45.949 "name": "raid_bdev1", 00:32:45.949 "uuid": "081267ec-c90b-417c-b9be-761fa4f93f92", 00:32:45.949 "strip_size_kb": 0, 00:32:45.949 "state": "online", 00:32:45.949 "raid_level": "raid1", 00:32:45.949 "superblock": true, 00:32:45.949 "num_base_bdevs": 2, 00:32:45.949 "num_base_bdevs_discovered": 2, 00:32:45.949 "num_base_bdevs_operational": 2, 00:32:45.949 "process": { 00:32:45.949 "type": "rebuild", 00:32:45.949 "target": "spare", 00:32:45.949 "progress": { 00:32:45.949 "blocks": 7168, 00:32:45.949 "percent": 90 00:32:45.949 } 00:32:45.949 }, 00:32:45.949 "base_bdevs_list": [ 00:32:45.949 { 00:32:45.949 "name": "spare", 00:32:45.949 "uuid": "da9090eb-f04a-5a04-8187-8381ed618f94", 00:32:45.949 "is_configured": true, 00:32:45.949 "data_offset": 256, 00:32:45.949 "data_size": 7936 00:32:45.949 }, 00:32:45.949 { 00:32:45.949 "name": "BaseBdev2", 00:32:45.949 "uuid": "cd5f5e73-98cc-5f98-b01b-95cf167fe6a7", 00:32:45.949 "is_configured": true, 00:32:45.949 "data_offset": 256, 00:32:45.949 "data_size": 7936 00:32:45.949 } 00:32:45.949 ] 00:32:45.949 }' 00:32:45.949 02:38:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:45.949 02:38:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:45.949 02:38:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:45.949 02:38:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:45.949 02:38:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:32:45.949 [2024-07-11 02:38:36.351981] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:45.949 [2024-07-11 02:38:36.352037] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:45.949 [2024-07-11 02:38:36.352117] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:46.885 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:46.885 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:46.885 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:46.885 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:46.885 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:46.885 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:46.885 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:46.885 02:38:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:47.144 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:47.144 "name": "raid_bdev1", 00:32:47.144 "uuid": "081267ec-c90b-417c-b9be-761fa4f93f92", 00:32:47.144 "strip_size_kb": 0, 00:32:47.144 "state": "online", 00:32:47.144 "raid_level": "raid1", 00:32:47.144 "superblock": true, 00:32:47.144 "num_base_bdevs": 2, 00:32:47.144 "num_base_bdevs_discovered": 2, 00:32:47.144 "num_base_bdevs_operational": 2, 00:32:47.144 "base_bdevs_list": [ 00:32:47.144 { 00:32:47.144 "name": "spare", 00:32:47.144 "uuid": "da9090eb-f04a-5a04-8187-8381ed618f94", 00:32:47.144 "is_configured": true, 00:32:47.144 "data_offset": 256, 00:32:47.144 "data_size": 7936 00:32:47.144 }, 00:32:47.144 { 00:32:47.144 "name": "BaseBdev2", 00:32:47.144 "uuid": "cd5f5e73-98cc-5f98-b01b-95cf167fe6a7", 00:32:47.144 "is_configured": true, 00:32:47.144 "data_offset": 256, 00:32:47.144 "data_size": 7936 00:32:47.144 } 00:32:47.144 ] 00:32:47.144 }' 00:32:47.144 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:47.144 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:47.144 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:47.403 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:32:47.403 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:32:47.403 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:47.403 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:47.403 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:47.403 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:47.403 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:47.403 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:47.403 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:47.662 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:47.662 "name": "raid_bdev1", 00:32:47.662 "uuid": "081267ec-c90b-417c-b9be-761fa4f93f92", 00:32:47.662 "strip_size_kb": 0, 00:32:47.662 "state": "online", 00:32:47.662 "raid_level": "raid1", 00:32:47.662 "superblock": true, 00:32:47.662 "num_base_bdevs": 2, 00:32:47.662 "num_base_bdevs_discovered": 2, 00:32:47.662 "num_base_bdevs_operational": 2, 00:32:47.662 "base_bdevs_list": [ 00:32:47.662 { 00:32:47.662 "name": "spare", 00:32:47.662 "uuid": "da9090eb-f04a-5a04-8187-8381ed618f94", 00:32:47.662 "is_configured": true, 00:32:47.662 "data_offset": 256, 00:32:47.662 "data_size": 7936 00:32:47.662 }, 00:32:47.662 { 00:32:47.662 "name": "BaseBdev2", 00:32:47.662 "uuid": "cd5f5e73-98cc-5f98-b01b-95cf167fe6a7", 00:32:47.662 
"is_configured": true, 00:32:47.662 "data_offset": 256, 00:32:47.662 "data_size": 7936 00:32:47.662 } 00:32:47.662 ] 00:32:47.662 }' 00:32:47.662 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:47.662 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:47.663 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:47.663 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:47.663 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:47.663 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:47.663 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:47.663 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:47.663 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:47.663 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:47.663 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:47.663 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:47.663 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:47.663 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:47.663 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:47.663 02:38:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:47.921 02:38:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:47.921 "name": "raid_bdev1", 00:32:47.921 "uuid": "081267ec-c90b-417c-b9be-761fa4f93f92", 00:32:47.921 "strip_size_kb": 0, 00:32:47.921 "state": "online", 00:32:47.921 "raid_level": "raid1", 00:32:47.921 "superblock": true, 00:32:47.921 "num_base_bdevs": 2, 00:32:47.921 "num_base_bdevs_discovered": 2, 00:32:47.921 "num_base_bdevs_operational": 2, 00:32:47.921 "base_bdevs_list": [ 00:32:47.921 { 00:32:47.921 "name": "spare", 00:32:47.921 "uuid": "da9090eb-f04a-5a04-8187-8381ed618f94", 00:32:47.921 "is_configured": true, 00:32:47.921 "data_offset": 256, 00:32:47.921 "data_size": 7936 00:32:47.921 }, 00:32:47.921 { 00:32:47.921 "name": "BaseBdev2", 00:32:47.921 "uuid": "cd5f5e73-98cc-5f98-b01b-95cf167fe6a7", 00:32:47.921 "is_configured": true, 00:32:47.921 "data_offset": 256, 00:32:47.921 "data_size": 7936 00:32:47.921 } 00:32:47.921 ] 00:32:47.921 }' 00:32:47.921 02:38:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:47.921 02:38:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:48.859 02:38:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:49.118 [2024-07-11 02:38:39.352078] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:49.118 [2024-07-11 02:38:39.352103] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:49.118 [2024-07-11 02:38:39.352158] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:49.118 [2024-07-11 02:38:39.352215] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:49.118 [2024-07-11 02:38:39.352228] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22ac3e0 name raid_bdev1, state offline 00:32:49.118 02:38:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # jq length 00:32:49.118 02:38:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:49.376 02:38:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:32:49.376 02:38:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:32:49.376 02:38:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:32:49.376 02:38:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:49.943 02:38:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:50.202 [2024-07-11 02:38:40.611334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:50.202 [2024-07-11 02:38:40.611380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:50.202 [2024-07-11 02:38:40.611403] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22adfe0 00:32:50.202 [2024-07-11 02:38:40.611416] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:50.202 [2024-07-11 02:38:40.613227] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:50.202 [2024-07-11 02:38:40.613254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:50.202 [2024-07-11 02:38:40.613312] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:50.202 [2024-07-11 02:38:40.613336] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:50.202 [2024-07-11 02:38:40.613421] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:50.202 spare 00:32:50.459 02:38:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:50.459 02:38:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:50.459 02:38:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:50.459 02:38:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
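
The @718/@719 steps above are the "delete and prove it is gone" idiom: tear down the raid bdev, then assert that bdev_raid_get_bdevs returns an empty array. Recreating the passthru 'spare' afterwards (@744/@745) then makes the examine path find the on-disk raid superblock ("raid superblock found on bdev spare") and re-assemble raid_bdev1 without any explicit create call. A sketch of the same RPC sequence, with the socket path taken from the trace:

    rpc() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    rpc bdev_raid_delete raid_bdev1
    count=$(rpc bdev_raid_get_bdevs all | jq length)
    [[ $count == 0 ]]                # no raid bdevs may survive the delete

    # Cycling the passthru vbdev re-exposes the base bdev; examine sees the
    # raid superblock on it and re-assembles raid_bdev1 automatically.
    rpc bdev_passthru_delete spare
    rpc bdev_passthru_create -b spare_delay -p spare
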
00:32:50.459 02:38:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:50.459 02:38:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:50.459 02:38:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:50.459 02:38:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:50.459 02:38:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:50.459 02:38:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:50.459 02:38:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:50.459 02:38:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:50.459 [2024-07-11 02:38:40.713727] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x22ada80 00:32:50.459 [2024-07-11 02:38:40.713742] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:32:50.459 [2024-07-11 02:38:40.713845] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22ae890 00:32:50.459 [2024-07-11 02:38:40.713937] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x22ada80 00:32:50.459 [2024-07-11 02:38:40.713947] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x22ada80 00:32:50.459 [2024-07-11 02:38:40.714013] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:50.717 02:38:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:50.717 "name": "raid_bdev1", 00:32:50.717 "uuid": "081267ec-c90b-417c-b9be-761fa4f93f92", 00:32:50.717 "strip_size_kb": 0, 00:32:50.717 "state": "online", 00:32:50.717 "raid_level": "raid1", 00:32:50.717 "superblock": true, 00:32:50.717 "num_base_bdevs": 2, 00:32:50.717 "num_base_bdevs_discovered": 2, 00:32:50.717 "num_base_bdevs_operational": 2, 00:32:50.717 "base_bdevs_list": [ 00:32:50.717 { 00:32:50.717 "name": "spare", 00:32:50.717 "uuid": "da9090eb-f04a-5a04-8187-8381ed618f94", 00:32:50.717 "is_configured": true, 00:32:50.717 "data_offset": 256, 00:32:50.717 "data_size": 7936 00:32:50.717 }, 00:32:50.717 { 00:32:50.717 "name": "BaseBdev2", 00:32:50.717 "uuid": "cd5f5e73-98cc-5f98-b01b-95cf167fe6a7", 00:32:50.717 "is_configured": true, 00:32:50.717 "data_offset": 256, 00:32:50.717 "data_size": 7936 00:32:50.717 } 00:32:50.717 ] 00:32:50.717 }' 00:32:50.717 02:38:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:50.717 02:38:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:51.284 02:38:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:51.284 02:38:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:51.284 02:38:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:51.284 02:38:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local 
target=none 00:32:51.284 02:38:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:51.284 02:38:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:51.284 02:38:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:51.543 02:38:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:51.543 "name": "raid_bdev1", 00:32:51.543 "uuid": "081267ec-c90b-417c-b9be-761fa4f93f92", 00:32:51.543 "strip_size_kb": 0, 00:32:51.543 "state": "online", 00:32:51.543 "raid_level": "raid1", 00:32:51.543 "superblock": true, 00:32:51.543 "num_base_bdevs": 2, 00:32:51.543 "num_base_bdevs_discovered": 2, 00:32:51.543 "num_base_bdevs_operational": 2, 00:32:51.543 "base_bdevs_list": [ 00:32:51.543 { 00:32:51.543 "name": "spare", 00:32:51.543 "uuid": "da9090eb-f04a-5a04-8187-8381ed618f94", 00:32:51.543 "is_configured": true, 00:32:51.543 "data_offset": 256, 00:32:51.543 "data_size": 7936 00:32:51.543 }, 00:32:51.543 { 00:32:51.543 "name": "BaseBdev2", 00:32:51.543 "uuid": "cd5f5e73-98cc-5f98-b01b-95cf167fe6a7", 00:32:51.543 "is_configured": true, 00:32:51.543 "data_offset": 256, 00:32:51.543 "data_size": 7936 00:32:51.543 } 00:32:51.543 ] 00:32:51.543 }' 00:32:51.543 02:38:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:51.543 02:38:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:51.543 02:38:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:51.543 02:38:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:51.543 02:38:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:51.543 02:38:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:32:51.801 02:38:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:32:51.801 02:38:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:52.060 [2024-07-11 02:38:42.336035] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:52.060 02:38:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:52.060 02:38:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:52.060 02:38:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:52.060 02:38:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:52.060 02:38:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:52.060 02:38:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:52.060 
02:38:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:52.060 02:38:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:52.060 02:38:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:52.060 02:38:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:52.060 02:38:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:52.060 02:38:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:52.319 02:38:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:52.319 "name": "raid_bdev1", 00:32:52.319 "uuid": "081267ec-c90b-417c-b9be-761fa4f93f92", 00:32:52.319 "strip_size_kb": 0, 00:32:52.319 "state": "online", 00:32:52.319 "raid_level": "raid1", 00:32:52.319 "superblock": true, 00:32:52.319 "num_base_bdevs": 2, 00:32:52.319 "num_base_bdevs_discovered": 1, 00:32:52.319 "num_base_bdevs_operational": 1, 00:32:52.319 "base_bdevs_list": [ 00:32:52.319 { 00:32:52.319 "name": null, 00:32:52.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:52.319 "is_configured": false, 00:32:52.319 "data_offset": 256, 00:32:52.319 "data_size": 7936 00:32:52.319 }, 00:32:52.319 { 00:32:52.319 "name": "BaseBdev2", 00:32:52.319 "uuid": "cd5f5e73-98cc-5f98-b01b-95cf167fe6a7", 00:32:52.319 "is_configured": true, 00:32:52.319 "data_offset": 256, 00:32:52.319 "data_size": 7936 00:32:52.319 } 00:32:52.319 ] 00:32:52.319 }' 00:32:52.319 02:38:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:52.319 02:38:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:52.885 02:38:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:52.886 [2024-07-11 02:38:43.234450] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:52.886 [2024-07-11 02:38:43.234582] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:52.886 [2024-07-11 02:38:43.234598] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
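
The @752 removal and @754 re-add above exercise the superblock staleness check: after 'spare' is pulled, its on-disk superblock keeps the old sequence number (4) while the live raid bdev has advanced to 5, so on re-add the examine path logs "smaller than existing raid bdev" and schedules a full rebuild instead of trusting the stale copy. A sketch of that RPC sequence; the degraded-state assertion is paraphrased from the @126 jq filter:

    rpc() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    rpc bdev_raid_remove_base_bdev spare
    disc=$(rpc bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered')
    [[ $disc == 1 ]]                 # raid1 stays online but degraded

    rpc bdev_raid_add_base_bdev raid_bdev1 spare   # stale superblock => rebuild, not resync
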
00:32:52.886 [2024-07-11 02:38:43.234625] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:52.886 [2024-07-11 02:38:43.237966] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22a5f50 00:32:52.886 [2024-07-11 02:38:43.240310] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:52.886 02:38:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:32:54.263 02:38:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:54.263 02:38:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:54.263 02:38:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:54.263 02:38:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:54.263 02:38:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:54.263 02:38:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:54.263 02:38:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:54.263 02:38:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:54.263 "name": "raid_bdev1", 00:32:54.263 "uuid": "081267ec-c90b-417c-b9be-761fa4f93f92", 00:32:54.263 "strip_size_kb": 0, 00:32:54.263 "state": "online", 00:32:54.263 "raid_level": "raid1", 00:32:54.263 "superblock": true, 00:32:54.263 "num_base_bdevs": 2, 00:32:54.263 "num_base_bdevs_discovered": 2, 00:32:54.263 "num_base_bdevs_operational": 2, 00:32:54.263 "process": { 00:32:54.263 "type": "rebuild", 00:32:54.263 "target": "spare", 00:32:54.263 "progress": { 00:32:54.263 "blocks": 3072, 00:32:54.263 "percent": 38 00:32:54.263 } 00:32:54.263 }, 00:32:54.263 "base_bdevs_list": [ 00:32:54.263 { 00:32:54.263 "name": "spare", 00:32:54.263 "uuid": "da9090eb-f04a-5a04-8187-8381ed618f94", 00:32:54.263 "is_configured": true, 00:32:54.263 "data_offset": 256, 00:32:54.263 "data_size": 7936 00:32:54.263 }, 00:32:54.263 { 00:32:54.263 "name": "BaseBdev2", 00:32:54.263 "uuid": "cd5f5e73-98cc-5f98-b01b-95cf167fe6a7", 00:32:54.263 "is_configured": true, 00:32:54.263 "data_offset": 256, 00:32:54.263 "data_size": 7936 00:32:54.263 } 00:32:54.263 ] 00:32:54.263 }' 00:32:54.263 02:38:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:54.263 02:38:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:54.263 02:38:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:54.263 02:38:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:54.263 02:38:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:54.522 [2024-07-11 02:38:44.837887] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:54.522 [2024-07-11 02:38:44.852511] 
bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:54.522 [2024-07-11 02:38:44.852558] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:54.522 [2024-07-11 02:38:44.852573] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:54.522 [2024-07-11 02:38:44.852582] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:54.522 02:38:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:54.522 02:38:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:54.522 02:38:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:54.522 02:38:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:54.522 02:38:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:54.522 02:38:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:54.522 02:38:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:54.522 02:38:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:54.522 02:38:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:54.522 02:38:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:54.522 02:38:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:54.522 02:38:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:54.781 02:38:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:54.781 "name": "raid_bdev1", 00:32:54.781 "uuid": "081267ec-c90b-417c-b9be-761fa4f93f92", 00:32:54.781 "strip_size_kb": 0, 00:32:54.781 "state": "online", 00:32:54.781 "raid_level": "raid1", 00:32:54.781 "superblock": true, 00:32:54.781 "num_base_bdevs": 2, 00:32:54.781 "num_base_bdevs_discovered": 1, 00:32:54.781 "num_base_bdevs_operational": 1, 00:32:54.781 "base_bdevs_list": [ 00:32:54.781 { 00:32:54.781 "name": null, 00:32:54.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:54.781 "is_configured": false, 00:32:54.781 "data_offset": 256, 00:32:54.781 "data_size": 7936 00:32:54.781 }, 00:32:54.781 { 00:32:54.781 "name": "BaseBdev2", 00:32:54.781 "uuid": "cd5f5e73-98cc-5f98-b01b-95cf167fe6a7", 00:32:54.781 "is_configured": true, 00:32:54.781 "data_offset": 256, 00:32:54.781 "data_size": 7936 00:32:54.781 } 00:32:54.781 ] 00:32:54.781 }' 00:32:54.781 02:38:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:54.781 02:38:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:55.349 02:38:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:55.609 [2024-07-11 
02:38:45.931124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:55.609 [2024-07-11 02:38:45.931171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:55.609 [2024-07-11 02:38:45.931198] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2430840 00:32:55.609 [2024-07-11 02:38:45.931211] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:55.609 [2024-07-11 02:38:45.931389] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:55.609 [2024-07-11 02:38:45.931404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:55.609 [2024-07-11 02:38:45.931458] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:55.609 [2024-07-11 02:38:45.931471] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:55.609 [2024-07-11 02:38:45.931482] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:32:55.609 [2024-07-11 02:38:45.931499] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:55.609 [2024-07-11 02:38:45.934861] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22a8d30 00:32:55.609 [2024-07-11 02:38:45.936299] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:55.609 spare 00:32:55.609 02:38:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:32:56.548 02:38:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:56.548 02:38:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:56.548 02:38:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:56.548 02:38:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:56.548 02:38:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:56.548 02:38:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:56.548 02:38:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:56.852 02:38:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:56.852 "name": "raid_bdev1", 00:32:56.852 "uuid": "081267ec-c90b-417c-b9be-761fa4f93f92", 00:32:56.852 "strip_size_kb": 0, 00:32:56.852 "state": "online", 00:32:56.852 "raid_level": "raid1", 00:32:56.852 "superblock": true, 00:32:56.852 "num_base_bdevs": 2, 00:32:56.852 "num_base_bdevs_discovered": 2, 00:32:56.852 "num_base_bdevs_operational": 2, 00:32:56.852 "process": { 00:32:56.852 "type": "rebuild", 00:32:56.852 "target": "spare", 00:32:56.852 "progress": { 00:32:56.852 "blocks": 3072, 00:32:56.852 "percent": 38 00:32:56.852 } 00:32:56.852 }, 00:32:56.852 "base_bdevs_list": [ 00:32:56.852 { 00:32:56.852 "name": "spare", 00:32:56.852 "uuid": "da9090eb-f04a-5a04-8187-8381ed618f94", 00:32:56.852 "is_configured": true, 00:32:56.852 "data_offset": 256, 00:32:56.852 
"data_size": 7936 00:32:56.852 }, 00:32:56.852 { 00:32:56.852 "name": "BaseBdev2", 00:32:56.852 "uuid": "cd5f5e73-98cc-5f98-b01b-95cf167fe6a7", 00:32:56.852 "is_configured": true, 00:32:56.852 "data_offset": 256, 00:32:56.852 "data_size": 7936 00:32:56.852 } 00:32:56.852 ] 00:32:56.852 }' 00:32:56.852 02:38:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:56.852 02:38:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:57.127 02:38:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:57.127 02:38:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:57.127 02:38:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:57.127 [2024-07-11 02:38:47.529525] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:57.127 [2024-07-11 02:38:47.548747] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:57.127 [2024-07-11 02:38:47.548797] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:57.127 [2024-07-11 02:38:47.548819] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:57.127 [2024-07-11 02:38:47.548827] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:57.386 02:38:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:57.387 02:38:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:57.387 02:38:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:57.387 02:38:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:57.387 02:38:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:57.387 02:38:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:57.387 02:38:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:57.387 02:38:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:57.387 02:38:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:57.387 02:38:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:57.387 02:38:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:57.387 02:38:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:57.646 02:38:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:57.646 "name": "raid_bdev1", 00:32:57.646 "uuid": "081267ec-c90b-417c-b9be-761fa4f93f92", 00:32:57.646 "strip_size_kb": 0, 00:32:57.646 "state": "online", 00:32:57.646 
"raid_level": "raid1", 00:32:57.646 "superblock": true, 00:32:57.646 "num_base_bdevs": 2, 00:32:57.646 "num_base_bdevs_discovered": 1, 00:32:57.646 "num_base_bdevs_operational": 1, 00:32:57.646 "base_bdevs_list": [ 00:32:57.646 { 00:32:57.646 "name": null, 00:32:57.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:57.646 "is_configured": false, 00:32:57.646 "data_offset": 256, 00:32:57.646 "data_size": 7936 00:32:57.646 }, 00:32:57.646 { 00:32:57.646 "name": "BaseBdev2", 00:32:57.646 "uuid": "cd5f5e73-98cc-5f98-b01b-95cf167fe6a7", 00:32:57.646 "is_configured": true, 00:32:57.646 "data_offset": 256, 00:32:57.646 "data_size": 7936 00:32:57.646 } 00:32:57.646 ] 00:32:57.646 }' 00:32:57.646 02:38:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:57.646 02:38:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:58.213 02:38:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:58.213 02:38:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:58.213 02:38:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:58.213 02:38:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:58.213 02:38:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:58.213 02:38:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:58.213 02:38:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:58.472 02:38:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:58.472 "name": "raid_bdev1", 00:32:58.472 "uuid": "081267ec-c90b-417c-b9be-761fa4f93f92", 00:32:58.472 "strip_size_kb": 0, 00:32:58.472 "state": "online", 00:32:58.472 "raid_level": "raid1", 00:32:58.472 "superblock": true, 00:32:58.472 "num_base_bdevs": 2, 00:32:58.472 "num_base_bdevs_discovered": 1, 00:32:58.472 "num_base_bdevs_operational": 1, 00:32:58.472 "base_bdevs_list": [ 00:32:58.472 { 00:32:58.472 "name": null, 00:32:58.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:58.472 "is_configured": false, 00:32:58.472 "data_offset": 256, 00:32:58.472 "data_size": 7936 00:32:58.472 }, 00:32:58.472 { 00:32:58.472 "name": "BaseBdev2", 00:32:58.472 "uuid": "cd5f5e73-98cc-5f98-b01b-95cf167fe6a7", 00:32:58.472 "is_configured": true, 00:32:58.472 "data_offset": 256, 00:32:58.472 "data_size": 7936 00:32:58.472 } 00:32:58.472 ] 00:32:58.472 }' 00:32:58.472 02:38:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:58.472 02:38:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:58.472 02:38:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:58.472 02:38:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:58.472 02:38:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:32:58.730 02:38:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:58.989 [2024-07-11 02:38:49.249009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:58.989 [2024-07-11 02:38:49.249056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:58.989 [2024-07-11 02:38:49.249079] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22aa0d0 00:32:58.989 [2024-07-11 02:38:49.249091] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:58.989 [2024-07-11 02:38:49.249255] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:58.989 [2024-07-11 02:38:49.249271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:58.989 [2024-07-11 02:38:49.249316] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:32:58.989 [2024-07-11 02:38:49.249327] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:58.989 [2024-07-11 02:38:49.249337] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:58.989 BaseBdev1 00:32:58.989 02:38:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:32:59.924 02:38:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:59.924 02:38:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:59.924 02:38:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:59.924 02:38:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:59.924 02:38:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:59.924 02:38:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:59.924 02:38:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:59.924 02:38:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:59.924 02:38:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:59.924 02:38:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:59.924 02:38:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:59.924 02:38:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:00.183 02:38:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:00.183 "name": "raid_bdev1", 00:33:00.183 "uuid": "081267ec-c90b-417c-b9be-761fa4f93f92", 00:33:00.183 "strip_size_kb": 0, 00:33:00.183 "state": "online", 00:33:00.183 "raid_level": "raid1", 00:33:00.183 
"superblock": true, 00:33:00.183 "num_base_bdevs": 2, 00:33:00.183 "num_base_bdevs_discovered": 1, 00:33:00.183 "num_base_bdevs_operational": 1, 00:33:00.183 "base_bdevs_list": [ 00:33:00.183 { 00:33:00.183 "name": null, 00:33:00.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:00.183 "is_configured": false, 00:33:00.183 "data_offset": 256, 00:33:00.183 "data_size": 7936 00:33:00.183 }, 00:33:00.183 { 00:33:00.183 "name": "BaseBdev2", 00:33:00.183 "uuid": "cd5f5e73-98cc-5f98-b01b-95cf167fe6a7", 00:33:00.183 "is_configured": true, 00:33:00.183 "data_offset": 256, 00:33:00.183 "data_size": 7936 00:33:00.183 } 00:33:00.183 ] 00:33:00.183 }' 00:33:00.183 02:38:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:00.183 02:38:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:00.752 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:00.752 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:00.752 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:00.752 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:00.752 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:00.752 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:00.752 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:01.011 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:01.011 "name": "raid_bdev1", 00:33:01.011 "uuid": "081267ec-c90b-417c-b9be-761fa4f93f92", 00:33:01.011 "strip_size_kb": 0, 00:33:01.011 "state": "online", 00:33:01.011 "raid_level": "raid1", 00:33:01.011 "superblock": true, 00:33:01.011 "num_base_bdevs": 2, 00:33:01.011 "num_base_bdevs_discovered": 1, 00:33:01.011 "num_base_bdevs_operational": 1, 00:33:01.011 "base_bdevs_list": [ 00:33:01.011 { 00:33:01.011 "name": null, 00:33:01.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:01.011 "is_configured": false, 00:33:01.011 "data_offset": 256, 00:33:01.011 "data_size": 7936 00:33:01.011 }, 00:33:01.011 { 00:33:01.011 "name": "BaseBdev2", 00:33:01.011 "uuid": "cd5f5e73-98cc-5f98-b01b-95cf167fe6a7", 00:33:01.011 "is_configured": true, 00:33:01.011 "data_offset": 256, 00:33:01.011 "data_size": 7936 00:33:01.011 } 00:33:01.011 ] 00:33:01.011 }' 00:33:01.011 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:01.011 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:01.011 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:01.011 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:01.011 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:01.011 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:33:01.011 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:01.011 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:01.011 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:01.011 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:01.011 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:01.011 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:01.011 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:01.011 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:01.011 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:33:01.011 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:01.270 [2024-07-11 02:38:51.587243] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:01.270 [2024-07-11 02:38:51.587362] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:01.270 [2024-07-11 02:38:51.587377] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:01.270 request: 00:33:01.270 { 00:33:01.270 "base_bdev": "BaseBdev1", 00:33:01.270 "raid_bdev": "raid_bdev1", 00:33:01.270 "method": "bdev_raid_add_base_bdev", 00:33:01.270 "req_id": 1 00:33:01.270 } 00:33:01.270 Got JSON-RPC error response 00:33:01.270 response: 00:33:01.270 { 00:33:01.270 "code": -22, 00:33:01.270 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:33:01.270 } 00:33:01.270 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:33:01.270 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:01.270 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:01.270 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:01.270 02:38:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:33:02.203 02:38:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:02.203 02:38:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:02.203 02:38:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:02.203 02:38:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:02.203 02:38:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:02.203 02:38:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:02.203 02:38:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:02.203 02:38:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:02.203 02:38:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:02.203 02:38:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:02.203 02:38:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:02.203 02:38:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:02.460 02:38:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:02.460 "name": "raid_bdev1", 00:33:02.460 "uuid": "081267ec-c90b-417c-b9be-761fa4f93f92", 00:33:02.460 "strip_size_kb": 0, 00:33:02.460 "state": "online", 00:33:02.460 "raid_level": "raid1", 00:33:02.460 "superblock": true, 00:33:02.460 "num_base_bdevs": 2, 00:33:02.460 "num_base_bdevs_discovered": 1, 00:33:02.460 "num_base_bdevs_operational": 1, 00:33:02.460 "base_bdevs_list": [ 00:33:02.460 { 00:33:02.460 "name": null, 00:33:02.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:02.460 "is_configured": false, 00:33:02.460 "data_offset": 256, 00:33:02.460 "data_size": 7936 00:33:02.460 }, 00:33:02.460 { 00:33:02.460 "name": "BaseBdev2", 00:33:02.460 "uuid": "cd5f5e73-98cc-5f98-b01b-95cf167fe6a7", 00:33:02.460 "is_configured": true, 00:33:02.460 "data_offset": 256, 00:33:02.460 "data_size": 7936 00:33:02.460 } 00:33:02.460 ] 00:33:02.460 }' 00:33:02.460 02:38:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:02.460 02:38:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:03.396 02:38:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:03.396 02:38:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:03.396 02:38:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:03.396 02:38:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:03.396 02:38:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:03.396 02:38:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:03.396 02:38:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:03.396 02:38:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:03.396 "name": "raid_bdev1", 00:33:03.396 "uuid": "081267ec-c90b-417c-b9be-761fa4f93f92", 00:33:03.396 "strip_size_kb": 0, 00:33:03.396 "state": "online", 00:33:03.396 "raid_level": "raid1", 00:33:03.396 "superblock": true, 00:33:03.396 "num_base_bdevs": 2, 00:33:03.396 "num_base_bdevs_discovered": 1, 00:33:03.396 "num_base_bdevs_operational": 1, 00:33:03.396 "base_bdevs_list": [ 00:33:03.396 { 00:33:03.396 "name": null, 00:33:03.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:03.396 "is_configured": false, 00:33:03.396 "data_offset": 256, 00:33:03.396 "data_size": 7936 00:33:03.396 }, 00:33:03.396 { 00:33:03.396 "name": "BaseBdev2", 00:33:03.396 "uuid": "cd5f5e73-98cc-5f98-b01b-95cf167fe6a7", 00:33:03.396 "is_configured": true, 00:33:03.396 "data_offset": 256, 00:33:03.396 "data_size": 7936 00:33:03.396 } 00:33:03.396 ] 00:33:03.396 }' 00:33:03.396 02:38:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:03.396 02:38:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:03.396 02:38:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:03.654 02:38:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:03.654 02:38:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 2061596 00:33:03.654 02:38:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 2061596 ']' 00:33:03.654 02:38:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 2061596 00:33:03.654 02:38:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:33:03.654 02:38:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:03.654 02:38:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2061596 00:33:03.654 02:38:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:03.654 02:38:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:03.654 02:38:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2061596' 00:33:03.654 killing process with pid 2061596 00:33:03.654 02:38:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 2061596 00:33:03.654 Received shutdown signal, test time was about 60.000000 seconds 00:33:03.654 00:33:03.654 Latency(us) 00:33:03.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:03.654 =================================================================================================================== 00:33:03.654 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:03.654 [2024-07-11 02:38:53.889809] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:03.654 [2024-07-11 02:38:53.889901] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:03.654 [2024-07-11 02:38:53.889946] bdev_raid.c: 
451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:03.654 [2024-07-11 02:38:53.889965] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22ada80 name raid_bdev1, state offline 00:33:03.654 02:38:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 2061596 00:33:03.654 [2024-07-11 02:38:53.919945] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:03.913 02:38:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:33:03.913 00:33:03.913 real 0m29.420s 00:33:03.913 user 0m46.820s 00:33:03.913 sys 0m4.111s 00:33:03.913 02:38:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:03.913 02:38:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:03.913 ************************************ 00:33:03.913 END TEST raid_rebuild_test_sb_md_interleaved 00:33:03.913 ************************************ 00:33:03.913 02:38:54 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:33:03.913 02:38:54 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:33:03.913 02:38:54 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:33:03.913 02:38:54 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 2061596 ']' 00:33:03.913 02:38:54 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 2061596 00:33:03.913 02:38:54 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:33:03.913 00:33:03.913 real 19m16.600s 00:33:03.913 user 32m48.818s 00:33:03.913 sys 3m34.236s 00:33:03.913 02:38:54 bdev_raid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:03.913 02:38:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:03.913 ************************************ 00:33:03.913 END TEST bdev_raid 00:33:03.913 ************************************ 00:33:03.913 02:38:54 -- common/autotest_common.sh@1142 -- # return 0 00:33:03.913 02:38:54 -- spdk/autotest.sh@191 -- # run_test bdevperf_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:33:03.913 02:38:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:03.913 02:38:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:03.913 02:38:54 -- common/autotest_common.sh@10 -- # set +x 00:33:03.913 ************************************ 00:33:03.913 START TEST bdevperf_config 00:33:03.913 ************************************ 00:33:03.913 02:38:54 bdevperf_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:33:04.172 * Looking for test storage... 
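# ---------------------------------------------------------------------------
# The verify_raid_bdev_process checks traced through bdev_raid.sh@187-190 in
# the raid test above reduce to: fetch the raid bdev over JSON-RPC and assert
# that no rebuild process is active. A minimal standalone sketch of that
# pattern -- the rpc.py path, socket, and jq filters are copied verbatim from
# this log, while the variable names and wrapper form are illustrative
# assumptions, not the literal bdev_raid.sh source:
#
#   rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
#   sock=/var/tmp/spdk-raid.sock
#   info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
#           jq -r '.[] | select(.name == "raid_bdev1")')
#   # '.process.type // "none"' falls back to "none" when the bdev carries no
#   # process object, i.e. no rebuild/resync is currently running on it
#   [[ $(jq -r '.process.type   // "none"' <<<"$info") == none ]]
#   [[ $(jq -r '.process.target // "none"' <<<"$info") == none ]]
# ---------------------------------------------------------------------------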
00:33:04.172 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/common.sh 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:33:04.172 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:33:04.172 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:33:04.172 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@19 -- # 
echo 00:33:04.172 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:33:04.172 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:33:04.172 02:38:54 bdevperf_config -- bdevperf/test_config.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:33:06.703 02:38:57 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-11 02:38:54.483614] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:33:06.703 [2024-07-11 02:38:54.483683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2065834 ] 00:33:06.703 Using job config with 4 jobs 00:33:06.703 [2024-07-11 02:38:54.635924] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.703 [2024-07-11 02:38:54.701938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:06.703 cpumask for '\''job0'\'' is too big 00:33:06.703 cpumask for '\''job1'\'' is too big 00:33:06.703 cpumask for '\''job2'\'' is too big 00:33:06.703 cpumask for '\''job3'\'' is too big 00:33:06.703 Running I/O for 2 seconds... 00:33:06.703 00:33:06.703 Latency(us) 00:33:06.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:06.703 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:33:06.703 Malloc0 : 2.02 23988.74 23.43 0.00 0.00 10667.20 1837.86 16298.52 00:33:06.703 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:33:06.704 Malloc0 : 2.02 23966.74 23.41 0.00 0.00 10652.75 1837.86 14417.92 00:33:06.704 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:33:06.704 Malloc0 : 2.02 23944.84 23.38 0.00 0.00 10638.87 1837.86 12594.31 00:33:06.704 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:33:06.704 Malloc0 : 2.03 24017.40 23.45 0.00 0.00 10583.51 933.18 10998.65 00:33:06.704 =================================================================================================================== 00:33:06.704 Total : 95917.73 93.67 0.00 0.00 10635.51 933.18 16298.52' 00:33:06.704 02:38:57 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-11 02:38:54.483614] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:33:06.704 [2024-07-11 02:38:54.483683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2065834 ] 00:33:06.704 Using job config with 4 jobs 00:33:06.704 [2024-07-11 02:38:54.635924] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.704 [2024-07-11 02:38:54.701938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:06.704 cpumask for '\''job0'\'' is too big 00:33:06.704 cpumask for '\''job1'\'' is too big 00:33:06.704 cpumask for '\''job2'\'' is too big 00:33:06.704 cpumask for '\''job3'\'' is too big 00:33:06.704 Running I/O for 2 seconds... 00:33:06.704 00:33:06.704 Latency(us) 00:33:06.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:06.704 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:33:06.704 Malloc0 : 2.02 23988.74 23.43 0.00 0.00 10667.20 1837.86 16298.52 00:33:06.704 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:33:06.704 Malloc0 : 2.02 23966.74 23.41 0.00 0.00 10652.75 1837.86 14417.92 00:33:06.704 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:33:06.704 Malloc0 : 2.02 23944.84 23.38 0.00 0.00 10638.87 1837.86 12594.31 00:33:06.704 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:33:06.704 Malloc0 : 2.03 24017.40 23.45 0.00 0.00 10583.51 933.18 10998.65 00:33:06.704 =================================================================================================================== 00:33:06.704 Total : 95917.73 93.67 0.00 0.00 10635.51 933.18 16298.52' 00:33:06.704 02:38:57 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-11 02:38:54.483614] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:33:06.704 [2024-07-11 02:38:54.483683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2065834 ] 00:33:06.704 Using job config with 4 jobs 00:33:06.704 [2024-07-11 02:38:54.635924] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.704 [2024-07-11 02:38:54.701938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:06.704 cpumask for '\''job0'\'' is too big 00:33:06.704 cpumask for '\''job1'\'' is too big 00:33:06.704 cpumask for '\''job2'\'' is too big 00:33:06.704 cpumask for '\''job3'\'' is too big 00:33:06.704 Running I/O for 2 seconds... 
00:33:06.704 00:33:06.704 Latency(us) 00:33:06.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:06.704 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:33:06.704 Malloc0 : 2.02 23988.74 23.43 0.00 0.00 10667.20 1837.86 16298.52 00:33:06.704 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:33:06.704 Malloc0 : 2.02 23966.74 23.41 0.00 0.00 10652.75 1837.86 14417.92 00:33:06.704 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:33:06.704 Malloc0 : 2.02 23944.84 23.38 0.00 0.00 10638.87 1837.86 12594.31 00:33:06.704 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:33:06.704 Malloc0 : 2.03 24017.40 23.45 0.00 0.00 10583.51 933.18 10998.65 00:33:06.704 =================================================================================================================== 00:33:06.704 Total : 95917.73 93.67 0.00 0.00 10635.51 933.18 16298.52' 00:33:06.704 02:38:57 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:33:06.704 02:38:57 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:33:06.704 02:38:57 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:33:06.704 02:38:57 bdevperf_config -- bdevperf/test_config.sh@25 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -C -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:33:06.963 [2024-07-11 02:38:57.151674] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:33:06.963 [2024-07-11 02:38:57.151751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2066130 ] 00:33:06.963 [2024-07-11 02:38:57.303507] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.963 [2024-07-11 02:38:57.375347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:07.223 cpumask for 'job0' is too big 00:33:07.223 cpumask for 'job1' is too big 00:33:07.223 cpumask for 'job2' is too big 00:33:07.223 cpumask for 'job3' is too big 00:33:09.758 02:38:59 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:33:09.758 Running I/O for 2 seconds... 
00:33:09.758 00:33:09.758 Latency(us) 00:33:09.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:09.758 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:33:09.758 Malloc0 : 2.02 24091.85 23.53 0.00 0.00 10618.13 1837.86 16298.52 00:33:09.758 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:33:09.758 Malloc0 : 2.02 24069.81 23.51 0.00 0.00 10604.32 1823.61 14360.93 00:33:09.758 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:33:09.758 Malloc0 : 2.02 24047.87 23.48 0.00 0.00 10590.39 1816.49 12537.32 00:33:09.758 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:33:09.758 Malloc0 : 2.02 24025.99 23.46 0.00 0.00 10576.95 1852.10 10998.65 00:33:09.758 =================================================================================================================== 00:33:09.758 Total : 96235.52 93.98 0.00 0.00 10597.45 1816.49 16298.52' 00:33:09.758 02:38:59 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:33:09.758 02:38:59 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:33:09.758 02:38:59 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:33:09.758 02:38:59 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:33:09.758 02:38:59 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:33:09.758 02:38:59 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:33:09.758 02:38:59 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:33:09.758 02:38:59 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:33:09.758 02:38:59 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:33:09.758 00:33:09.758 02:38:59 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:33:09.758 02:38:59 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:33:09.758 02:38:59 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:33:09.758 02:38:59 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:33:09.758 02:38:59 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:33:09.758 02:38:59 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:33:09.759 02:38:59 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:33:09.759 02:38:59 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:33:09.759 00:33:09.759 02:38:59 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:33:09.759 02:38:59 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:33:09.759 02:38:59 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:33:09.759 02:38:59 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:33:09.759 02:38:59 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:33:09.759 02:38:59 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:33:09.759 02:38:59 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:33:09.759 02:38:59 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:33:09.759 00:33:09.759 02:38:59 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:33:09.759 02:38:59 bdevperf_config -- bdevperf/test_config.sh@32 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:33:12.294 02:39:02 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-11 02:38:59.882085] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:33:12.294 [2024-07-11 02:38:59.882150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2066467 ] 00:33:12.294 Using job config with 3 jobs 00:33:12.294 [2024-07-11 02:39:00.039577] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.294 [2024-07-11 02:39:00.110451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:12.294 cpumask for '\''job0'\'' is too big 00:33:12.294 cpumask for '\''job1'\'' is too big 00:33:12.294 cpumask for '\''job2'\'' is too big 00:33:12.294 Running I/O for 2 seconds... 00:33:12.294 00:33:12.294 Latency(us) 00:33:12.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:12.294 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:33:12.294 Malloc0 : 2.01 32604.25 31.84 0.00 0.00 7833.76 1795.12 11511.54 00:33:12.294 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:33:12.294 Malloc0 : 2.02 32616.48 31.85 0.00 0.00 7813.66 1766.62 9687.93 00:33:12.294 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:33:12.294 Malloc0 : 2.02 32586.72 31.82 0.00 0.00 7803.92 1787.99 8206.25 00:33:12.294 =================================================================================================================== 00:33:12.294 Total : 97807.44 95.52 0.00 0.00 7817.09 1766.62 11511.54' 00:33:12.294 02:39:02 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-11 02:38:59.882085] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:33:12.294 [2024-07-11 02:38:59.882150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2066467 ] 00:33:12.294 Using job config with 3 jobs 00:33:12.294 [2024-07-11 02:39:00.039577] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.294 [2024-07-11 02:39:00.110451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:12.294 cpumask for '\''job0'\'' is too big 00:33:12.294 cpumask for '\''job1'\'' is too big 00:33:12.294 cpumask for '\''job2'\'' is too big 00:33:12.294 Running I/O for 2 seconds... 
00:33:12.294 00:33:12.294 Latency(us) 00:33:12.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:12.294 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:33:12.294 Malloc0 : 2.01 32604.25 31.84 0.00 0.00 7833.76 1795.12 11511.54 00:33:12.294 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:33:12.294 Malloc0 : 2.02 32616.48 31.85 0.00 0.00 7813.66 1766.62 9687.93 00:33:12.294 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:33:12.294 Malloc0 : 2.02 32586.72 31.82 0.00 0.00 7803.92 1787.99 8206.25 00:33:12.294 =================================================================================================================== 00:33:12.294 Total : 97807.44 95.52 0.00 0.00 7817.09 1766.62 11511.54' 00:33:12.294 02:39:02 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-11 02:38:59.882085] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:33:12.294 [2024-07-11 02:38:59.882150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2066467 ] 00:33:12.294 Using job config with 3 jobs 00:33:12.294 [2024-07-11 02:39:00.039577] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.294 [2024-07-11 02:39:00.110451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:12.294 cpumask for '\''job0'\'' is too big 00:33:12.294 cpumask for '\''job1'\'' is too big 00:33:12.294 cpumask for '\''job2'\'' is too big 00:33:12.294 Running I/O for 2 seconds... 00:33:12.294 00:33:12.294 Latency(us) 00:33:12.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:12.294 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:33:12.294 Malloc0 : 2.01 32604.25 31.84 0.00 0.00 7833.76 1795.12 11511.54 00:33:12.294 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:33:12.294 Malloc0 : 2.02 32616.48 31.85 0.00 0.00 7813.66 1766.62 9687.93 00:33:12.294 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:33:12.294 Malloc0 : 2.02 32586.72 31.82 0.00 0.00 7803.92 1787.99 8206.25 00:33:12.294 =================================================================================================================== 00:33:12.294 Total : 97807.44 95.52 0.00 0.00 7817.09 1766.62 11511.54' 00:33:12.294 02:39:02 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:33:12.294 02:39:02 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:33:12.294 02:39:02 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:33:12.294 02:39:02 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:33:12.294 02:39:02 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:33:12.294 02:39:02 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:33:12.294 02:39:02 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:33:12.294 02:39:02 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:33:12.294 02:39:02 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:33:12.294 02:39:02 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 
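# ---------------------------------------------------------------------------
# The create_job calls traced above amount to appending INI-style sections to
# a bdevperf job file that is later passed via -j. A minimal sketch of that
# pattern, reconstructed only from this xtrace (the traced common.sh also
# cats shared global defaults at common.sh@13, not reproduced here);
# create_job_sketch and the test.conf path are illustrative assumptions:
#
#   create_job_sketch() {
#           local section=$1 rw=$2 filename=$3
#           {
#                   echo "[$section]"             # e.g. [global], [job0] ...
#                   [[ -n $rw ]] && echo "rw=$rw" # omitted for bare stanzas
#                   [[ -n $filename ]] && echo "filename=$filename"
#                   echo                          # blank line ends the section
#           } >>test.conf
#   }
#   create_job_sketch global rw Malloc0:Malloc1   # shared settings
#   create_job_sketch job0                        # per-job stanza, inherits global
#   # bdevperf then consumes it: bdevperf -t 2 --json conf.json -j test.conf
# ---------------------------------------------------------------------------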
00:33:12.294 02:39:02 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:33:12.294 02:39:02 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:33:12.294 02:39:02 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:33:12.294 00:33:12.294 02:39:02 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:33:12.294 02:39:02 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:33:12.294 02:39:02 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:33:12.294 02:39:02 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:33:12.294 02:39:02 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:33:12.294 02:39:02 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:33:12.294 02:39:02 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:33:12.294 02:39:02 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:33:12.295 00:33:12.295 02:39:02 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:33:12.295 02:39:02 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:33:12.295 02:39:02 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:33:12.295 02:39:02 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:33:12.295 02:39:02 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:33:12.295 02:39:02 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:33:12.295 02:39:02 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:33:12.295 02:39:02 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:33:12.295 00:33:12.295 02:39:02 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:33:12.295 02:39:02 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:33:12.295 02:39:02 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:33:12.295 02:39:02 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:33:12.295 02:39:02 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:33:12.295 02:39:02 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:33:12.295 02:39:02 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:33:12.295 02:39:02 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:33:12.295 00:33:12.295 02:39:02 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:33:12.295 02:39:02 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:33:12.295 02:39:02 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:33:12.295 02:39:02 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:33:12.295 02:39:02 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:33:12.295 02:39:02 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:33:12.295 02:39:02 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:33:12.295 02:39:02 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:33:12.295 00:33:12.295 02:39:02 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:33:12.295 02:39:02 bdevperf_config -- bdevperf/test_config.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:33:15.588 02:39:05 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-11 02:39:02.656397] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:33:15.588 [2024-07-11 02:39:02.656532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2066943 ] 00:33:15.588 Using job config with 4 jobs 00:33:15.588 [2024-07-11 02:39:02.886803] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.588 [2024-07-11 02:39:02.952905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:15.588 cpumask for '\''job0'\'' is too big 00:33:15.588 cpumask for '\''job1'\'' is too big 00:33:15.588 cpumask for '\''job2'\'' is too big 00:33:15.588 cpumask for '\''job3'\'' is too big 00:33:15.588 Running I/O for 2 seconds... 00:33:15.588 00:33:15.588 Latency(us) 00:33:15.588 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:15.588 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:33:15.588 Malloc0 : 2.03 11993.92 11.71 0.00 0.00 21335.40 3789.69 33052.94 00:33:15.588 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:33:15.588 Malloc1 : 2.03 11982.76 11.70 0.00 0.00 21335.05 4644.51 33052.94 00:33:15.588 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:33:15.588 Malloc0 : 2.03 11971.86 11.69 0.00 0.00 21277.37 3761.20 29177.77 00:33:15.588 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:33:15.588 Malloc1 : 2.05 12007.75 11.73 0.00 0.00 21197.23 4587.52 29177.77 00:33:15.588 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:33:15.588 Malloc0 : 2.05 11996.93 11.72 0.00 0.00 21138.38 3732.70 25416.57 00:33:15.588 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:33:15.588 Malloc1 : 2.05 11985.86 11.70 0.00 0.00 21137.20 4616.01 25416.57 00:33:15.588 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:33:15.588 Malloc0 : 2.05 11975.10 11.69 0.00 0.00 21082.94 3761.20 21769.35 00:33:15.588 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:33:15.588 Malloc1 : 2.05 11964.14 11.68 0.00 0.00 21081.79 4587.52 21769.35 00:33:15.588 =================================================================================================================== 00:33:15.588 Total : 95878.32 93.63 0.00 0.00 21197.71 3732.70 33052.94' 00:33:15.588 02:39:05 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-11 02:39:02.656397] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:33:15.588 [2024-07-11 02:39:02.656532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2066943 ] 00:33:15.588 Using job config with 4 jobs 00:33:15.588 [2024-07-11 02:39:02.886803] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.588 [2024-07-11 02:39:02.952905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:15.588 cpumask for '\''job0'\'' is too big 00:33:15.588 cpumask for '\''job1'\'' is too big 00:33:15.588 cpumask for '\''job2'\'' is too big 00:33:15.588 cpumask for '\''job3'\'' is too big 00:33:15.588 Running I/O for 2 seconds... 
00:33:15.588 00:33:15.588 Latency(us) 00:33:15.588 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:15.588 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:33:15.588 Malloc0 : 2.03 11993.92 11.71 0.00 0.00 21335.40 3789.69 33052.94 00:33:15.588 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:33:15.588 Malloc1 : 2.03 11982.76 11.70 0.00 0.00 21335.05 4644.51 33052.94 00:33:15.588 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:33:15.588 Malloc0 : 2.03 11971.86 11.69 0.00 0.00 21277.37 3761.20 29177.77 00:33:15.588 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:33:15.588 Malloc1 : 2.05 12007.75 11.73 0.00 0.00 21197.23 4587.52 29177.77 00:33:15.588 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:33:15.588 Malloc0 : 2.05 11996.93 11.72 0.00 0.00 21138.38 3732.70 25416.57 00:33:15.588 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:33:15.588 Malloc1 : 2.05 11985.86 11.70 0.00 0.00 21137.20 4616.01 25416.57 00:33:15.588 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:33:15.588 Malloc0 : 2.05 11975.10 11.69 0.00 0.00 21082.94 3761.20 21769.35 00:33:15.588 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:33:15.588 Malloc1 : 2.05 11964.14 11.68 0.00 0.00 21081.79 4587.52 21769.35 00:33:15.588 =================================================================================================================== 00:33:15.588 Total : 95878.32 93.63 0.00 0.00 21197.71 3732.70 33052.94' 00:33:15.588 02:39:05 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-11 02:39:02.656397] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:33:15.588 [2024-07-11 02:39:02.656532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2066943 ] 00:33:15.588 Using job config with 4 jobs 00:33:15.588 [2024-07-11 02:39:02.886803] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.588 [2024-07-11 02:39:02.952905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:15.588 cpumask for '\''job0'\'' is too big 00:33:15.588 cpumask for '\''job1'\'' is too big 00:33:15.588 cpumask for '\''job2'\'' is too big 00:33:15.588 cpumask for '\''job3'\'' is too big 00:33:15.588 Running I/O for 2 seconds... 
00:33:15.588 00:33:15.588 Latency(us) 00:33:15.588 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:15.588 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:33:15.588 Malloc0 : 2.03 11993.92 11.71 0.00 0.00 21335.40 3789.69 33052.94 00:33:15.588 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:33:15.588 Malloc1 : 2.03 11982.76 11.70 0.00 0.00 21335.05 4644.51 33052.94 00:33:15.588 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:33:15.588 Malloc0 : 2.03 11971.86 11.69 0.00 0.00 21277.37 3761.20 29177.77 00:33:15.588 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:33:15.588 Malloc1 : 2.05 12007.75 11.73 0.00 0.00 21197.23 4587.52 29177.77 00:33:15.588 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:33:15.588 Malloc0 : 2.05 11996.93 11.72 0.00 0.00 21138.38 3732.70 25416.57 00:33:15.588 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:33:15.588 Malloc1 : 2.05 11985.86 11.70 0.00 0.00 21137.20 4616.01 25416.57 00:33:15.588 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:33:15.588 Malloc0 : 2.05 11975.10 11.69 0.00 0.00 21082.94 3761.20 21769.35 00:33:15.588 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:33:15.589 Malloc1 : 2.05 11964.14 11.68 0.00 0.00 21081.79 4587.52 21769.35 00:33:15.589 =================================================================================================================== 00:33:15.589 Total : 95878.32 93.63 0.00 0.00 21197.71 3732.70 33052.94' 00:33:15.589 02:39:05 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:33:15.589 02:39:05 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:33:15.589 02:39:05 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:33:15.589 02:39:05 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:33:15.589 02:39:05 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:33:15.589 02:39:05 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:33:15.589 00:33:15.589 real 0m11.129s 00:33:15.589 user 0m9.668s 00:33:15.589 sys 0m1.287s 00:33:15.589 02:39:05 bdevperf_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:15.589 02:39:05 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:33:15.589 ************************************ 00:33:15.589 END TEST bdevperf_config 00:33:15.589 ************************************ 00:33:15.589 02:39:05 -- common/autotest_common.sh@1142 -- # return 0 00:33:15.589 02:39:05 -- spdk/autotest.sh@192 -- # uname -s 00:33:15.589 02:39:05 -- spdk/autotest.sh@192 -- # [[ Linux == Linux ]] 00:33:15.589 02:39:05 -- spdk/autotest.sh@193 -- # run_test reactor_set_interrupt /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:33:15.589 02:39:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:15.589 02:39:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:15.589 02:39:05 -- common/autotest_common.sh@10 -- # set +x 00:33:15.589 ************************************ 00:33:15.589 START TEST reactor_set_interrupt 00:33:15.589 ************************************ 00:33:15.589 02:39:05 
reactor_set_interrupt -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:33:15.589 * Looking for test storage... 00:33:15.589 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:33:15.589 02:39:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:33:15.589 02:39:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:33:15.589 02:39:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:33:15.589 02:39:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:33:15.589 02:39:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 00:33:15.589 02:39:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:33:15.589 02:39:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:33:15.589 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:33:15.589 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:33:15.589 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:33:15.589 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:33:15.589 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:33:15.589 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:33:15.589 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:33:15.589 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n 
00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_CET=n 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@47 
-- # CONFIG_COVERAGE=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_FC=n 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:33:15.589 02:39:05 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:33:15.590 
02:39:05 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:33:15.590 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:33:15.590 02:39:05 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:33:15.590 02:39:05 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:33:15.590 02:39:05 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:33:15.590 02:39:05 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:33:15.590 02:39:05 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:33:15.590 02:39:05 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:33:15.590 02:39:05 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:33:15.590 02:39:05 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:33:15.590 02:39:05 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:33:15.590 02:39:05 reactor_set_interrupt -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:33:15.590 02:39:05 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:33:15.590 02:39:05 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:33:15.590 02:39:05 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:33:15.590 02:39:05 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:33:15.590 02:39:05 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:33:15.590 #define SPDK_CONFIG_H 00:33:15.590 #define SPDK_CONFIG_APPS 1 00:33:15.590 #define SPDK_CONFIG_ARCH native 00:33:15.590 #undef SPDK_CONFIG_ASAN 00:33:15.590 #undef SPDK_CONFIG_AVAHI 00:33:15.590 #undef SPDK_CONFIG_CET 00:33:15.590 #define SPDK_CONFIG_COVERAGE 1 00:33:15.590 #define SPDK_CONFIG_CROSS_PREFIX 00:33:15.590 #define SPDK_CONFIG_CRYPTO 1 00:33:15.590 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:33:15.590 #undef SPDK_CONFIG_CUSTOMOCF 00:33:15.590 #undef SPDK_CONFIG_DAOS 00:33:15.590 #define SPDK_CONFIG_DAOS_DIR 00:33:15.590 #define SPDK_CONFIG_DEBUG 1 00:33:15.590 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:33:15.590 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/dpdk/build 00:33:15.590 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:33:15.590 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:33:15.590 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:33:15.590 #undef SPDK_CONFIG_DPDK_UADK 00:33:15.590 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:33:15.590 #define SPDK_CONFIG_EXAMPLES 1 00:33:15.590 #undef SPDK_CONFIG_FC 00:33:15.590 #define SPDK_CONFIG_FC_PATH 00:33:15.590 #define SPDK_CONFIG_FIO_PLUGIN 1 00:33:15.590 #define 
SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:33:15.590 #undef SPDK_CONFIG_FUSE 00:33:15.590 #undef SPDK_CONFIG_FUZZER 00:33:15.590 #define SPDK_CONFIG_FUZZER_LIB 00:33:15.590 #undef SPDK_CONFIG_GOLANG 00:33:15.590 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:33:15.590 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:33:15.590 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:33:15.590 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:33:15.590 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:33:15.590 #undef SPDK_CONFIG_HAVE_LIBBSD 00:33:15.590 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:33:15.590 #define SPDK_CONFIG_IDXD 1 00:33:15.590 #define SPDK_CONFIG_IDXD_KERNEL 1 00:33:15.590 #define SPDK_CONFIG_IPSEC_MB 1 00:33:15.590 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib 00:33:15.590 #define SPDK_CONFIG_ISAL 1 00:33:15.590 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:33:15.590 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:33:15.590 #define SPDK_CONFIG_LIBDIR 00:33:15.590 #undef SPDK_CONFIG_LTO 00:33:15.590 #define SPDK_CONFIG_MAX_LCORES 128 00:33:15.590 #define SPDK_CONFIG_NVME_CUSE 1 00:33:15.590 #undef SPDK_CONFIG_OCF 00:33:15.590 #define SPDK_CONFIG_OCF_PATH 00:33:15.590 #define SPDK_CONFIG_OPENSSL_PATH 00:33:15.590 #undef SPDK_CONFIG_PGO_CAPTURE 00:33:15.590 #define SPDK_CONFIG_PGO_DIR 00:33:15.590 #undef SPDK_CONFIG_PGO_USE 00:33:15.590 #define SPDK_CONFIG_PREFIX /usr/local 00:33:15.590 #undef SPDK_CONFIG_RAID5F 00:33:15.590 #undef SPDK_CONFIG_RBD 00:33:15.590 #define SPDK_CONFIG_RDMA 1 00:33:15.590 #define SPDK_CONFIG_RDMA_PROV verbs 00:33:15.590 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:33:15.590 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:33:15.590 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:33:15.590 #define SPDK_CONFIG_SHARED 1 00:33:15.590 #undef SPDK_CONFIG_SMA 00:33:15.590 #define SPDK_CONFIG_TESTS 1 00:33:15.590 #undef SPDK_CONFIG_TSAN 00:33:15.590 #define SPDK_CONFIG_UBLK 1 00:33:15.590 #define SPDK_CONFIG_UBSAN 1 00:33:15.590 #undef SPDK_CONFIG_UNIT_TESTS 00:33:15.590 #undef SPDK_CONFIG_URING 00:33:15.590 #define SPDK_CONFIG_URING_PATH 00:33:15.590 #undef SPDK_CONFIG_URING_ZNS 00:33:15.590 #undef SPDK_CONFIG_USDT 00:33:15.590 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:33:15.590 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:33:15.590 #undef SPDK_CONFIG_VFIO_USER 00:33:15.590 #define SPDK_CONFIG_VFIO_USER_DIR 00:33:15.590 #define SPDK_CONFIG_VHOST 1 00:33:15.590 #define SPDK_CONFIG_VIRTIO 1 00:33:15.590 #undef SPDK_CONFIG_VTUNE 00:33:15.590 #define SPDK_CONFIG_VTUNE_DIR 00:33:15.590 #define SPDK_CONFIG_WERROR 1 00:33:15.590 #define SPDK_CONFIG_WPDK_DIR 00:33:15.590 #undef SPDK_CONFIG_XNVME 00:33:15.590 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:33:15.590 02:39:05 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:33:15.590 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:33:15.590 02:39:05 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:15.590 02:39:05 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:15.590 02:39:05 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:15.590 02:39:05 reactor_set_interrupt -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.590 02:39:05 reactor_set_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.590 02:39:05 reactor_set_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.590 02:39:05 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH 00:33:15.590 02:39:05 reactor_set_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.590 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:33:15.590 02:39:05 reactor_set_interrupt -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:33:15.590 02:39:05 reactor_set_interrupt -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:33:15.590 02:39:05 reactor_set_interrupt -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:33:15.590 02:39:05 reactor_set_interrupt -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:33:15.590 02:39:05 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:33:15.590 02:39:05 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:33:15.590 02:39:05 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:33:15.590 02:39:05 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:33:15.590 02:39:05 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:33:15.590 02:39:05 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:33:15.590 02:39:05 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:33:15.590 02:39:05 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:33:15.590 02:39:05 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:33:15.590 
02:39:05 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:33:15.590 02:39:05 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:33:15.590 02:39:05 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:33:15.590 02:39:05 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:33:15.590 02:39:05 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:33:15.590 02:39:05 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:33:15.590 02:39:05 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:33:15.590 02:39:05 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:33:15.590 02:39:05 reactor_set_interrupt -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:33:15.590 02:39:05 reactor_set_interrupt -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:33:15.590 02:39:05 reactor_set_interrupt -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:33:15.590 02:39:05 reactor_set_interrupt -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:33:15.590 02:39:05 reactor_set_interrupt -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:33:15.590 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 1 00:33:15.590 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:33:15.590 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0 00:33:15.590 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:33:15.590 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0 00:33:15.590 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:33:15.590 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@70 -- # : 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 1 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- 
common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@96 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 1 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:33:15.591 
02:39:05 reactor_set_interrupt -- common/autotest_common.sh@120 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/crypto-phy-autotest/dpdk/build 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 1 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : v22.11.4 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : true 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@142 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@154 -- # : 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:33:15.591 02:39:05 reactor_set_interrupt -- 
common/autotest_common.sh@156 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@158 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@167 -- # : 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@169 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:33:15.591 02:39:05 
reactor_set_interrupt -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:33:15.591 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@200 -- # cat 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 
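
The trace above shows autotest_common.sh preparing the sanitizer environment before the interrupt test proper starts. A minimal sketch of that setup, reconstructed from the traced commands (the paths and option strings are copied from the log; the standalone-script framing and the exact file-construction order are assumptions, not the script's verbatim source):

    #!/bin/bash
    # Build an LSAN suppression file so the known libfuse3 leak does not fail the
    # run, then export the sanitizer options the trace shows being set.
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" >> "$asan_suppression_file"   # suppression entry seen in the trace
    export LSAN_OPTIONS="suppressions=$asan_suppression_file"
    export ASAN_OPTIONS="new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0"
    export UBSAN_OPTIONS="halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134"
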
00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@263 -- # export valgrind= 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@263 -- # valgrind= 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@269 -- # uname -s 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@272 -- # [[ 1 -eq 1 ]] 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@276 -- # export HUGE_EVEN_ALLOC=yes 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@276 -- # HUGE_EVEN_ALLOC=yes 00:33:15.592 02:39:05 reactor_set_interrupt -- 
common/autotest_common.sh@279 -- # MAKE=make 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j72 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@299 -- # TEST_MODE= 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@318 -- # [[ -z 2067700 ]] 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@318 -- # kill -0 2067700 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@331 -- # local mount target_dir 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.gVTnhC 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.gVTnhC/tests/interrupt /tmp/spdk.gVTnhC 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@327 -- # df -T 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:33:15.592 02:39:05 reactor_set_interrupt -- 
common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=893108224 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4391321600 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=82033504256 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=94508572672 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=12475068416 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=47198650368 00:33:15.592 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=47254286336 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=18892214272 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=18901716992 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=9502720 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=47253188608 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=47254286336 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=1097728 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=9450852352 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@362 -- # 
sizes["$mount"]=9450856448 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:33:15.593 * Looking for test storage... 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@368 -- # local target_space new_size 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@372 -- # mount=/ 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@374 -- # target_space=82033504256 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@381 -- # new_size=14689660928 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:33:15.593 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@389 -- # return 0 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # set -o errtrace 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@1687 -- # true 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@1689 -- # xtrace_fd 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:33:15.593 02:39:05 reactor_set_interrupt -- 
common/autotest_common.sh@27 -- # exec 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:33:15.593 02:39:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:33:15.593 02:39:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:15.593 02:39:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:33:15.593 02:39:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:33:15.593 02:39:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:33:15.593 02:39:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:33:15.593 02:39:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:33:15.593 02:39:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:33:15.593 02:39:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:33:15.593 02:39:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:33:15.593 02:39:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:15.593 02:39:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:33:15.593 02:39:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=2067834 00:33:15.593 02:39:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:15.593 02:39:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:33:15.593 02:39:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 2067834 /var/tmp/spdk.sock 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@829 -- # '[' -z 2067834 ']' 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:15.593 
02:39:05 reactor_set_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:15.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:15.593 02:39:05 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:15.593 [2024-07-11 02:39:05.849526] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:33:15.593 [2024-07-11 02:39:05.849596] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2067834 ] 00:33:15.593 [2024-07-11 02:39:05.989440] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:15.853 [2024-07-11 02:39:06.044548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:15.853 [2024-07-11 02:39:06.046793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:15.853 [2024-07-11 02:39:06.046797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:15.853 [2024-07-11 02:39:06.113059] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:16.422 02:39:06 reactor_set_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:16.422 02:39:06 reactor_set_interrupt -- common/autotest_common.sh@862 -- # return 0 00:33:16.422 02:39:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:33:16.422 02:39:06 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:16.681 Malloc0 00:33:16.681 Malloc1 00:33:16.681 Malloc2 00:33:16.681 02:39:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:33:16.681 02:39:07 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:33:16.681 02:39:07 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:33:16.681 02:39:07 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:33:16.681 5000+0 records in 00:33:16.681 5000+0 records out 00:33:16.681 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0254105 s, 403 MB/s 00:33:16.681 02:39:07 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:33:16.940 AIO0 00:33:16.940 02:39:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 2067834 00:33:16.940 02:39:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 2067834 without_thd 00:33:16.940 02:39:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=2067834 00:33:16.940 02:39:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:33:16.940 02:39:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:33:16.940 02:39:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # 
reactor_get_thread_ids 0x1 00:33:16.940 02:39:07 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:33:16.940 02:39:07 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:33:16.940 02:39:07 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:33:16.940 02:39:07 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:33:16.940 02:39:07 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:33:16.940 02:39:07 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:33:17.200 02:39:07 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:33:17.200 02:39:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:33:17.200 02:39:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:33:17.200 02:39:07 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:33:17.200 02:39:07 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:33:17.200 02:39:07 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:33:17.200 02:39:07 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:33:17.200 02:39:07 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:33:17.200 02:39:07 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:33:17.460 02:39:07 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:33:17.460 02:39:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:33:17.460 02:39:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:33:17.460 spdk_thread ids are 1 on reactor0. 
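
At this point the test has resolved which spdk_thread ids sit on each reactor by asking the target for thread stats over the UNIX-socket RPC and filtering by cpumask. A sketch of that lookup, assembled from the rpc.py and jq invocations visible in the trace (the function wrapper mirrors the traced interrupt/common.sh helper; the jq filter string is copied from the log):

    # Map a reactor cpumask (e.g. 0x1) to the spdk_thread ids pinned to it.
    reactor_get_thread_ids() {
        local reactor_cpumask=$(( $1 ))   # 0x1 -> 1; thread_get_stats reports masks in this form
        /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats \
            | jq --arg reactor_cpumask "$reactor_cpumask" \
                 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
    }
    thd0_ids=($(reactor_get_thread_ids 0x1))   # reactor 0 -> thread id 1 in this run
    thd2_ids=($(reactor_get_thread_ids 0x4))   # reactor 2 -> no threads yet, hence the empty echo above
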
00:33:17.460 02:39:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:33:17.460 02:39:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2067834 0 00:33:17.460 02:39:07 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2067834 0 idle 00:33:17.460 02:39:07 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2067834 00:33:17.460 02:39:07 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:17.460 02:39:07 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:17.460 02:39:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:33:17.460 02:39:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:33:17.460 02:39:07 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:33:17.460 02:39:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:33:17.460 02:39:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:33:17.460 02:39:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2067834 -w 256 00:33:17.460 02:39:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:33:17.720 02:39:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2067834 root 20 0 128.2g 34560 21888 S 0.0 0.0 0:00.36 reactor_0' 00:33:17.720 02:39:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2067834 root 20 0 128.2g 34560 21888 S 0.0 0.0 0:00.36 reactor_0 00:33:17.720 02:39:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:33:17.720 02:39:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:33:17.720 02:39:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:33:17.720 02:39:08 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:33:17.720 02:39:08 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:33:17.720 02:39:08 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:33:17.720 02:39:08 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:33:17.720 02:39:08 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:33:17.720 02:39:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:33:17.720 02:39:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2067834 1 00:33:17.720 02:39:08 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2067834 1 idle 00:33:17.720 02:39:08 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2067834 00:33:17.720 02:39:08 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:17.720 02:39:08 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:17.720 02:39:08 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:33:17.720 02:39:08 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:33:17.720 02:39:08 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:33:17.720 02:39:08 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:33:17.720 02:39:08 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:33:17.720 02:39:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2067834 -w 256 00:33:17.720 02:39:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 
00:33:17.979 02:39:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2067858 root 20 0 128.2g 34560 21888 S 0.0 0.0 0:00.00 reactor_1' 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2067858 root 20 0 128.2g 34560 21888 S 0.0 0.0 0:00.00 reactor_1 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2067834 2 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2067834 2 idle 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2067834 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2067834 -w 256 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2067859 root 20 0 128.2g 34560 21888 S 0.0 0.0 0:00.00 reactor_2' 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2067859 root 20 0 128.2g 34560 21888 S 0.0 0.0 0:00.00 reactor_2 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:33:17.980 02:39:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:33:18.240 02:39:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 
00:33:18.240 02:39:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:33:18.240 [2024-07-11 02:39:08.635920] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:18.240 02:39:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:33:18.499 [2024-07-11 02:39:08.887591] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:33:18.499 [2024-07-11 02:39:08.888048] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:33:18.499 02:39:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:33:18.759 [2024-07-11 02:39:09.131514] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:33:18.759 [2024-07-11 02:39:09.131740] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:33:18.759 02:39:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:33:18.759 02:39:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 2067834 0 00:33:18.759 02:39:09 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 2067834 0 busy 00:33:18.759 02:39:09 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2067834 00:33:18.759 02:39:09 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:18.759 02:39:09 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:18.759 02:39:09 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:33:18.759 02:39:09 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:33:18.759 02:39:09 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:33:18.759 02:39:09 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:33:18.759 02:39:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2067834 -w 256 00:33:18.759 02:39:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:33:19.018 02:39:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2067834 root 20 0 128.2g 34560 21888 R 99.9 0.0 0:00.79 reactor_0' 00:33:19.018 02:39:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2067834 root 20 0 128.2g 34560 21888 R 99.9 0.0 0:00.79 reactor_0 00:33:19.018 02:39:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:33:19.018 02:39:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:33:19.018 02:39:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:33:19.018 02:39:09 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:33:19.018 02:39:09 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:33:19.018 02:39:09 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:33:19.018 02:39:09 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:33:19.018 02:39:09 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:33:19.018 02:39:09 
reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:33:19.018 02:39:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 2067834 2 00:33:19.018 02:39:09 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 2067834 2 busy 00:33:19.018 02:39:09 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2067834 00:33:19.018 02:39:09 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:33:19.018 02:39:09 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:19.018 02:39:09 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:33:19.018 02:39:09 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:33:19.018 02:39:09 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:33:19.018 02:39:09 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:33:19.018 02:39:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2067834 -w 256 00:33:19.018 02:39:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:33:19.277 02:39:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2067859 root 20 0 128.2g 34560 21888 R 93.8 0.0 0:00.36 reactor_2' 00:33:19.277 02:39:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2067859 root 20 0 128.2g 34560 21888 R 93.8 0.0 0:00.36 reactor_2 00:33:19.277 02:39:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:33:19.277 02:39:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:33:19.277 02:39:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=93.8 00:33:19.277 02:39:09 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=93 00:33:19.277 02:39:09 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:33:19.277 02:39:09 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 93 -lt 70 ]] 00:33:19.277 02:39:09 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:33:19.277 02:39:09 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:33:19.277 02:39:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:33:19.277 [2024-07-11 02:39:09.683512] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:33:19.277 [2024-07-11 02:39:09.683645] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:33:19.536 02:39:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:33:19.536 02:39:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 2067834 2 00:33:19.536 02:39:09 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2067834 2 idle 00:33:19.536 02:39:09 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2067834 00:33:19.536 02:39:09 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:33:19.536 02:39:09 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:19.536 02:39:09 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:33:19.536 02:39:09 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:33:19.536 02:39:09 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:33:19.536 02:39:09 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:33:19.536 02:39:09 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:33:19.536 02:39:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2067834 -w 256 00:33:19.536 02:39:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:33:19.536 02:39:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2067859 root 20 0 128.2g 34560 21888 S 0.0 0.0 0:00.54 reactor_2' 00:33:19.536 02:39:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2067859 root 20 0 128.2g 34560 21888 S 0.0 0.0 0:00.54 reactor_2 00:33:19.536 02:39:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:33:19.536 02:39:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:33:19.536 02:39:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:33:19.536 02:39:09 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:33:19.537 02:39:09 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:33:19.537 02:39:09 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:33:19.537 02:39:09 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:33:19.537 02:39:09 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:33:19.537 02:39:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:33:19.796 [2024-07-11 02:39:10.131515] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:33:19.796 [2024-07-11 02:39:10.131693] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:33:19.796 02:39:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:33:19.796 02:39:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:33:19.796 02:39:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:33:20.055 [2024-07-11 02:39:10.323818] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
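Stripped of the polling checks, the RPC sequence this first leg (the one run with without_thd set) just completed is short. Every call below appears verbatim in the trace; $rpc stands in for the absolute scripts/rpc.py path, and thread id 1 is the app_thread id that thread_get_stats returned earlier:

    rpc=./scripts/rpc.py
    $rpc thread_set_cpumask -i 1 -m 0x2                             # park app_thread off reactor 0
    $rpc --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d  # reactor 0: interrupt -> poll
    $rpc --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d  # reactor 2: interrupt -> poll
    # ... reactor_is_busy: both reactors now spin near 100% CPU ...
    $rpc --plugin interrupt_plugin reactor_set_interrupt_mode 2     # reactor 2: poll -> interrupt
    $rpc --plugin interrupt_plugin reactor_set_interrupt_mode 0     # reactor 0: poll -> interrupt
    $rpc thread_set_cpumask -i 1 -m 0x1                             # restore app_thread to reactor 0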
00:33:20.056 02:39:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 2067834 0 00:33:20.056 02:39:10 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2067834 0 idle 00:33:20.056 02:39:10 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2067834 00:33:20.056 02:39:10 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:20.056 02:39:10 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:20.056 02:39:10 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:33:20.056 02:39:10 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:33:20.056 02:39:10 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:33:20.056 02:39:10 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:33:20.056 02:39:10 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:33:20.056 02:39:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2067834 -w 256 00:33:20.056 02:39:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:33:20.315 02:39:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2067834 root 20 0 128.2g 34560 21888 S 0.0 0.0 0:01.60 reactor_0' 00:33:20.315 02:39:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2067834 root 20 0 128.2g 34560 21888 S 0.0 0.0 0:01.60 reactor_0 00:33:20.315 02:39:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:33:20.315 02:39:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:33:20.315 02:39:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:33:20.315 02:39:10 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:33:20.315 02:39:10 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:33:20.315 02:39:10 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:33:20.315 02:39:10 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:33:20.315 02:39:10 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:33:20.315 02:39:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:33:20.315 02:39:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:33:20.315 02:39:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:33:20.315 02:39:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 2067834 00:33:20.315 02:39:10 reactor_set_interrupt -- common/autotest_common.sh@948 -- # '[' -z 2067834 ']' 00:33:20.315 02:39:10 reactor_set_interrupt -- common/autotest_common.sh@952 -- # kill -0 2067834 00:33:20.315 02:39:10 reactor_set_interrupt -- common/autotest_common.sh@953 -- # uname 00:33:20.315 02:39:10 reactor_set_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:20.315 02:39:10 reactor_set_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2067834 00:33:20.315 02:39:10 reactor_set_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:20.315 02:39:10 reactor_set_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:20.315 02:39:10 reactor_set_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2067834' 00:33:20.315 killing process with pid 2067834 00:33:20.315 02:39:10 reactor_set_interrupt -- 
common/autotest_common.sh@967 -- # kill 2067834 00:33:20.315 02:39:10 reactor_set_interrupt -- common/autotest_common.sh@972 -- # wait 2067834 00:33:20.573 02:39:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:33:20.573 02:39:10 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:33:20.573 02:39:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:33:20.573 02:39:10 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:20.573 02:39:10 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:33:20.573 02:39:10 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=2068536 00:33:20.573 02:39:10 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:20.573 02:39:10 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:33:20.573 02:39:10 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 2068536 /var/tmp/spdk.sock 00:33:20.573 02:39:10 reactor_set_interrupt -- common/autotest_common.sh@829 -- # '[' -z 2068536 ']' 00:33:20.573 02:39:10 reactor_set_interrupt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:20.573 02:39:10 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:20.573 02:39:10 reactor_set_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:20.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:20.573 02:39:10 reactor_set_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:20.573 02:39:10 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:20.573 [2024-07-11 02:39:10.878403] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:33:20.573 [2024-07-11 02:39:10.878471] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2068536 ] 00:33:20.832 [2024-07-11 02:39:11.013647] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:20.832 [2024-07-11 02:39:11.064088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:20.832 [2024-07-11 02:39:11.064188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:20.832 [2024-07-11 02:39:11.064190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:20.832 [2024-07-11 02:39:11.130190] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
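killprocess, used here to tear down the first target before the second leg starts, follows a standard pattern in these tests. A condensed sketch keeping only the path taken in the trace (the sudo-wrapper branch and non-Linux path are trimmed; what the sudo branch would do is not visible in this log):

    # Kill a target process and reap it (happy path from the trace above).
    killprocess() {
        local pid=$1
        [[ -z $pid ]] && return 1
        kill -0 "$pid" || return 0                           # nothing to kill, already gone
        local process_name=
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 here
        fi
        if [[ $process_name != sudo ]]; then                 # sudo wrappers need different handling
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"                                      # reap so the pid cannot be reused mid-test
        fi
    }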
00:33:20.832 02:39:11 reactor_set_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:20.832 02:39:11 reactor_set_interrupt -- common/autotest_common.sh@862 -- # return 0 00:33:20.832 02:39:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:33:20.832 02:39:11 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:21.091 Malloc0 00:33:21.091 Malloc1 00:33:21.091 Malloc2 00:33:21.091 02:39:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:33:21.091 02:39:11 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:33:21.091 02:39:11 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:33:21.091 02:39:11 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:33:21.091 5000+0 records in 00:33:21.091 5000+0 records out 00:33:21.091 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0266804 s, 384 MB/s 00:33:21.091 02:39:11 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:33:21.350 AIO0 00:33:21.350 02:39:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 2068536 00:33:21.350 02:39:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 2068536 00:33:21.350 02:39:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=2068536 00:33:21.350 02:39:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:33:21.350 02:39:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:33:21.350 02:39:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:33:21.350 02:39:11 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:33:21.351 02:39:11 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:33:21.351 02:39:11 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:33:21.351 02:39:11 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:33:21.351 02:39:11 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:33:21.351 02:39:11 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:33:21.610 02:39:11 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:33:21.610 02:39:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:33:21.610 02:39:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:33:21.610 02:39:11 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:33:21.610 02:39:11 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:33:21.610 02:39:11 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:33:21.610 02:39:11 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == 
$reactor_cpumask)|.id' 00:33:21.610 02:39:11 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:33:21.610 02:39:11 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:33:21.870 02:39:12 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:33:21.870 02:39:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:33:21.870 02:39:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:33:21.870 spdk_thread ids are 1 on reactor0. 00:33:21.870 02:39:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:33:21.870 02:39:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2068536 0 00:33:21.870 02:39:12 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2068536 0 idle 00:33:21.870 02:39:12 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2068536 00:33:21.870 02:39:12 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:21.870 02:39:12 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:21.870 02:39:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:33:21.870 02:39:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:33:21.870 02:39:12 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:33:21.870 02:39:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:33:21.870 02:39:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:33:21.870 02:39:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2068536 -w 256 00:33:21.870 02:39:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:33:22.130 02:39:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2068536 root 20 0 128.2g 35712 22464 S 0.0 0.0 0:00.33 reactor_0' 00:33:22.130 02:39:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2068536 root 20 0 128.2g 35712 22464 S 0.0 0.0 0:00.33 reactor_0 00:33:22.130 02:39:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:33:22.130 02:39:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:33:22.130 02:39:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:33:22.130 02:39:12 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:33:22.130 02:39:12 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:33:22.130 02:39:12 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:33:22.130 02:39:12 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:33:22.130 02:39:12 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:33:22.130 02:39:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:33:22.130 02:39:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2068536 1 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2068536 1 idle 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2068536 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:22.131 02:39:12 reactor_set_interrupt -- 
interrupt/common.sh@12 -- # local state=idle 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2068536 -w 256 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2068544 root 20 0 128.2g 35712 22464 S 0.0 0.0 0:00.00 reactor_1' 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2068544 root 20 0 128.2g 35712 22464 S 0.0 0.0 0:00.00 reactor_1 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2068536 2 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2068536 2 idle 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2068536 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2068536 -w 256 00:33:22.131 02:39:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:33:22.390 02:39:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2068545 root 20 0 128.2g 35712 22464 S 0.0 0.0 0:00.00 reactor_2' 00:33:22.390 02:39:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2068545 root 20 0 128.2g 35712 22464 S 0.0 0.0 0:00.00 reactor_2 00:33:22.390 02:39:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:33:22.390 02:39:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:33:22.390 02:39:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 
00:33:22.390 02:39:12 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:33:22.390 02:39:12 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:33:22.390 02:39:12 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:33:22.390 02:39:12 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:33:22.390 02:39:12 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:33:22.391 02:39:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:33:22.391 02:39:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:33:22.650 [2024-07-11 02:39:12.956870] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:33:22.650 [2024-07-11 02:39:12.957059] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:33:22.650 [2024-07-11 02:39:12.957237] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:33:22.650 02:39:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:33:22.909 [2024-07-11 02:39:13.205409] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:33:22.909 [2024-07-11 02:39:13.205631] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:33:22.909 02:39:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:33:22.909 02:39:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 2068536 0 00:33:22.909 02:39:13 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 2068536 0 busy 00:33:22.909 02:39:13 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2068536 00:33:22.909 02:39:13 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:22.909 02:39:13 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:22.909 02:39:13 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:33:22.909 02:39:13 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:33:22.909 02:39:13 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:33:22.909 02:39:13 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:33:22.909 02:39:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2068536 -w 256 00:33:22.909 02:39:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2068536 root 20 0 128.2g 35712 22464 R 99.9 0.0 0:00.76 reactor_0' 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2068536 root 20 0 128.2g 35712 22464 R 99.9 0.0 0:00.76 reactor_0 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:33:23.168 02:39:13 
reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 2068536 2 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 2068536 2 busy 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2068536 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2068536 -w 256 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2068545 root 20 0 128.2g 35712 22464 R 93.3 0.0 0:00.35 reactor_2' 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2068545 root 20 0 128.2g 35712 22464 R 93.3 0.0 0:00.35 reactor_2 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=93.3 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=93 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 93 -lt 70 ]] 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:33:23.168 02:39:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:33:23.427 [2024-07-11 02:39:13.807113] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:33:23.427 [2024-07-11 02:39:13.807232] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:33:23.427 02:39:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:33:23.427 02:39:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 2068536 2 00:33:23.427 02:39:13 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2068536 2 idle 00:33:23.427 02:39:13 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2068536 00:33:23.427 02:39:13 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:33:23.427 02:39:13 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:23.427 02:39:13 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:33:23.427 02:39:13 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:33:23.427 02:39:13 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:33:23.427 02:39:13 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:33:23.427 02:39:13 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:33:23.427 02:39:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2068536 -w 256 00:33:23.427 02:39:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:33:23.685 02:39:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2068545 root 20 0 128.2g 35712 22464 S 0.0 0.0 0:00.59 reactor_2' 00:33:23.685 02:39:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2068545 root 20 0 128.2g 35712 22464 S 0.0 0.0 0:00.59 reactor_2 00:33:23.685 02:39:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:33:23.685 02:39:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:33:23.685 02:39:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:33:23.685 02:39:14 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:33:23.685 02:39:14 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:33:23.685 02:39:14 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:33:23.685 02:39:14 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:33:23.685 02:39:14 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:33:23.685 02:39:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:33:24.255 [2024-07-11 02:39:14.504954] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:33:24.255 [2024-07-11 02:39:14.505199] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
00:33:24.255 [2024-07-11 02:39:14.505225] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:33:24.255 02:39:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:33:24.255 02:39:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 2068536 0 00:33:24.255 02:39:14 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2068536 0 idle 00:33:24.255 02:39:14 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2068536 00:33:24.255 02:39:14 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:24.255 02:39:14 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:24.255 02:39:14 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:33:24.255 02:39:14 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:33:24.255 02:39:14 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:33:24.255 02:39:14 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:33:24.255 02:39:14 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:33:24.255 02:39:14 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2068536 -w 256 00:33:24.255 02:39:14 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:33:24.515 02:39:14 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2068536 root 20 0 128.2g 35712 22464 S 6.7 0.0 0:01.88 reactor_0' 00:33:24.515 02:39:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2068536 root 20 0 128.2g 35712 22464 S 6.7 0.0 0:01.88 reactor_0 00:33:24.515 02:39:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:33:24.515 02:39:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:33:24.515 02:39:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=6.7 00:33:24.515 02:39:14 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=6 00:33:24.515 02:39:14 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:33:24.515 02:39:14 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:33:24.515 02:39:14 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 6 -gt 30 ]] 00:33:24.515 02:39:14 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:33:24.515 02:39:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:33:24.515 02:39:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:33:24.515 02:39:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:33:24.515 02:39:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 2068536 00:33:24.515 02:39:14 reactor_set_interrupt -- common/autotest_common.sh@948 -- # '[' -z 2068536 ']' 00:33:24.515 02:39:14 reactor_set_interrupt -- common/autotest_common.sh@952 -- # kill -0 2068536 00:33:24.515 02:39:14 reactor_set_interrupt -- common/autotest_common.sh@953 -- # uname 00:33:24.515 02:39:14 reactor_set_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:24.515 02:39:14 reactor_set_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2068536 00:33:24.515 02:39:14 reactor_set_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:24.515 02:39:14 reactor_set_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = 
sudo ']' 00:33:24.515 02:39:14 reactor_set_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2068536' 00:33:24.515 killing process with pid 2068536 00:33:24.515 02:39:14 reactor_set_interrupt -- common/autotest_common.sh@967 -- # kill 2068536 00:33:24.515 02:39:14 reactor_set_interrupt -- common/autotest_common.sh@972 -- # wait 2068536 00:33:24.774 02:39:15 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:33:24.774 02:39:15 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:33:24.774 00:33:24.774 real 0m9.532s 00:33:24.774 user 0m9.465s 00:33:24.774 sys 0m2.193s 00:33:24.774 02:39:15 reactor_set_interrupt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:24.774 02:39:15 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:24.775 ************************************ 00:33:24.775 END TEST reactor_set_interrupt 00:33:24.775 ************************************ 00:33:24.775 02:39:15 -- common/autotest_common.sh@1142 -- # return 0 00:33:24.775 02:39:15 -- spdk/autotest.sh@194 -- # run_test reap_unregistered_poller /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:33:24.775 02:39:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:24.775 02:39:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:24.775 02:39:15 -- common/autotest_common.sh@10 -- # set +x 00:33:24.775 ************************************ 00:33:24.775 START TEST reap_unregistered_poller 00:33:24.775 ************************************ 00:33:24.775 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:33:25.038 * Looking for test storage... 00:33:25.038 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:33:25.038 02:39:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:33:25.038 02:39:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:33:25.038 02:39:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:33:25.038 02:39:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:33:25.038 02:39:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 
00:33:25.038 02:39:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:33:25.038 02:39:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:33:25.038 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:33:25.038 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:33:25.038 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:33:25.038 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:33:25.038 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:33:25.038 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:33:25.038 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:33:25.038 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_CET=n 
00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:33:25.038 02:39:15 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:33:25.039 02:39:15 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:33:25.039 02:39:15 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:33:25.039 02:39:15 
reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:33:25.039 02:39:15 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:33:25.039 02:39:15 reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib 00:33:25.039 02:39:15 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:33:25.039 02:39:15 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:33:25.039 02:39:15 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:33:25.039 02:39:15 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:33:25.039 02:39:15 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:33:25.039 02:39:15 reap_unregistered_poller -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:33:25.039 02:39:15 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:33:25.039 02:39:15 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:33:25.039 02:39:15 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:33:25.039 02:39:15 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:33:25.039 02:39:15 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_FC=n 00:33:25.039 02:39:15 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:33:25.039 02:39:15 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:33:25.039 02:39:15 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:33:25.039 02:39:15 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:33:25.039 02:39:15 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:33:25.039 02:39:15 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:33:25.039 02:39:15 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:33:25.039 02:39:15 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 00:33:25.039 02:39:15 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:33:25.039 02:39:15 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:33:25.039 02:39:15 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:33:25.039 02:39:15 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:33:25.039 02:39:15 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:33:25.039 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:33:25.039 02:39:15 reap_unregistered_poller -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:33:25.039 02:39:15 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:33:25.039 02:39:15 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:33:25.039 02:39:15 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 
00:33:25.039 02:39:15 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:33:25.039 02:39:15 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:33:25.039 02:39:15 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:33:25.039 02:39:15 reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:33:25.039 02:39:15 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:33:25.039 02:39:15 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:33:25.039 02:39:15 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:33:25.039 02:39:15 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:33:25.039 02:39:15 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:33:25.039 02:39:15 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:33:25.039 02:39:15 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:33:25.039 #define SPDK_CONFIG_H 00:33:25.039 #define SPDK_CONFIG_APPS 1 00:33:25.039 #define SPDK_CONFIG_ARCH native 00:33:25.039 #undef SPDK_CONFIG_ASAN 00:33:25.039 #undef SPDK_CONFIG_AVAHI 00:33:25.039 #undef SPDK_CONFIG_CET 00:33:25.039 #define SPDK_CONFIG_COVERAGE 1 00:33:25.039 #define SPDK_CONFIG_CROSS_PREFIX 00:33:25.039 #define SPDK_CONFIG_CRYPTO 1 00:33:25.039 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:33:25.039 #undef SPDK_CONFIG_CUSTOMOCF 00:33:25.039 #undef SPDK_CONFIG_DAOS 00:33:25.039 #define SPDK_CONFIG_DAOS_DIR 00:33:25.039 #define SPDK_CONFIG_DEBUG 1 00:33:25.039 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:33:25.039 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/dpdk/build 00:33:25.039 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:33:25.039 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:33:25.039 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:33:25.039 #undef SPDK_CONFIG_DPDK_UADK 00:33:25.039 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:33:25.039 #define SPDK_CONFIG_EXAMPLES 1 00:33:25.039 #undef SPDK_CONFIG_FC 00:33:25.039 #define SPDK_CONFIG_FC_PATH 00:33:25.039 #define SPDK_CONFIG_FIO_PLUGIN 1 00:33:25.039 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:33:25.039 #undef SPDK_CONFIG_FUSE 00:33:25.039 #undef SPDK_CONFIG_FUZZER 00:33:25.039 #define SPDK_CONFIG_FUZZER_LIB 00:33:25.039 #undef SPDK_CONFIG_GOLANG 00:33:25.039 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:33:25.039 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:33:25.039 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:33:25.039 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:33:25.039 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:33:25.039 #undef SPDK_CONFIG_HAVE_LIBBSD 00:33:25.039 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:33:25.039 #define SPDK_CONFIG_IDXD 1 00:33:25.039 #define SPDK_CONFIG_IDXD_KERNEL 1 00:33:25.039 #define SPDK_CONFIG_IPSEC_MB 1 00:33:25.039 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib 00:33:25.039 #define 
SPDK_CONFIG_ISAL 1 00:33:25.039 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:33:25.039 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:33:25.039 #define SPDK_CONFIG_LIBDIR 00:33:25.039 #undef SPDK_CONFIG_LTO 00:33:25.039 #define SPDK_CONFIG_MAX_LCORES 128 00:33:25.039 #define SPDK_CONFIG_NVME_CUSE 1 00:33:25.039 #undef SPDK_CONFIG_OCF 00:33:25.039 #define SPDK_CONFIG_OCF_PATH 00:33:25.039 #define SPDK_CONFIG_OPENSSL_PATH 00:33:25.039 #undef SPDK_CONFIG_PGO_CAPTURE 00:33:25.039 #define SPDK_CONFIG_PGO_DIR 00:33:25.039 #undef SPDK_CONFIG_PGO_USE 00:33:25.039 #define SPDK_CONFIG_PREFIX /usr/local 00:33:25.039 #undef SPDK_CONFIG_RAID5F 00:33:25.039 #undef SPDK_CONFIG_RBD 00:33:25.039 #define SPDK_CONFIG_RDMA 1 00:33:25.039 #define SPDK_CONFIG_RDMA_PROV verbs 00:33:25.039 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:33:25.039 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:33:25.039 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:33:25.039 #define SPDK_CONFIG_SHARED 1 00:33:25.039 #undef SPDK_CONFIG_SMA 00:33:25.039 #define SPDK_CONFIG_TESTS 1 00:33:25.039 #undef SPDK_CONFIG_TSAN 00:33:25.039 #define SPDK_CONFIG_UBLK 1 00:33:25.039 #define SPDK_CONFIG_UBSAN 1 00:33:25.039 #undef SPDK_CONFIG_UNIT_TESTS 00:33:25.039 #undef SPDK_CONFIG_URING 00:33:25.039 #define SPDK_CONFIG_URING_PATH 00:33:25.039 #undef SPDK_CONFIG_URING_ZNS 00:33:25.039 #undef SPDK_CONFIG_USDT 00:33:25.039 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:33:25.039 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:33:25.039 #undef SPDK_CONFIG_VFIO_USER 00:33:25.039 #define SPDK_CONFIG_VFIO_USER_DIR 00:33:25.039 #define SPDK_CONFIG_VHOST 1 00:33:25.039 #define SPDK_CONFIG_VIRTIO 1 00:33:25.039 #undef SPDK_CONFIG_VTUNE 00:33:25.039 #define SPDK_CONFIG_VTUNE_DIR 00:33:25.039 #define SPDK_CONFIG_WERROR 1 00:33:25.039 #define SPDK_CONFIG_WPDK_DIR 00:33:25.039 #undef SPDK_CONFIG_XNVME 00:33:25.039 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:33:25.039 02:39:15 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:33:25.039 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:33:25.039 02:39:15 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:25.039 02:39:15 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:25.039 02:39:15 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:25.039 02:39:15 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.039 02:39:15 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.039 02:39:15 reap_unregistered_poller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.039 02:39:15 reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:33:25.039 02:39:15 reap_unregistered_poller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.039 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:33:25.039 02:39:15 reap_unregistered_poller -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:33:25.039 02:39:15 reap_unregistered_poller -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:33:25.039 02:39:15 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:33:25.039 02:39:15 reap_unregistered_poller -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:33:25.039 02:39:15 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:33:25.039 02:39:15 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:33:25.039 02:39:15 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:33:25.039 02:39:15 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:33:25.040 02:39:15 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:33:25.040 02:39:15 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:33:25.040 02:39:15 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:33:25.040 02:39:15 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:33:25.040 02:39:15 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:33:25.040 02:39:15 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:33:25.040 02:39:15 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:33:25.040 02:39:15 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:33:25.040 02:39:15 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:33:25.040 02:39:15 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:33:25.040 02:39:15 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:33:25.040 02:39:15 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:33:25.040 02:39:15 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:33:25.040 02:39:15 reap_unregistered_poller -- pm/common@81 -- # 
[[ ............................... != QEMU ]] 00:33:25.040 02:39:15 reap_unregistered_poller -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:33:25.040 02:39:15 reap_unregistered_poller -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:33:25.040 02:39:15 reap_unregistered_poller -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:33:25.040 02:39:15 reap_unregistered_poller -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 1 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@70 -- # : 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 1 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0 
00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 1 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/crypto-phy-autotest/dpdk/build 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 
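Each ": 0" / "export VAR" pair in this stretch is the xtrace footprint of the usual bash default-setting idiom: the colon no-op forces parameter expansion (so the trace shows only the already-substituted default), and the bare export then appears as its own record. A hedged reconstruction of one such pair, matching the trace rather than copied from autotest_common.sh:

: "${SPDK_TEST_NVMF:=0}"   # traces as ': 0' once the default is substituted
export SPDK_TEST_NVMF      # traces as 'export SPDK_TEST_NVMF'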
00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@126 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 1 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : v22.11.4 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : true 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@150 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@154 -- # : 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export 
SPDK_TEST_XNVME 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@167 -- # : 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@169 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@171 -- # : 0 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:33:25.040 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@200 -- # cat 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:33:25.041 02:39:15 
reap_unregistered_poller -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@263 -- # export valgrind= 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@263 -- # valgrind= 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@269 -- # uname -s 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@272 -- # [[ 1 -eq 1 ]] 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@276 -- # export HUGE_EVEN_ALLOC=yes 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@276 -- # 
HUGE_EVEN_ALLOC=yes 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@279 -- # MAKE=make 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j72 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@299 -- # TEST_MODE= 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@318 -- # [[ -z 2069197 ]] 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@318 -- # kill -0 2069197 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@331 -- # local mount target_dir 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.dJLH8d 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.dJLH8d/tests/interrupt /tmp/spdk.dJLH8d 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@327 -- # df -T 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:33:25.041 
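set_test_storage, traced above, sizes candidate directories by folding df -T output into per-mount associative arrays and then picks the first candidate with enough space. A simplified sketch of the parsing loop, with the field order taken from the read statement shown in the trace (the surrounding size checks are omitted):

declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
  mounts["$mount"]=$source   # e.g. spdk_devtmpfs, /dev/pmem0, spdk_root
  fss["$mount"]=$fs          # devtmpfs, ext2, overlay, tmpfs, ...
  sizes["$mount"]=$size
  uses["$mount"]=$use
  avails["$mount"]=$avail
done < <(df -T | grep -v Filesystem)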
02:39:15 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=893108224 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4391321600 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=82032754688 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=94508572672 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=12475817984 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=47198650368 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=47254286336 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:33:25.041 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=18892214272 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=18901716992 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=9502720 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=47253188608 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=47254286336 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=1097728 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:33:25.042 02:39:15 reap_unregistered_poller -- 
common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=9450852352 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=9450856448 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:33:25.042 * Looking for test storage... 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@368 -- # local target_space new_size 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@372 -- # mount=/ 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@374 -- # target_space=82032754688 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@381 -- # new_size=14690410496 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:33:25.042 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@389 -- # return 0 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # set -o errtrace 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@1687 -- # true 00:33:25.042 02:39:15 
reap_unregistered_poller -- common/autotest_common.sh@1689 -- # xtrace_fd 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:33:25.042 02:39:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:33:25.042 02:39:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:25.042 02:39:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:33:25.042 02:39:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:33:25.042 02:39:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:33:25.042 02:39:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:33:25.042 02:39:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:33:25.042 02:39:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:33:25.042 02:39:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:33:25.042 02:39:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:33:25.042 02:39:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:25.042 02:39:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:33:25.042 02:39:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=2069330 00:33:25.042 02:39:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:25.042 02:39:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:33:25.042 02:39:15 reap_unregistered_poller -- 
interrupt/interrupt_common.sh@26 -- # waitforlisten 2069330 /var/tmp/spdk.sock 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@829 -- # '[' -z 2069330 ']' 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:25.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:25.042 02:39:15 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:33:25.302 [2024-07-11 02:39:15.457672] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:33:25.302 [2024-07-11 02:39:15.457738] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2069330 ] 00:33:25.302 [2024-07-11 02:39:15.594900] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:25.302 [2024-07-11 02:39:15.648154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:25.302 [2024-07-11 02:39:15.648254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:25.302 [2024-07-11 02:39:15.648254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:25.302 [2024-07-11 02:39:15.721499] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
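With interrupt_tgt launched in the background, waitforlisten blocks until the RPC socket answers before the test issues any thread_get_pollers calls. A rough equivalent of that wait, using only what this log shows; the retry budget and sleep interval are assumptions:

pid=2069330
for ((i = 0; i < 100; i++)); do
  kill -0 "$pid" 2>/dev/null || { echo 'interrupt_tgt died during startup' >&2; exit 1; }
  if scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null; then
    break   # socket is up; safe to drive the target over RPC
  fi
  sleep 0.5
done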
00:33:26.240 02:39:16 reap_unregistered_poller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:26.240 02:39:16 reap_unregistered_poller -- common/autotest_common.sh@862 -- # return 0 00:33:26.240 02:39:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:33:26.240 02:39:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:33:26.240 02:39:16 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.240 02:39:16 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:33:26.240 02:39:16 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.240 02:39:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:33:26.240 "name": "app_thread", 00:33:26.240 "id": 1, 00:33:26.240 "active_pollers": [], 00:33:26.240 "timed_pollers": [ 00:33:26.240 { 00:33:26.240 "name": "rpc_subsystem_poll_servers", 00:33:26.240 "id": 1, 00:33:26.240 "state": "waiting", 00:33:26.240 "run_count": 0, 00:33:26.240 "busy_count": 0, 00:33:26.240 "period_ticks": 9200000 00:33:26.240 } 00:33:26.240 ], 00:33:26.240 "paused_pollers": [] 00:33:26.240 }' 00:33:26.240 02:39:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:33:26.240 02:39:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:33:26.240 02:39:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:33:26.240 02:39:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:33:26.240 02:39:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:33:26.240 02:39:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:33:26.240 02:39:16 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:33:26.240 02:39:16 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:33:26.240 02:39:16 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:33:26.240 5000+0 records in 00:33:26.240 5000+0 records out 00:33:26.240 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0374787 s, 273 MB/s 00:33:26.240 02:39:16 reap_unregistered_poller -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:33:26.543 AIO0 00:33:26.543 02:39:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:26.851 02:39:17 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:33:26.851 02:39:17 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:33:26.851 02:39:17 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:33:26.851 02:39:17 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.851 02:39:17 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:33:26.851 02:39:17 reap_unregistered_poller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.110 02:39:17 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:33:27.110 "name": "app_thread", 00:33:27.110 "id": 1, 00:33:27.110 "active_pollers": [], 00:33:27.110 "timed_pollers": [ 00:33:27.110 { 00:33:27.110 "name": "rpc_subsystem_poll_servers", 00:33:27.110 "id": 1, 00:33:27.110 "state": "waiting", 00:33:27.110 "run_count": 0, 00:33:27.110 "busy_count": 0, 00:33:27.110 "period_ticks": 9200000 00:33:27.110 } 00:33:27.110 ], 00:33:27.110 "paused_pollers": [] 00:33:27.110 }' 00:33:27.110 02:39:17 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:33:27.110 02:39:17 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:33:27.110 02:39:17 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:33:27.110 02:39:17 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:33:27.110 02:39:17 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:33:27.110 02:39:17 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:33:27.110 02:39:17 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:33:27.110 02:39:17 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 2069330 00:33:27.110 02:39:17 reap_unregistered_poller -- common/autotest_common.sh@948 -- # '[' -z 2069330 ']' 00:33:27.110 02:39:17 reap_unregistered_poller -- common/autotest_common.sh@952 -- # kill -0 2069330 00:33:27.110 02:39:17 reap_unregistered_poller -- common/autotest_common.sh@953 -- # uname 00:33:27.110 02:39:17 reap_unregistered_poller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:27.110 02:39:17 reap_unregistered_poller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2069330 00:33:27.110 02:39:17 reap_unregistered_poller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:27.110 02:39:17 reap_unregistered_poller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:27.110 02:39:17 reap_unregistered_poller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2069330' 00:33:27.110 killing process with pid 2069330 00:33:27.110 02:39:17 reap_unregistered_poller -- common/autotest_common.sh@967 -- # kill 2069330 00:33:27.110 02:39:17 reap_unregistered_poller -- common/autotest_common.sh@972 -- # wait 2069330 00:33:27.369 02:39:17 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:33:27.369 02:39:17 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:33:27.369 00:33:27.369 real 0m2.538s 00:33:27.369 user 0m1.555s 00:33:27.369 sys 0m0.693s 00:33:27.369 02:39:17 reap_unregistered_poller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:27.369 02:39:17 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:33:27.369 ************************************ 00:33:27.369 END TEST reap_unregistered_poller 00:33:27.369 ************************************ 00:33:27.369 02:39:17 -- common/autotest_common.sh@1142 -- # return 0 00:33:27.369 02:39:17 -- spdk/autotest.sh@198 -- # 
uname -s 00:33:27.369 02:39:17 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:33:27.369 02:39:17 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:33:27.369 02:39:17 -- spdk/autotest.sh@205 -- # [[ 1 -eq 0 ]] 00:33:27.369 02:39:17 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:33:27.369 02:39:17 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:33:27.369 02:39:17 -- spdk/autotest.sh@260 -- # timing_exit lib 00:33:27.369 02:39:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:27.369 02:39:17 -- common/autotest_common.sh@10 -- # set +x 00:33:27.369 02:39:17 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:33:27.369 02:39:17 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:33:27.369 02:39:17 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:33:27.369 02:39:17 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:33:27.369 02:39:17 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:33:27.369 02:39:17 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:33:27.369 02:39:17 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:33:27.369 02:39:17 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:33:27.369 02:39:17 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:33:27.369 02:39:17 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:33:27.369 02:39:17 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:33:27.369 02:39:17 -- spdk/autotest.sh@347 -- # '[' 1 -eq 1 ']' 00:33:27.369 02:39:17 -- spdk/autotest.sh@348 -- # run_test compress_compdev /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:33:27.369 02:39:17 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:27.369 02:39:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:27.369 02:39:17 -- common/autotest_common.sh@10 -- # set +x 00:33:27.628 ************************************ 00:33:27.628 START TEST compress_compdev 00:33:27.628 ************************************ 00:33:27.628 02:39:17 compress_compdev -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:33:27.628 * Looking for test storage... 
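run_test, which autotest.sh invokes here with compress.sh, is responsible for the asterisk banners and the real/user/sys timing block that bracket each test in this log. A shape sketch inferred from the output alone, not from the autotest_common.sh source:

run_test() {
  local name=$1; shift
  echo '************************************'
  echo "START TEST $name"
  echo '************************************'
  time "$@"        # e.g. test/compress/compress.sh compdev
  local rc=$?
  echo '************************************'
  echo "END TEST $name"
  echo '************************************'
  return $rc
}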
00:33:27.628 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:33:27.628 02:39:17 compress_compdev -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:33:27.628 02:39:17 compress_compdev -- nvmf/common.sh@7 -- # uname -s 00:33:27.628 02:39:17 compress_compdev -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:27.628 02:39:17 compress_compdev -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:27.628 02:39:17 compress_compdev -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:27.628 02:39:17 compress_compdev -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:27.628 02:39:17 compress_compdev -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:27.628 02:39:17 compress_compdev -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:27.628 02:39:17 compress_compdev -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:27.628 02:39:17 compress_compdev -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:27.628 02:39:17 compress_compdev -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:27.628 02:39:17 compress_compdev -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:27.628 02:39:17 compress_compdev -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:33:27.628 02:39:17 compress_compdev -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:33:27.628 02:39:17 compress_compdev -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:27.628 02:39:17 compress_compdev -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:27.628 02:39:17 compress_compdev -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:33:27.628 02:39:17 compress_compdev -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:27.628 02:39:17 compress_compdev -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:33:27.628 02:39:17 compress_compdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:27.628 02:39:17 compress_compdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:27.628 02:39:17 compress_compdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:27.628 02:39:17 compress_compdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.628 02:39:17 compress_compdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.628 02:39:17 compress_compdev -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.628 02:39:17 compress_compdev -- paths/export.sh@5 -- # export PATH 00:33:27.628 02:39:17 compress_compdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.628 02:39:17 compress_compdev -- nvmf/common.sh@47 -- # : 0 00:33:27.628 02:39:17 compress_compdev -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:27.628 02:39:17 compress_compdev -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:27.628 02:39:17 compress_compdev -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:27.628 02:39:17 compress_compdev -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:27.628 02:39:17 compress_compdev -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:27.628 02:39:17 compress_compdev -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:27.628 02:39:17 compress_compdev -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:27.628 02:39:17 compress_compdev -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:27.628 02:39:17 compress_compdev -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:27.628 02:39:17 compress_compdev -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:33:27.628 02:39:17 compress_compdev -- compress/compress.sh@82 -- # test_type=compdev 00:33:27.628 02:39:17 compress_compdev -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:33:27.628 02:39:17 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:33:27.628 02:39:17 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=2069679 00:33:27.628 02:39:17 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:27.628 02:39:17 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 2069679 00:33:27.628 02:39:17 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:33:27.628 02:39:17 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 2069679 ']' 00:33:27.628 02:39:17 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:27.628 02:39:17 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:27.628 02:39:17 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:27.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
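The bdevperf command line recorded just above drives the whole compressdev run. Restated with its flags annotated; the annotations reflect standard bdevperf usage, and -C is shown but deliberately left unexplained rather than guessed:

args=(
  -z          # start idle and wait; the test triggers I/O later over RPC
  -q 32       # queue depth of 32
  -o 4096     # 4 KiB I/O size
  -w verify   # write, read back, and compare workload
  -t 3        # 3-second run
  -C          # present in the log; meaning not asserted here
  -m 0x6      # reactors on cores 1 and 2 only, matching the reactor notices below
  -c "$SPDK_DIR/test/compress/dpdk.json"   # config enabling the compressdev accel module
)
"$SPDK_DIR/build/examples/bdevperf" "${args[@]}"

$SPDK_DIR here is illustrative shorthand for the workspace path spelled out in the log.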
00:33:27.628 02:39:17 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:27.628 02:39:17 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:33:27.628 [2024-07-11 02:39:18.004130] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:33:27.628 [2024-07-11 02:39:18.004211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2069679 ] 00:33:27.887 [2024-07-11 02:39:18.147403] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:27.887 [2024-07-11 02:39:18.203004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:27.887 [2024-07-11 02:39:18.203009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:28.821 [2024-07-11 02:39:18.998691] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:33:28.821 02:39:19 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:28.821 02:39:19 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:33:28.821 02:39:19 compress_compdev -- compress/compress.sh@74 -- # create_vols 00:33:28.821 02:39:19 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:28.821 02:39:19 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:33:32.108 [2024-07-11 02:39:22.116154] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x29f72a0 PMD being used: compress_qat 00:33:32.109 02:39:22 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:33:32.109 02:39:22 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:33:32.109 02:39:22 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:32.109 02:39:22 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:33:32.109 02:39:22 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:32.109 02:39:22 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:32.109 02:39:22 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:32.109 02:39:22 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:33:32.369 [ 00:33:32.369 { 00:33:32.369 "name": "Nvme0n1", 00:33:32.369 "aliases": [ 00:33:32.369 "7e7fbd2e-5147-49ab-a8bc-63f57805d046" 00:33:32.369 ], 00:33:32.369 "product_name": "NVMe disk", 00:33:32.369 "block_size": 512, 00:33:32.369 "num_blocks": 7814037168, 00:33:32.369 "uuid": "7e7fbd2e-5147-49ab-a8bc-63f57805d046", 00:33:32.369 "assigned_rate_limits": { 00:33:32.369 "rw_ios_per_sec": 0, 00:33:32.369 "rw_mbytes_per_sec": 0, 00:33:32.369 "r_mbytes_per_sec": 0, 00:33:32.369 "w_mbytes_per_sec": 0 00:33:32.369 }, 00:33:32.369 "claimed": false, 00:33:32.369 "zoned": false, 00:33:32.369 "supported_io_types": { 00:33:32.369 "read": true, 00:33:32.369 "write": true, 00:33:32.369 "unmap": true, 00:33:32.369 "flush": true, 00:33:32.369 "reset": true, 00:33:32.369 "nvme_admin": true, 00:33:32.369 "nvme_io": true, 00:33:32.369 "nvme_io_md": false, 00:33:32.369 "write_zeroes": true, 00:33:32.369 "zcopy": false, 
00:33:32.369 "get_zone_info": false, 00:33:32.369 "zone_management": false, 00:33:32.369 "zone_append": false, 00:33:32.369 "compare": false, 00:33:32.369 "compare_and_write": false, 00:33:32.369 "abort": true, 00:33:32.369 "seek_hole": false, 00:33:32.369 "seek_data": false, 00:33:32.369 "copy": false, 00:33:32.369 "nvme_iov_md": false 00:33:32.369 }, 00:33:32.369 "driver_specific": { 00:33:32.369 "nvme": [ 00:33:32.369 { 00:33:32.369 "pci_address": "0000:1a:00.0", 00:33:32.369 "trid": { 00:33:32.369 "trtype": "PCIe", 00:33:32.369 "traddr": "0000:1a:00.0" 00:33:32.369 }, 00:33:32.369 "ctrlr_data": { 00:33:32.369 "cntlid": 0, 00:33:32.369 "vendor_id": "0x8086", 00:33:32.369 "model_number": "INTEL SSDPE2KX040T8", 00:33:32.369 "serial_number": "BTLJ8303085V4P0DGN", 00:33:32.369 "firmware_revision": "VDV10170", 00:33:32.369 "oacs": { 00:33:32.369 "security": 0, 00:33:32.369 "format": 1, 00:33:32.369 "firmware": 1, 00:33:32.369 "ns_manage": 1 00:33:32.369 }, 00:33:32.369 "multi_ctrlr": false, 00:33:32.369 "ana_reporting": false 00:33:32.369 }, 00:33:32.369 "vs": { 00:33:32.369 "nvme_version": "1.2" 00:33:32.369 }, 00:33:32.369 "ns_data": { 00:33:32.369 "id": 1, 00:33:32.369 "can_share": false 00:33:32.369 } 00:33:32.369 } 00:33:32.369 ], 00:33:32.369 "mp_policy": "active_passive" 00:33:32.369 } 00:33:32.369 } 00:33:32.369 ] 00:33:32.369 02:39:22 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:33:32.369 02:39:22 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:33:32.628 [2024-07-11 02:39:22.870405] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2845870 PMD being used: compress_qat 00:33:34.534 937b6b2d-e0a5-4caf-b64d-4b3ffd8826f0 00:33:34.534 02:39:24 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:33:34.793 177aa416-d49b-4178-8122-1683d43ddabb 00:33:34.793 02:39:25 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:33:34.793 02:39:25 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:33:34.793 02:39:25 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:34.793 02:39:25 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:33:34.793 02:39:25 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:34.793 02:39:25 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:34.793 02:39:25 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:35.052 02:39:25 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:33:35.311 [ 00:33:35.311 { 00:33:35.311 "name": "177aa416-d49b-4178-8122-1683d43ddabb", 00:33:35.311 "aliases": [ 00:33:35.311 "lvs0/lv0" 00:33:35.311 ], 00:33:35.311 "product_name": "Logical Volume", 00:33:35.311 "block_size": 512, 00:33:35.311 "num_blocks": 204800, 00:33:35.311 "uuid": "177aa416-d49b-4178-8122-1683d43ddabb", 00:33:35.311 "assigned_rate_limits": { 00:33:35.311 "rw_ios_per_sec": 0, 00:33:35.311 "rw_mbytes_per_sec": 0, 00:33:35.311 "r_mbytes_per_sec": 0, 00:33:35.311 "w_mbytes_per_sec": 0 00:33:35.311 }, 00:33:35.311 "claimed": false, 00:33:35.311 "zoned": false, 00:33:35.311 "supported_io_types": { 
00:33:35.311 "read": true, 00:33:35.311 "write": true, 00:33:35.311 "unmap": true, 00:33:35.311 "flush": false, 00:33:35.311 "reset": true, 00:33:35.311 "nvme_admin": false, 00:33:35.311 "nvme_io": false, 00:33:35.311 "nvme_io_md": false, 00:33:35.311 "write_zeroes": true, 00:33:35.311 "zcopy": false, 00:33:35.311 "get_zone_info": false, 00:33:35.311 "zone_management": false, 00:33:35.311 "zone_append": false, 00:33:35.311 "compare": false, 00:33:35.311 "compare_and_write": false, 00:33:35.311 "abort": false, 00:33:35.311 "seek_hole": true, 00:33:35.311 "seek_data": true, 00:33:35.311 "copy": false, 00:33:35.311 "nvme_iov_md": false 00:33:35.311 }, 00:33:35.311 "driver_specific": { 00:33:35.311 "lvol": { 00:33:35.311 "lvol_store_uuid": "937b6b2d-e0a5-4caf-b64d-4b3ffd8826f0", 00:33:35.311 "base_bdev": "Nvme0n1", 00:33:35.311 "thin_provision": true, 00:33:35.311 "num_allocated_clusters": 0, 00:33:35.311 "snapshot": false, 00:33:35.311 "clone": false, 00:33:35.311 "esnap_clone": false 00:33:35.311 } 00:33:35.311 } 00:33:35.311 } 00:33:35.311 ] 00:33:35.311 02:39:25 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:33:35.311 02:39:25 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:33:35.311 02:39:25 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:33:35.570 [2024-07-11 02:39:25.772356] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:33:35.570 COMP_lvs0/lv0 00:33:35.570 02:39:25 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:33:35.570 02:39:25 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:33:35.570 02:39:25 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:35.570 02:39:25 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:33:35.570 02:39:25 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:35.570 02:39:25 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:35.570 02:39:25 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:35.828 02:39:26 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:33:36.087 [ 00:33:36.087 { 00:33:36.087 "name": "COMP_lvs0/lv0", 00:33:36.087 "aliases": [ 00:33:36.087 "3d634513-6fef-54cc-8080-18981bfbd1df" 00:33:36.087 ], 00:33:36.087 "product_name": "compress", 00:33:36.087 "block_size": 512, 00:33:36.087 "num_blocks": 200704, 00:33:36.087 "uuid": "3d634513-6fef-54cc-8080-18981bfbd1df", 00:33:36.087 "assigned_rate_limits": { 00:33:36.087 "rw_ios_per_sec": 0, 00:33:36.087 "rw_mbytes_per_sec": 0, 00:33:36.087 "r_mbytes_per_sec": 0, 00:33:36.087 "w_mbytes_per_sec": 0 00:33:36.087 }, 00:33:36.087 "claimed": false, 00:33:36.087 "zoned": false, 00:33:36.087 "supported_io_types": { 00:33:36.087 "read": true, 00:33:36.087 "write": true, 00:33:36.087 "unmap": false, 00:33:36.087 "flush": false, 00:33:36.087 "reset": false, 00:33:36.087 "nvme_admin": false, 00:33:36.087 "nvme_io": false, 00:33:36.087 "nvme_io_md": false, 00:33:36.087 "write_zeroes": true, 00:33:36.087 "zcopy": false, 00:33:36.087 "get_zone_info": false, 00:33:36.087 "zone_management": false, 00:33:36.087 "zone_append": false, 00:33:36.087 
"compare": false, 00:33:36.087 "compare_and_write": false, 00:33:36.087 "abort": false, 00:33:36.087 "seek_hole": false, 00:33:36.087 "seek_data": false, 00:33:36.087 "copy": false, 00:33:36.087 "nvme_iov_md": false 00:33:36.087 }, 00:33:36.087 "driver_specific": { 00:33:36.087 "compress": { 00:33:36.087 "name": "COMP_lvs0/lv0", 00:33:36.087 "base_bdev_name": "177aa416-d49b-4178-8122-1683d43ddabb" 00:33:36.087 } 00:33:36.087 } 00:33:36.087 } 00:33:36.087 ] 00:33:36.087 02:39:26 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:33:36.087 02:39:26 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:33:36.087 [2024-07-11 02:39:26.430784] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fca8c1b15c0 PMD being used: compress_qat 00:33:36.087 [2024-07-11 02:39:26.434015] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x29eb420 PMD being used: compress_qat 00:33:36.087 Running I/O for 3 seconds... 00:33:39.404 00:33:39.404 Latency(us) 00:33:39.404 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.404 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:33:39.404 Verification LBA range: start 0x0 length 0x3100 00:33:39.404 COMP_lvs0/lv0 : 3.01 1667.23 6.51 0.00 0.00 19101.00 211.92 24276.81 00:33:39.404 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:33:39.404 Verification LBA range: start 0x3100 length 0x3100 00:33:39.404 COMP_lvs0/lv0 : 3.01 1738.00 6.79 0.00 0.00 18266.95 203.91 22681.15 00:33:39.404 =================================================================================================================== 00:33:39.404 Total : 3405.23 13.30 0.00 0.00 18675.07 203.91 24276.81 00:33:39.404 0 00:33:39.404 02:39:29 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:33:39.404 02:39:29 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:33:39.404 02:39:29 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:33:39.663 02:39:29 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:33:39.663 02:39:29 compress_compdev -- compress/compress.sh@78 -- # killprocess 2069679 00:33:39.663 02:39:29 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 2069679 ']' 00:33:39.663 02:39:29 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 2069679 00:33:39.663 02:39:29 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:33:39.663 02:39:29 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:39.663 02:39:29 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2069679 00:33:39.663 02:39:29 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:39.663 02:39:29 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:39.663 02:39:29 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2069679' 00:33:39.663 killing process with pid 2069679 00:33:39.663 02:39:29 compress_compdev -- common/autotest_common.sh@967 -- # kill 2069679 00:33:39.663 Received shutdown signal, test time was about 3.000000 seconds 00:33:39.663 00:33:39.663 Latency(us) 00:33:39.663 Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:33:39.663 =================================================================================================================== 00:33:39.663 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:39.663 02:39:29 compress_compdev -- common/autotest_common.sh@972 -- # wait 2069679 00:33:43.854 02:39:33 compress_compdev -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:33:43.854 02:39:33 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:33:43.854 02:39:33 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=2071759 00:33:43.854 02:39:33 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:43.854 02:39:33 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:33:43.854 02:39:33 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 2071759 00:33:43.854 02:39:33 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 2071759 ']' 00:33:43.854 02:39:33 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:43.854 02:39:33 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:43.854 02:39:33 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:43.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:43.854 02:39:33 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:43.854 02:39:33 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:33:43.854 [2024-07-11 02:39:33.925588] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
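The second bdevperf pass starting up above differs from the first only in the trailing 512, which create_vols later forwards to bdev_compress_create as its -l chunk size. A sketch of the run_bdevperf flow, reconstructed from the compress.sh line numbers visible in this xtrace (66-78), not a verbatim copy of the script; waitforlisten, killprocess and error_cleanup are the harness helpers seen in the trace, and $SPDK_DIR again stands in for the workspace path:

    run_bdevperf() {
        local qd=$1 io_size=$2 runtime=$3 chunk=$4          # $4 may be empty
        "$SPDK_DIR/build/examples/bdevperf" -z -q "$qd" -o "$io_size" -w verify \
            -t "$runtime" -C -m 0x6 -c "$SPDK_DIR/test/compress/dpdk.json" &
        bdevperf_pid=$!                                     # compress.sh@71
        trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT
        waitforlisten "$bdevperf_pid"                       # compress.sh@73
        create_vols "$chunk"                                # compress.sh@74
        "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" perform_tests   # compress.sh@75
        destroy_vols                                        # compress.sh@76
        trap - SIGINT SIGTERM EXIT
        killprocess "$bdevperf_pid"                         # compress.sh@78
    }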
00:33:43.854 [2024-07-11 02:39:33.925650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2071759 ] 00:33:43.854 [2024-07-11 02:39:34.053845] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:43.854 [2024-07-11 02:39:34.121375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:43.854 [2024-07-11 02:39:34.121380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:44.790 [2024-07-11 02:39:34.945218] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:33:44.790 02:39:35 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:44.790 02:39:35 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:33:44.790 02:39:35 compress_compdev -- compress/compress.sh@74 -- # create_vols 512 00:33:44.790 02:39:35 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:44.790 02:39:35 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:33:48.080 [2024-07-11 02:39:38.146415] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x137a2a0 PMD being used: compress_qat 00:33:48.080 02:39:38 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:33:48.080 02:39:38 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:33:48.080 02:39:38 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:48.080 02:39:38 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:33:48.080 02:39:38 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:48.080 02:39:38 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:48.080 02:39:38 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:48.080 02:39:38 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:33:48.366 [ 00:33:48.366 { 00:33:48.366 "name": "Nvme0n1", 00:33:48.366 "aliases": [ 00:33:48.366 "d707d09f-6e06-4c99-a9dc-94b3740cb951" 00:33:48.366 ], 00:33:48.366 "product_name": "NVMe disk", 00:33:48.366 "block_size": 512, 00:33:48.366 "num_blocks": 7814037168, 00:33:48.366 "uuid": "d707d09f-6e06-4c99-a9dc-94b3740cb951", 00:33:48.366 "assigned_rate_limits": { 00:33:48.366 "rw_ios_per_sec": 0, 00:33:48.366 "rw_mbytes_per_sec": 0, 00:33:48.366 "r_mbytes_per_sec": 0, 00:33:48.366 "w_mbytes_per_sec": 0 00:33:48.366 }, 00:33:48.366 "claimed": false, 00:33:48.366 "zoned": false, 00:33:48.366 "supported_io_types": { 00:33:48.366 "read": true, 00:33:48.366 "write": true, 00:33:48.366 "unmap": true, 00:33:48.366 "flush": true, 00:33:48.366 "reset": true, 00:33:48.366 "nvme_admin": true, 00:33:48.366 "nvme_io": true, 00:33:48.366 "nvme_io_md": false, 00:33:48.366 "write_zeroes": true, 00:33:48.366 "zcopy": false, 00:33:48.366 "get_zone_info": false, 00:33:48.366 "zone_management": false, 00:33:48.366 "zone_append": false, 00:33:48.366 "compare": false, 00:33:48.366 "compare_and_write": false, 00:33:48.366 "abort": true, 00:33:48.366 "seek_hole": false, 00:33:48.366 "seek_data": false, 00:33:48.366 
"copy": false, 00:33:48.366 "nvme_iov_md": false 00:33:48.366 }, 00:33:48.366 "driver_specific": { 00:33:48.366 "nvme": [ 00:33:48.366 { 00:33:48.366 "pci_address": "0000:1a:00.0", 00:33:48.366 "trid": { 00:33:48.366 "trtype": "PCIe", 00:33:48.366 "traddr": "0000:1a:00.0" 00:33:48.366 }, 00:33:48.366 "ctrlr_data": { 00:33:48.366 "cntlid": 0, 00:33:48.366 "vendor_id": "0x8086", 00:33:48.366 "model_number": "INTEL SSDPE2KX040T8", 00:33:48.366 "serial_number": "BTLJ8303085V4P0DGN", 00:33:48.366 "firmware_revision": "VDV10170", 00:33:48.366 "oacs": { 00:33:48.366 "security": 0, 00:33:48.366 "format": 1, 00:33:48.366 "firmware": 1, 00:33:48.366 "ns_manage": 1 00:33:48.366 }, 00:33:48.366 "multi_ctrlr": false, 00:33:48.366 "ana_reporting": false 00:33:48.366 }, 00:33:48.366 "vs": { 00:33:48.366 "nvme_version": "1.2" 00:33:48.366 }, 00:33:48.366 "ns_data": { 00:33:48.366 "id": 1, 00:33:48.366 "can_share": false 00:33:48.366 } 00:33:48.366 } 00:33:48.366 ], 00:33:48.366 "mp_policy": "active_passive" 00:33:48.366 } 00:33:48.366 } 00:33:48.366 ] 00:33:48.366 02:39:38 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:33:48.366 02:39:38 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:33:48.625 [2024-07-11 02:39:38.945357] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x11c8870 PMD being used: compress_qat 00:33:50.531 5255576f-518b-43e0-8d7c-c29f77e08555 00:33:50.531 02:39:40 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:33:50.790 f6f8f2aa-a70d-4959-8f13-23b4294f8cdf 00:33:50.790 02:39:41 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:33:50.790 02:39:41 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:33:50.790 02:39:41 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:50.790 02:39:41 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:33:50.790 02:39:41 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:50.790 02:39:41 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:50.790 02:39:41 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:51.049 02:39:41 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:33:51.309 [ 00:33:51.309 { 00:33:51.309 "name": "f6f8f2aa-a70d-4959-8f13-23b4294f8cdf", 00:33:51.309 "aliases": [ 00:33:51.309 "lvs0/lv0" 00:33:51.309 ], 00:33:51.309 "product_name": "Logical Volume", 00:33:51.309 "block_size": 512, 00:33:51.309 "num_blocks": 204800, 00:33:51.309 "uuid": "f6f8f2aa-a70d-4959-8f13-23b4294f8cdf", 00:33:51.309 "assigned_rate_limits": { 00:33:51.309 "rw_ios_per_sec": 0, 00:33:51.309 "rw_mbytes_per_sec": 0, 00:33:51.309 "r_mbytes_per_sec": 0, 00:33:51.309 "w_mbytes_per_sec": 0 00:33:51.309 }, 00:33:51.309 "claimed": false, 00:33:51.309 "zoned": false, 00:33:51.309 "supported_io_types": { 00:33:51.309 "read": true, 00:33:51.309 "write": true, 00:33:51.309 "unmap": true, 00:33:51.309 "flush": false, 00:33:51.309 "reset": true, 00:33:51.309 "nvme_admin": false, 00:33:51.309 "nvme_io": false, 00:33:51.309 "nvme_io_md": false, 00:33:51.309 "write_zeroes": true, 00:33:51.309 
"zcopy": false, 00:33:51.309 "get_zone_info": false, 00:33:51.309 "zone_management": false, 00:33:51.309 "zone_append": false, 00:33:51.309 "compare": false, 00:33:51.309 "compare_and_write": false, 00:33:51.309 "abort": false, 00:33:51.309 "seek_hole": true, 00:33:51.309 "seek_data": true, 00:33:51.309 "copy": false, 00:33:51.309 "nvme_iov_md": false 00:33:51.309 }, 00:33:51.309 "driver_specific": { 00:33:51.309 "lvol": { 00:33:51.309 "lvol_store_uuid": "5255576f-518b-43e0-8d7c-c29f77e08555", 00:33:51.309 "base_bdev": "Nvme0n1", 00:33:51.309 "thin_provision": true, 00:33:51.309 "num_allocated_clusters": 0, 00:33:51.309 "snapshot": false, 00:33:51.309 "clone": false, 00:33:51.309 "esnap_clone": false 00:33:51.309 } 00:33:51.309 } 00:33:51.309 } 00:33:51.309 ] 00:33:51.309 02:39:41 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:33:51.309 02:39:41 compress_compdev -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:33:51.309 02:39:41 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:33:51.568 [2024-07-11 02:39:41.898066] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:33:51.568 COMP_lvs0/lv0 00:33:51.568 02:39:41 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:33:51.568 02:39:41 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:33:51.568 02:39:41 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:51.568 02:39:41 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:33:51.568 02:39:41 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:51.568 02:39:41 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:51.568 02:39:41 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:51.828 02:39:42 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:33:52.087 [ 00:33:52.087 { 00:33:52.087 "name": "COMP_lvs0/lv0", 00:33:52.087 "aliases": [ 00:33:52.087 "79013f75-fbb8-5978-98cc-c2812d1d6233" 00:33:52.087 ], 00:33:52.087 "product_name": "compress", 00:33:52.087 "block_size": 512, 00:33:52.087 "num_blocks": 200704, 00:33:52.087 "uuid": "79013f75-fbb8-5978-98cc-c2812d1d6233", 00:33:52.087 "assigned_rate_limits": { 00:33:52.087 "rw_ios_per_sec": 0, 00:33:52.087 "rw_mbytes_per_sec": 0, 00:33:52.087 "r_mbytes_per_sec": 0, 00:33:52.087 "w_mbytes_per_sec": 0 00:33:52.087 }, 00:33:52.087 "claimed": false, 00:33:52.087 "zoned": false, 00:33:52.087 "supported_io_types": { 00:33:52.087 "read": true, 00:33:52.087 "write": true, 00:33:52.087 "unmap": false, 00:33:52.087 "flush": false, 00:33:52.087 "reset": false, 00:33:52.087 "nvme_admin": false, 00:33:52.087 "nvme_io": false, 00:33:52.087 "nvme_io_md": false, 00:33:52.087 "write_zeroes": true, 00:33:52.087 "zcopy": false, 00:33:52.087 "get_zone_info": false, 00:33:52.087 "zone_management": false, 00:33:52.087 "zone_append": false, 00:33:52.087 "compare": false, 00:33:52.087 "compare_and_write": false, 00:33:52.087 "abort": false, 00:33:52.087 "seek_hole": false, 00:33:52.087 "seek_data": false, 00:33:52.087 "copy": false, 00:33:52.087 "nvme_iov_md": false 00:33:52.087 }, 00:33:52.087 "driver_specific": { 00:33:52.087 
"compress": { 00:33:52.087 "name": "COMP_lvs0/lv0", 00:33:52.087 "base_bdev_name": "f6f8f2aa-a70d-4959-8f13-23b4294f8cdf" 00:33:52.087 } 00:33:52.087 } 00:33:52.087 } 00:33:52.087 ] 00:33:52.087 02:39:42 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:33:52.087 02:39:42 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:33:52.347 [2024-07-11 02:39:42.556868] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fc5941b15c0 PMD being used: compress_qat 00:33:52.347 [2024-07-11 02:39:42.560213] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x136e490 PMD being used: compress_qat 00:33:52.347 Running I/O for 3 seconds... 00:33:55.638 00:33:55.638 Latency(us) 00:33:55.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:55.638 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:33:55.638 Verification LBA range: start 0x0 length 0x3100 00:33:55.638 COMP_lvs0/lv0 : 3.01 1674.79 6.54 0.00 0.00 19000.39 252.88 21427.42 00:33:55.638 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:33:55.638 Verification LBA range: start 0x3100 length 0x3100 00:33:55.638 COMP_lvs0/lv0 : 3.01 1744.42 6.81 0.00 0.00 18227.07 208.36 22111.28 00:33:55.638 =================================================================================================================== 00:33:55.638 Total : 3419.21 13.36 0.00 0.00 18606.03 208.36 22111.28 00:33:55.638 0 00:33:55.638 02:39:45 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:33:55.638 02:39:45 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:33:55.638 02:39:45 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:33:55.897 02:39:46 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:33:55.897 02:39:46 compress_compdev -- compress/compress.sh@78 -- # killprocess 2071759 00:33:55.897 02:39:46 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 2071759 ']' 00:33:55.897 02:39:46 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 2071759 00:33:55.897 02:39:46 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:33:55.897 02:39:46 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:55.897 02:39:46 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2071759 00:33:55.897 02:39:46 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:55.897 02:39:46 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:55.897 02:39:46 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2071759' 00:33:55.897 killing process with pid 2071759 00:33:55.897 02:39:46 compress_compdev -- common/autotest_common.sh@967 -- # kill 2071759 00:33:55.897 Received shutdown signal, test time was about 3.000000 seconds 00:33:55.897 00:33:55.897 Latency(us) 00:33:55.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:55.897 =================================================================================================================== 00:33:55.897 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:55.897 02:39:46 compress_compdev -- 
common/autotest_common.sh@972 -- # wait 2071759 00:34:00.092 02:39:50 compress_compdev -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:34:00.092 02:39:50 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:34:00.092 02:39:50 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=2073726 00:34:00.092 02:39:50 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:34:00.092 02:39:50 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:00.092 02:39:50 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 2073726 00:34:00.092 02:39:50 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 2073726 ']' 00:34:00.092 02:39:50 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:00.092 02:39:50 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:00.092 02:39:50 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:00.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:00.092 02:39:50 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:00.092 02:39:50 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:34:00.092 [2024-07-11 02:39:50.123750] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:34:00.092 [2024-07-11 02:39:50.123829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2073726 ] 00:34:00.092 [2024-07-11 02:39:50.267768] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:00.092 [2024-07-11 02:39:50.323346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:00.092 [2024-07-11 02:39:50.323351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:01.030 [2024-07-11 02:39:51.150112] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:34:01.030 02:39:51 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:01.030 02:39:51 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:34:01.030 02:39:51 compress_compdev -- compress/compress.sh@74 -- # create_vols 4096 00:34:01.030 02:39:51 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:01.030 02:39:51 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:34:04.373 [2024-07-11 02:39:54.354590] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x193b2a0 PMD being used: compress_qat 00:34:04.373 02:39:54 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:34:04.373 02:39:54 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:34:04.373 02:39:54 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:04.373 02:39:54 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:34:04.373 02:39:54 
compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:04.373 02:39:54 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:04.374 02:39:54 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:04.374 02:39:54 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:34:04.634 [ 00:34:04.634 { 00:34:04.634 "name": "Nvme0n1", 00:34:04.634 "aliases": [ 00:34:04.634 "14c3044b-a49b-41af-9278-86837d99c6e5" 00:34:04.634 ], 00:34:04.634 "product_name": "NVMe disk", 00:34:04.634 "block_size": 512, 00:34:04.634 "num_blocks": 7814037168, 00:34:04.634 "uuid": "14c3044b-a49b-41af-9278-86837d99c6e5", 00:34:04.634 "assigned_rate_limits": { 00:34:04.634 "rw_ios_per_sec": 0, 00:34:04.634 "rw_mbytes_per_sec": 0, 00:34:04.634 "r_mbytes_per_sec": 0, 00:34:04.634 "w_mbytes_per_sec": 0 00:34:04.634 }, 00:34:04.634 "claimed": false, 00:34:04.634 "zoned": false, 00:34:04.634 "supported_io_types": { 00:34:04.634 "read": true, 00:34:04.634 "write": true, 00:34:04.634 "unmap": true, 00:34:04.634 "flush": true, 00:34:04.634 "reset": true, 00:34:04.634 "nvme_admin": true, 00:34:04.634 "nvme_io": true, 00:34:04.634 "nvme_io_md": false, 00:34:04.634 "write_zeroes": true, 00:34:04.634 "zcopy": false, 00:34:04.634 "get_zone_info": false, 00:34:04.634 "zone_management": false, 00:34:04.634 "zone_append": false, 00:34:04.634 "compare": false, 00:34:04.634 "compare_and_write": false, 00:34:04.634 "abort": true, 00:34:04.634 "seek_hole": false, 00:34:04.634 "seek_data": false, 00:34:04.634 "copy": false, 00:34:04.634 "nvme_iov_md": false 00:34:04.634 }, 00:34:04.634 "driver_specific": { 00:34:04.634 "nvme": [ 00:34:04.634 { 00:34:04.634 "pci_address": "0000:1a:00.0", 00:34:04.634 "trid": { 00:34:04.634 "trtype": "PCIe", 00:34:04.634 "traddr": "0000:1a:00.0" 00:34:04.634 }, 00:34:04.634 "ctrlr_data": { 00:34:04.634 "cntlid": 0, 00:34:04.634 "vendor_id": "0x8086", 00:34:04.634 "model_number": "INTEL SSDPE2KX040T8", 00:34:04.634 "serial_number": "BTLJ8303085V4P0DGN", 00:34:04.634 "firmware_revision": "VDV10170", 00:34:04.634 "oacs": { 00:34:04.634 "security": 0, 00:34:04.634 "format": 1, 00:34:04.634 "firmware": 1, 00:34:04.634 "ns_manage": 1 00:34:04.634 }, 00:34:04.634 "multi_ctrlr": false, 00:34:04.634 "ana_reporting": false 00:34:04.634 }, 00:34:04.634 "vs": { 00:34:04.634 "nvme_version": "1.2" 00:34:04.634 }, 00:34:04.634 "ns_data": { 00:34:04.634 "id": 1, 00:34:04.634 "can_share": false 00:34:04.634 } 00:34:04.634 } 00:34:04.634 ], 00:34:04.634 "mp_policy": "active_passive" 00:34:04.634 } 00:34:04.634 } 00:34:04.634 ] 00:34:04.634 02:39:54 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:34:04.634 02:39:54 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:34:04.894 [2024-07-11 02:39:55.153465] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1789870 PMD being used: compress_qat 00:34:06.801 6f127eba-622f-41ca-913a-e6eeaea5cf52 00:34:06.801 02:39:57 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:34:07.060 69adb66c-9b42-4e25-acd8-f11646e5ebc6 00:34:07.060 02:39:57 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 
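At this point create_vols has loaded the NVMe subsystem config, confirmed Nvme0n1, and created the lvol store plus a 100 MiB thin-provisioned lvol (the 204800 blocks of 512 bytes in the JSON dumps); waitforbdev now polls bdev_get_bdevs until lvs0/lv0 appears, before the compress bdev is stacked on top with a 4096-byte chunk. A sketch of create_vols reconstructed from compress.sh lines 34-46 in this trace; the pipe from gen_nvme.sh into load_subsystem_config is inferred from the paired @34 entries, and rpc_py is scripts/rpc.py (compress.sh@17):

    create_vols() {
        local lb_size=$1                                  # compress chunk, may be empty
        "$SPDK_DIR/scripts/gen_nvme.sh" | $rpc_py load_subsystem_config     # @34
        waitforbdev Nvme0n1                               # @35
        $rpc_py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0   # @37
        $rpc_py bdev_lvol_create -t -l lvs0 lv0 100       # @38: 100 MiB, thin
        waitforbdev lvs0/lv0                              # @39
        if [ -z "$lb_size" ]; then
            $rpc_py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem                # @42
        else
            $rpc_py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l "$lb_size"  # @44
        fi
        waitforbdev COMP_lvs0/lv0                         # @46
    }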
00:34:07.060 02:39:57 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 
00:34:07.060 02:39:57 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 
00:34:07.060 02:39:57 compress_compdev -- common/autotest_common.sh@899 -- # local i 
00:34:07.060 02:39:57 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 
00:34:07.060 02:39:57 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 
00:34:07.060 02:39:57 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 
00:34:07.320 02:39:57 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 
00:34:07.580 [ 
00:34:07.580 { 
00:34:07.580 "name": "69adb66c-9b42-4e25-acd8-f11646e5ebc6", 
00:34:07.580 "aliases": [ 
00:34:07.580 "lvs0/lv0" 
00:34:07.580 ], 
00:34:07.580 "product_name": "Logical Volume", 
00:34:07.580 "block_size": 512, 
00:34:07.580 "num_blocks": 204800, 
00:34:07.580 "uuid": "69adb66c-9b42-4e25-acd8-f11646e5ebc6", 
00:34:07.580 "assigned_rate_limits": { 
00:34:07.580 "rw_ios_per_sec": 0, 
00:34:07.580 "rw_mbytes_per_sec": 0, 
00:34:07.580 "r_mbytes_per_sec": 0, 
00:34:07.580 "w_mbytes_per_sec": 0 
00:34:07.580 }, 
00:34:07.580 "claimed": false, 
00:34:07.580 "zoned": false, 
00:34:07.580 "supported_io_types": { 
00:34:07.580 "read": true, 
00:34:07.580 "write": true, 
00:34:07.580 "unmap": true, 
00:34:07.580 "flush": false, 
00:34:07.580 "reset": true, 
00:34:07.580 "nvme_admin": false, 
00:34:07.580 "nvme_io": false, 
00:34:07.580 "nvme_io_md": false, 
00:34:07.580 "write_zeroes": true, 
00:34:07.580 "zcopy": false, 
00:34:07.580 "get_zone_info": false, 
00:34:07.580 "zone_management": false, 
00:34:07.580 "zone_append": false, 
00:34:07.580 "compare": false, 
00:34:07.580 "compare_and_write": false, 
00:34:07.580 "abort": false, 
00:34:07.580 "seek_hole": true, 
00:34:07.580 "seek_data": true, 
00:34:07.580 "copy": false, 
00:34:07.580 "nvme_iov_md": false 
00:34:07.580 }, 
00:34:07.580 "driver_specific": { 
00:34:07.580 "lvol": { 
00:34:07.580 "lvol_store_uuid": "6f127eba-622f-41ca-913a-e6eeaea5cf52", 
00:34:07.580 "base_bdev": "Nvme0n1", 
00:34:07.580 "thin_provision": true, 
00:34:07.580 "num_allocated_clusters": 0, 
00:34:07.580 "snapshot": false, 
00:34:07.580 "clone": false, 
00:34:07.580 "esnap_clone": false 
00:34:07.580 } 
00:34:07.580 } 
00:34:07.580 } 
00:34:07.580 ] 
00:34:07.580 02:39:57 compress_compdev -- common/autotest_common.sh@905 -- # return 0 
00:34:07.580 02:39:57 compress_compdev -- compress/compress.sh@41 -- # '[' -z 4096 ']' 
00:34:07.580 02:39:57 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 
00:34:07.839 [2024-07-11 02:39:58.036595] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 
00:34:07.839 COMP_lvs0/lv0 
00:34:07.839 02:39:58 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 
00:34:07.839 02:39:58 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 
00:34:07.839 02:39:58 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 
00:34:07.839 02:39:58 compress_compdev -- common/autotest_common.sh@899 -- # local i 
00:34:07.839 02:39:58 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 
00:34:07.839 02:39:58 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 
00:34:07.839 02:39:58 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 
00:34:08.098 02:39:58 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 
00:34:08.098 [ 
00:34:08.098 { 
00:34:08.098 "name": "COMP_lvs0/lv0", 
00:34:08.098 "aliases": [ 
00:34:08.098 "8d551b5d-54ae-5333-8095-9970a9204909" 
00:34:08.098 ], 
00:34:08.098 "product_name": "compress", 
00:34:08.098 "block_size": 4096, 
00:34:08.098 "num_blocks": 25088, 
00:34:08.098 "uuid": "8d551b5d-54ae-5333-8095-9970a9204909", 
00:34:08.098 "assigned_rate_limits": { 
00:34:08.098 "rw_ios_per_sec": 0, 
00:34:08.098 "rw_mbytes_per_sec": 0, 
00:34:08.098 "r_mbytes_per_sec": 0, 
00:34:08.098 "w_mbytes_per_sec": 0 
00:34:08.098 }, 
00:34:08.098 "claimed": false, 
00:34:08.098 "zoned": false, 
00:34:08.098 "supported_io_types": { 
00:34:08.098 "read": true, 
00:34:08.098 "write": true, 
00:34:08.098 "unmap": false, 
00:34:08.098 "flush": false, 
00:34:08.098 "reset": false, 
00:34:08.098 "nvme_admin": false, 
00:34:08.098 "nvme_io": false, 
00:34:08.098 "nvme_io_md": false, 
00:34:08.098 "write_zeroes": true, 
00:34:08.098 "zcopy": false, 
00:34:08.098 "get_zone_info": false, 
00:34:08.098 "zone_management": false, 
00:34:08.098 "zone_append": false, 
00:34:08.098 "compare": false, 
00:34:08.098 "compare_and_write": false, 
00:34:08.098 "abort": false, 
00:34:08.098 "seek_hole": false, 
00:34:08.098 "seek_data": false, 
00:34:08.098 "copy": false, 
00:34:08.098 "nvme_iov_md": false 
00:34:08.098 }, 
00:34:08.098 "driver_specific": { 
00:34:08.098 "compress": { 
00:34:08.098 "name": "COMP_lvs0/lv0", 
00:34:08.098 "base_bdev_name": "69adb66c-9b42-4e25-acd8-f11646e5ebc6" 
00:34:08.098 } 
00:34:08.098 } 
00:34:08.098 } 
00:34:08.098 ] 
00:34:08.098 02:39:58 compress_compdev -- common/autotest_common.sh@905 -- # return 0 
00:34:08.098 02:39:58 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 
00:34:08.357 [2024-07-11 02:39:58.558974] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f20e81b15c0 PMD being used: compress_qat 
00:34:08.357 [2024-07-11 02:39:58.562252] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x192f490 PMD being used: compress_qat 
00:34:08.357 Running I/O for 3 seconds... 
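The results that follow report one bdevperf job per reactor (core masks 0x2 and 0x4); the MiB/s column is simply IOPS scaled by the 4 KiB IO size, which can be checked against the Total row below:

    # 3388.53 IOPS x 4096 B / 2^20 B per MiB = 13.24 MiB/s, matching the Total row.
    awk 'BEGIN { printf "%.2f MiB/s\n", 3388.53 * 4096 / 1048576 }'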
00:34:11.644 00:34:11.644 Latency(us) 00:34:11.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:11.644 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:34:11.644 Verification LBA range: start 0x0 length 0x3100 00:34:11.644 COMP_lvs0/lv0 : 3.01 1660.73 6.49 0.00 0.00 19178.85 265.35 24846.69 00:34:11.644 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:34:11.644 Verification LBA range: start 0x3100 length 0x3100 00:34:11.644 COMP_lvs0/lv0 : 3.01 1727.81 6.75 0.00 0.00 18392.24 243.98 24048.86 00:34:11.644 =================================================================================================================== 00:34:11.644 Total : 3388.53 13.24 0.00 0.00 18777.69 243.98 24846.69 00:34:11.644 0 00:34:11.644 02:40:01 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:34:11.644 02:40:01 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:34:11.644 02:40:01 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:34:11.904 02:40:02 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:34:11.904 02:40:02 compress_compdev -- compress/compress.sh@78 -- # killprocess 2073726 00:34:11.904 02:40:02 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 2073726 ']' 00:34:11.904 02:40:02 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 2073726 00:34:11.904 02:40:02 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:34:11.904 02:40:02 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:11.904 02:40:02 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2073726 00:34:11.904 02:40:02 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:11.904 02:40:02 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:11.904 02:40:02 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2073726' 00:34:11.904 killing process with pid 2073726 00:34:11.904 02:40:02 compress_compdev -- common/autotest_common.sh@967 -- # kill 2073726 00:34:11.904 Received shutdown signal, test time was about 3.000000 seconds 00:34:11.904 00:34:11.904 Latency(us) 00:34:11.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:11.904 =================================================================================================================== 00:34:11.904 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:11.904 02:40:02 compress_compdev -- common/autotest_common.sh@972 -- # wait 2073726 00:34:16.089 02:40:06 compress_compdev -- compress/compress.sh@89 -- # run_bdevio 00:34:16.089 02:40:06 compress_compdev -- compress/compress.sh@50 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:34:16.089 02:40:06 compress_compdev -- compress/compress.sh@55 -- # bdevio_pid=2075830 00:34:16.089 02:40:06 compress_compdev -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:16.090 02:40:06 compress_compdev -- compress/compress.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json -w 00:34:16.090 02:40:06 compress_compdev -- compress/compress.sh@57 -- # waitforlisten 
2075830 00:34:16.090 02:40:06 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 2075830 ']' 00:34:16.090 02:40:06 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:16.090 02:40:06 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:16.090 02:40:06 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:16.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:16.090 02:40:06 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:16.090 02:40:06 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:34:16.090 [2024-07-11 02:40:06.085032] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:34:16.090 [2024-07-11 02:40:06.085102] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2075830 ] 00:34:16.090 [2024-07-11 02:40:06.220872] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:16.090 [2024-07-11 02:40:06.272941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:16.090 [2024-07-11 02:40:06.273042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.090 [2024-07-11 02:40:06.273042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:16.658 [2024-07-11 02:40:06.922566] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:34:16.658 02:40:07 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:16.658 02:40:07 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:34:16.658 02:40:07 compress_compdev -- compress/compress.sh@58 -- # create_vols 00:34:16.658 02:40:07 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:16.658 02:40:07 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:34:19.947 [2024-07-11 02:40:10.140521] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xfdfc70 PMD being used: compress_qat 00:34:19.947 02:40:10 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:34:19.947 02:40:10 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:34:19.947 02:40:10 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:19.947 02:40:10 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:34:19.947 02:40:10 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:19.947 02:40:10 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:19.947 02:40:10 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:20.207 02:40:10 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:34:20.467 [ 00:34:20.467 { 00:34:20.467 "name": "Nvme0n1", 00:34:20.467 "aliases": [ 00:34:20.467 "22fcea22-a5e2-4e68-a3ca-0a1d16b64261" 00:34:20.467 ], 00:34:20.467 "product_name": "NVMe disk", 00:34:20.467 "block_size": 512, 
00:34:20.467 "num_blocks": 7814037168, 00:34:20.467 "uuid": "22fcea22-a5e2-4e68-a3ca-0a1d16b64261", 00:34:20.467 "assigned_rate_limits": { 00:34:20.467 "rw_ios_per_sec": 0, 00:34:20.467 "rw_mbytes_per_sec": 0, 00:34:20.467 "r_mbytes_per_sec": 0, 00:34:20.467 "w_mbytes_per_sec": 0 00:34:20.467 }, 00:34:20.467 "claimed": false, 00:34:20.467 "zoned": false, 00:34:20.467 "supported_io_types": { 00:34:20.467 "read": true, 00:34:20.467 "write": true, 00:34:20.467 "unmap": true, 00:34:20.467 "flush": true, 00:34:20.467 "reset": true, 00:34:20.467 "nvme_admin": true, 00:34:20.467 "nvme_io": true, 00:34:20.467 "nvme_io_md": false, 00:34:20.467 "write_zeroes": true, 00:34:20.467 "zcopy": false, 00:34:20.467 "get_zone_info": false, 00:34:20.467 "zone_management": false, 00:34:20.467 "zone_append": false, 00:34:20.467 "compare": false, 00:34:20.467 "compare_and_write": false, 00:34:20.467 "abort": true, 00:34:20.467 "seek_hole": false, 00:34:20.467 "seek_data": false, 00:34:20.467 "copy": false, 00:34:20.467 "nvme_iov_md": false 00:34:20.467 }, 00:34:20.467 "driver_specific": { 00:34:20.467 "nvme": [ 00:34:20.467 { 00:34:20.467 "pci_address": "0000:1a:00.0", 00:34:20.467 "trid": { 00:34:20.467 "trtype": "PCIe", 00:34:20.467 "traddr": "0000:1a:00.0" 00:34:20.467 }, 00:34:20.467 "ctrlr_data": { 00:34:20.467 "cntlid": 0, 00:34:20.467 "vendor_id": "0x8086", 00:34:20.467 "model_number": "INTEL SSDPE2KX040T8", 00:34:20.467 "serial_number": "BTLJ8303085V4P0DGN", 00:34:20.467 "firmware_revision": "VDV10170", 00:34:20.467 "oacs": { 00:34:20.467 "security": 0, 00:34:20.467 "format": 1, 00:34:20.467 "firmware": 1, 00:34:20.467 "ns_manage": 1 00:34:20.467 }, 00:34:20.467 "multi_ctrlr": false, 00:34:20.467 "ana_reporting": false 00:34:20.467 }, 00:34:20.467 "vs": { 00:34:20.467 "nvme_version": "1.2" 00:34:20.467 }, 00:34:20.467 "ns_data": { 00:34:20.467 "id": 1, 00:34:20.467 "can_share": false 00:34:20.467 } 00:34:20.467 } 00:34:20.467 ], 00:34:20.467 "mp_policy": "active_passive" 00:34:20.467 } 00:34:20.467 } 00:34:20.467 ] 00:34:20.467 02:40:10 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:34:20.467 02:40:10 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:34:20.726 [2024-07-11 02:40:10.927297] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xe2dd20 PMD being used: compress_qat 00:34:22.655 7b22ff6d-c52b-4513-8699-832738f68102 00:34:22.655 02:40:12 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:34:22.655 5eb3e561-be9a-4838-b5ba-68143684e72d 00:34:22.655 02:40:12 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:34:22.655 02:40:12 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:34:22.655 02:40:12 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:22.655 02:40:12 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:34:22.655 02:40:12 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:22.655 02:40:12 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:22.655 02:40:12 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:22.914 02:40:13 compress_compdev -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:34:22.914 [ 00:34:22.914 { 00:34:22.914 "name": "5eb3e561-be9a-4838-b5ba-68143684e72d", 00:34:22.914 "aliases": [ 00:34:22.914 "lvs0/lv0" 00:34:22.914 ], 00:34:22.914 "product_name": "Logical Volume", 00:34:22.914 "block_size": 512, 00:34:22.914 "num_blocks": 204800, 00:34:22.914 "uuid": "5eb3e561-be9a-4838-b5ba-68143684e72d", 00:34:22.914 "assigned_rate_limits": { 00:34:22.914 "rw_ios_per_sec": 0, 00:34:22.914 "rw_mbytes_per_sec": 0, 00:34:22.914 "r_mbytes_per_sec": 0, 00:34:22.914 "w_mbytes_per_sec": 0 00:34:22.914 }, 00:34:22.914 "claimed": false, 00:34:22.914 "zoned": false, 00:34:22.914 "supported_io_types": { 00:34:22.914 "read": true, 00:34:22.914 "write": true, 00:34:22.914 "unmap": true, 00:34:22.914 "flush": false, 00:34:22.914 "reset": true, 00:34:22.914 "nvme_admin": false, 00:34:22.914 "nvme_io": false, 00:34:22.914 "nvme_io_md": false, 00:34:22.914 "write_zeroes": true, 00:34:22.914 "zcopy": false, 00:34:22.914 "get_zone_info": false, 00:34:22.914 "zone_management": false, 00:34:22.914 "zone_append": false, 00:34:22.914 "compare": false, 00:34:22.914 "compare_and_write": false, 00:34:22.914 "abort": false, 00:34:22.914 "seek_hole": true, 00:34:22.914 "seek_data": true, 00:34:22.914 "copy": false, 00:34:22.914 "nvme_iov_md": false 00:34:22.914 }, 00:34:22.914 "driver_specific": { 00:34:22.914 "lvol": { 00:34:22.914 "lvol_store_uuid": "7b22ff6d-c52b-4513-8699-832738f68102", 00:34:22.914 "base_bdev": "Nvme0n1", 00:34:22.914 "thin_provision": true, 00:34:22.914 "num_allocated_clusters": 0, 00:34:22.914 "snapshot": false, 00:34:22.914 "clone": false, 00:34:22.914 "esnap_clone": false 00:34:22.914 } 00:34:22.914 } 00:34:22.914 } 00:34:22.914 ] 00:34:22.914 02:40:13 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:34:22.914 02:40:13 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:34:22.914 02:40:13 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:34:23.173 [2024-07-11 02:40:13.483373] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:34:23.173 COMP_lvs0/lv0 00:34:23.173 02:40:13 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:34:23.173 02:40:13 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:34:23.173 02:40:13 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:23.173 02:40:13 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:34:23.173 02:40:13 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:23.173 02:40:13 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:23.173 02:40:13 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:23.432 02:40:13 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:34:23.691 [ 00:34:23.691 { 00:34:23.691 "name": "COMP_lvs0/lv0", 00:34:23.691 "aliases": [ 00:34:23.691 "31cd67bc-2d87-5583-b9e9-82a84da401d3" 00:34:23.691 ], 00:34:23.691 "product_name": "compress", 00:34:23.691 "block_size": 512, 00:34:23.691 "num_blocks": 200704, 00:34:23.691 "uuid": "31cd67bc-2d87-5583-b9e9-82a84da401d3", 
00:34:23.691 "assigned_rate_limits": { 00:34:23.691 "rw_ios_per_sec": 0, 00:34:23.691 "rw_mbytes_per_sec": 0, 00:34:23.691 "r_mbytes_per_sec": 0, 00:34:23.691 "w_mbytes_per_sec": 0 00:34:23.691 }, 00:34:23.691 "claimed": false, 00:34:23.691 "zoned": false, 00:34:23.691 "supported_io_types": { 00:34:23.691 "read": true, 00:34:23.691 "write": true, 00:34:23.691 "unmap": false, 00:34:23.691 "flush": false, 00:34:23.691 "reset": false, 00:34:23.691 "nvme_admin": false, 00:34:23.691 "nvme_io": false, 00:34:23.691 "nvme_io_md": false, 00:34:23.691 "write_zeroes": true, 00:34:23.691 "zcopy": false, 00:34:23.691 "get_zone_info": false, 00:34:23.691 "zone_management": false, 00:34:23.691 "zone_append": false, 00:34:23.691 "compare": false, 00:34:23.691 "compare_and_write": false, 00:34:23.691 "abort": false, 00:34:23.691 "seek_hole": false, 00:34:23.691 "seek_data": false, 00:34:23.691 "copy": false, 00:34:23.691 "nvme_iov_md": false 00:34:23.691 }, 00:34:23.691 "driver_specific": { 00:34:23.691 "compress": { 00:34:23.691 "name": "COMP_lvs0/lv0", 00:34:23.691 "base_bdev_name": "5eb3e561-be9a-4838-b5ba-68143684e72d" 00:34:23.691 } 00:34:23.691 } 00:34:23.691 } 00:34:23.691 ] 00:34:23.691 02:40:13 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:34:23.691 02:40:13 compress_compdev -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:34:23.691 [2024-07-11 02:40:14.012141] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f24141b1350 PMD being used: compress_qat 00:34:23.691 I/O targets: 00:34:23.691 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:34:23.691 00:34:23.691 00:34:23.691 CUnit - A unit testing framework for C - Version 2.1-3 00:34:23.691 http://cunit.sourceforge.net/ 00:34:23.691 00:34:23.691 00:34:23.691 Suite: bdevio tests on: COMP_lvs0/lv0 00:34:23.691 Test: blockdev write read block ...passed 00:34:23.691 Test: blockdev write zeroes read block ...passed 00:34:23.691 Test: blockdev write zeroes read no split ...passed 00:34:23.691 Test: blockdev write zeroes read split ...passed 00:34:23.950 Test: blockdev write zeroes read split partial ...passed 00:34:23.950 Test: blockdev reset ...[2024-07-11 02:40:14.133030] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:34:23.950 passed 00:34:23.950 Test: blockdev write read 8 blocks ...passed 00:34:23.950 Test: blockdev write read size > 128k ...passed 00:34:23.950 Test: blockdev write read invalid size ...passed 00:34:23.950 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:23.950 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:23.950 Test: blockdev write read max offset ...passed 00:34:23.950 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:23.950 Test: blockdev writev readv 8 blocks ...passed 00:34:23.950 Test: blockdev writev readv 30 x 1block ...passed 00:34:23.950 Test: blockdev writev readv block ...passed 00:34:23.950 Test: blockdev writev readv size > 128k ...passed 00:34:23.950 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:23.950 Test: blockdev comparev and writev ...passed 00:34:23.950 Test: blockdev nvme passthru rw ...passed 00:34:23.950 Test: blockdev nvme passthru vendor specific ...passed 00:34:23.950 Test: blockdev nvme admin passthru ...passed 00:34:23.950 Test: blockdev copy ...passed 00:34:23.950 00:34:23.951 Run Summary: Type Total Ran Passed Failed Inactive 00:34:23.951 suites 1 
1 n/a 0 0 00:34:23.951 tests 23 23 23 0 0 00:34:23.951 asserts 130 130 130 0 n/a 00:34:23.951 00:34:23.951 Elapsed time = 0.288 seconds 00:34:23.951 0 00:34:23.951 02:40:14 compress_compdev -- compress/compress.sh@60 -- # destroy_vols 00:34:23.951 02:40:14 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:34:24.209 02:40:14 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:34:24.209 02:40:14 compress_compdev -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:34:24.209 02:40:14 compress_compdev -- compress/compress.sh@62 -- # killprocess 2075830 00:34:24.209 02:40:14 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 2075830 ']' 00:34:24.209 02:40:14 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 2075830 00:34:24.210 02:40:14 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:34:24.210 02:40:14 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:24.468 02:40:14 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2075830 00:34:24.468 02:40:14 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:24.468 02:40:14 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:24.468 02:40:14 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2075830' 00:34:24.468 killing process with pid 2075830 00:34:24.468 02:40:14 compress_compdev -- common/autotest_common.sh@967 -- # kill 2075830 00:34:24.468 02:40:14 compress_compdev -- common/autotest_common.sh@972 -- # wait 2075830 00:34:28.659 02:40:18 compress_compdev -- compress/compress.sh@91 -- # '[' 1 -eq 1 ']' 00:34:28.659 02:40:18 compress_compdev -- compress/compress.sh@92 -- # run_bdevperf 64 16384 30 00:34:28.659 02:40:18 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:34:28.659 02:40:18 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=2077427 00:34:28.659 02:40:18 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:28.659 02:40:18 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 64 -o 16384 -w verify -t 30 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:34:28.659 02:40:18 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 2077427 00:34:28.659 02:40:18 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 2077427 ']' 00:34:28.659 02:40:18 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:28.659 02:40:18 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:28.659 02:40:18 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:28.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:28.659 02:40:18 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:28.659 02:40:18 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:34:28.659 [2024-07-11 02:40:18.565550] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
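Condensed, the bdevperf pass that starts here follows this shape. A minimal sketch, assuming the helpers (waitforlisten, create_vols) behave as the surrounding xtrace shows; $SPDK is shorthand for the /var/jenkins/workspace/crypto-phy-autotest/spdk tree and is not a variable the script itself defines:

    # Start bdevperf idle (-z waits for RPC): queue depth 64, 16 KiB verify
    # I/O for 30 s on cores 1-2 (-m 0x6), compress enabled (-C) with the
    # DPDK/QAT config from test/compress/dpdk.json.
    $SPDK/build/examples/bdevperf -z -q 64 -o 16384 -w verify -t 30 -C \
        -m 0x6 -c $SPDK/test/compress/dpdk.json &
    bdevperf_pid=$!
    waitforlisten $bdevperf_pid    # block until /var/tmp/spdk.sock answers
    create_vols                    # lvstore + thin lvol + compress vbdev via rpc.py
    $SPDK/examples/bdev/bdevperf/bdevperf.py perform_tests   # drive the 30 s run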
00:34:28.659 [2024-07-11 02:40:18.565633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2077427 ] 00:34:28.659 [2024-07-11 02:40:18.714581] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:28.659 [2024-07-11 02:40:18.783610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:28.659 [2024-07-11 02:40:18.783617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:29.226 [2024-07-11 02:40:19.621243] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:34:29.485 02:40:19 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:29.485 02:40:19 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:34:29.485 02:40:19 compress_compdev -- compress/compress.sh@74 -- # create_vols 00:34:29.485 02:40:19 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:29.485 02:40:19 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:34:32.778 [2024-07-11 02:40:22.821933] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2a022a0 PMD being used: compress_qat 00:34:32.778 02:40:22 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:34:32.778 02:40:22 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:34:32.778 02:40:22 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:32.778 02:40:22 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:34:32.778 02:40:22 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:32.778 02:40:22 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:32.778 02:40:22 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:32.778 02:40:23 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:34:33.037 [ 00:34:33.037 { 00:34:33.037 "name": "Nvme0n1", 00:34:33.037 "aliases": [ 00:34:33.037 "a8031dfe-b384-4545-bb57-cd35ca68e80f" 00:34:33.037 ], 00:34:33.037 "product_name": "NVMe disk", 00:34:33.037 "block_size": 512, 00:34:33.037 "num_blocks": 7814037168, 00:34:33.037 "uuid": "a8031dfe-b384-4545-bb57-cd35ca68e80f", 00:34:33.037 "assigned_rate_limits": { 00:34:33.037 "rw_ios_per_sec": 0, 00:34:33.037 "rw_mbytes_per_sec": 0, 00:34:33.037 "r_mbytes_per_sec": 0, 00:34:33.037 "w_mbytes_per_sec": 0 00:34:33.037 }, 00:34:33.037 "claimed": false, 00:34:33.037 "zoned": false, 00:34:33.037 "supported_io_types": { 00:34:33.037 "read": true, 00:34:33.037 "write": true, 00:34:33.037 "unmap": true, 00:34:33.037 "flush": true, 00:34:33.037 "reset": true, 00:34:33.037 "nvme_admin": true, 00:34:33.037 "nvme_io": true, 00:34:33.037 "nvme_io_md": false, 00:34:33.037 "write_zeroes": true, 00:34:33.037 "zcopy": false, 00:34:33.037 "get_zone_info": false, 00:34:33.037 "zone_management": false, 00:34:33.037 "zone_append": false, 00:34:33.037 "compare": false, 00:34:33.037 "compare_and_write": false, 00:34:33.037 "abort": true, 00:34:33.037 "seek_hole": false, 00:34:33.037 "seek_data": false, 00:34:33.038 "copy": 
false, 00:34:33.038 "nvme_iov_md": false 00:34:33.038 }, 00:34:33.038 "driver_specific": { 00:34:33.038 "nvme": [ 00:34:33.038 { 00:34:33.038 "pci_address": "0000:1a:00.0", 00:34:33.038 "trid": { 00:34:33.038 "trtype": "PCIe", 00:34:33.038 "traddr": "0000:1a:00.0" 00:34:33.038 }, 00:34:33.038 "ctrlr_data": { 00:34:33.038 "cntlid": 0, 00:34:33.038 "vendor_id": "0x8086", 00:34:33.038 "model_number": "INTEL SSDPE2KX040T8", 00:34:33.038 "serial_number": "BTLJ8303085V4P0DGN", 00:34:33.038 "firmware_revision": "VDV10170", 00:34:33.038 "oacs": { 00:34:33.038 "security": 0, 00:34:33.038 "format": 1, 00:34:33.038 "firmware": 1, 00:34:33.038 "ns_manage": 1 00:34:33.038 }, 00:34:33.038 "multi_ctrlr": false, 00:34:33.038 "ana_reporting": false 00:34:33.038 }, 00:34:33.038 "vs": { 00:34:33.038 "nvme_version": "1.2" 00:34:33.038 }, 00:34:33.038 "ns_data": { 00:34:33.038 "id": 1, 00:34:33.038 "can_share": false 00:34:33.038 } 00:34:33.038 } 00:34:33.038 ], 00:34:33.038 "mp_policy": "active_passive" 00:34:33.038 } 00:34:33.038 } 00:34:33.038 ] 00:34:33.038 02:40:23 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:34:33.038 02:40:23 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:34:33.038 [2024-07-11 02:40:23.416103] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2850870 PMD being used: compress_qat 00:34:34.943 c8e39754-65c8-453a-82be-fa11f7752cb9 00:34:34.943 02:40:25 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:34:35.202 e1d46fb5-d236-462b-aaee-a60743cc7c73 00:34:35.202 02:40:25 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:34:35.202 02:40:25 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:34:35.202 02:40:25 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:35.202 02:40:25 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:34:35.202 02:40:25 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:35.202 02:40:25 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:35.202 02:40:25 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:35.461 02:40:25 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:34:35.720 [ 00:34:35.720 { 00:34:35.720 "name": "e1d46fb5-d236-462b-aaee-a60743cc7c73", 00:34:35.720 "aliases": [ 00:34:35.720 "lvs0/lv0" 00:34:35.720 ], 00:34:35.720 "product_name": "Logical Volume", 00:34:35.720 "block_size": 512, 00:34:35.720 "num_blocks": 204800, 00:34:35.720 "uuid": "e1d46fb5-d236-462b-aaee-a60743cc7c73", 00:34:35.720 "assigned_rate_limits": { 00:34:35.720 "rw_ios_per_sec": 0, 00:34:35.720 "rw_mbytes_per_sec": 0, 00:34:35.720 "r_mbytes_per_sec": 0, 00:34:35.720 "w_mbytes_per_sec": 0 00:34:35.720 }, 00:34:35.720 "claimed": false, 00:34:35.720 "zoned": false, 00:34:35.720 "supported_io_types": { 00:34:35.720 "read": true, 00:34:35.720 "write": true, 00:34:35.720 "unmap": true, 00:34:35.720 "flush": false, 00:34:35.720 "reset": true, 00:34:35.720 "nvme_admin": false, 00:34:35.720 "nvme_io": false, 00:34:35.720 "nvme_io_md": false, 00:34:35.720 "write_zeroes": true, 00:34:35.720 "zcopy": 
false, 00:34:35.720 "get_zone_info": false, 00:34:35.720 "zone_management": false, 00:34:35.720 "zone_append": false, 00:34:35.720 "compare": false, 00:34:35.720 "compare_and_write": false, 00:34:35.720 "abort": false, 00:34:35.720 "seek_hole": true, 00:34:35.720 "seek_data": true, 00:34:35.720 "copy": false, 00:34:35.720 "nvme_iov_md": false 00:34:35.720 }, 00:34:35.720 "driver_specific": { 00:34:35.720 "lvol": { 00:34:35.720 "lvol_store_uuid": "c8e39754-65c8-453a-82be-fa11f7752cb9", 00:34:35.720 "base_bdev": "Nvme0n1", 00:34:35.720 "thin_provision": true, 00:34:35.720 "num_allocated_clusters": 0, 00:34:35.720 "snapshot": false, 00:34:35.720 "clone": false, 00:34:35.720 "esnap_clone": false 00:34:35.720 } 00:34:35.720 } 00:34:35.720 } 00:34:35.720 ] 00:34:35.720 02:40:25 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:34:35.720 02:40:25 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:34:35.720 02:40:25 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:34:35.720 [2024-07-11 02:40:26.130318] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:34:35.720 COMP_lvs0/lv0 00:34:35.978 02:40:26 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:34:35.978 02:40:26 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:34:35.978 02:40:26 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:35.978 02:40:26 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:34:35.978 02:40:26 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:35.978 02:40:26 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:35.978 02:40:26 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:35.978 02:40:26 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:34:36.237 [ 00:34:36.237 { 00:34:36.237 "name": "COMP_lvs0/lv0", 00:34:36.237 "aliases": [ 00:34:36.237 "02b75607-e0a5-599b-a4d4-0978a9414427" 00:34:36.237 ], 00:34:36.237 "product_name": "compress", 00:34:36.237 "block_size": 512, 00:34:36.237 "num_blocks": 200704, 00:34:36.237 "uuid": "02b75607-e0a5-599b-a4d4-0978a9414427", 00:34:36.237 "assigned_rate_limits": { 00:34:36.237 "rw_ios_per_sec": 0, 00:34:36.237 "rw_mbytes_per_sec": 0, 00:34:36.237 "r_mbytes_per_sec": 0, 00:34:36.237 "w_mbytes_per_sec": 0 00:34:36.237 }, 00:34:36.237 "claimed": false, 00:34:36.237 "zoned": false, 00:34:36.237 "supported_io_types": { 00:34:36.237 "read": true, 00:34:36.237 "write": true, 00:34:36.237 "unmap": false, 00:34:36.237 "flush": false, 00:34:36.237 "reset": false, 00:34:36.237 "nvme_admin": false, 00:34:36.237 "nvme_io": false, 00:34:36.237 "nvme_io_md": false, 00:34:36.237 "write_zeroes": true, 00:34:36.237 "zcopy": false, 00:34:36.237 "get_zone_info": false, 00:34:36.237 "zone_management": false, 00:34:36.237 "zone_append": false, 00:34:36.237 "compare": false, 00:34:36.237 "compare_and_write": false, 00:34:36.237 "abort": false, 00:34:36.237 "seek_hole": false, 00:34:36.237 "seek_data": false, 00:34:36.237 "copy": false, 00:34:36.238 "nvme_iov_md": false 00:34:36.238 }, 00:34:36.238 "driver_specific": { 00:34:36.238 "compress": { 
00:34:36.238 "name": "COMP_lvs0/lv0", 00:34:36.238 "base_bdev_name": "e1d46fb5-d236-462b-aaee-a60743cc7c73" 00:34:36.238 } 00:34:36.238 } 00:34:36.238 } 00:34:36.238 ] 00:34:36.238 02:40:26 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:34:36.238 02:40:26 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:34:36.496 [2024-07-11 02:40:26.676841] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f5dac1b15c0 PMD being used: compress_qat 00:34:36.496 [2024-07-11 02:40:26.680118] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x29f87e0 PMD being used: compress_qat 00:34:36.496 Running I/O for 30 seconds... 00:35:08.688 00:35:08.688 Latency(us) 00:35:08.688 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:08.688 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 64, IO size: 16384) 00:35:08.688 Verification LBA range: start 0x0 length 0xc40 00:35:08.688 COMP_lvs0/lv0 : 30.02 488.10 7.63 0.00 0.00 130680.02 4843.97 118534.68 00:35:08.689 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 64, IO size: 16384) 00:35:08.689 Verification LBA range: start 0xc40 length 0xc40 00:35:08.689 COMP_lvs0/lv0 : 30.02 1946.66 30.42 0.00 0.00 32568.09 2208.28 73400.32 00:35:08.689 =================================================================================================================== 00:35:08.689 Total : 2434.76 38.04 0.00 0.00 52241.22 2208.28 118534.68 00:35:08.689 0 00:35:08.689 02:40:56 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:35:08.689 02:40:56 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:35:08.689 02:40:57 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:35:08.689 02:40:57 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:35:08.689 02:40:57 compress_compdev -- compress/compress.sh@78 -- # killprocess 2077427 00:35:08.689 02:40:57 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 2077427 ']' 00:35:08.689 02:40:57 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 2077427 00:35:08.689 02:40:57 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:35:08.689 02:40:57 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:08.689 02:40:57 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2077427 00:35:08.689 02:40:57 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:08.689 02:40:57 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:08.689 02:40:57 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2077427' 00:35:08.689 killing process with pid 2077427 00:35:08.689 02:40:57 compress_compdev -- common/autotest_common.sh@967 -- # kill 2077427 00:35:08.689 Received shutdown signal, test time was about 30.000000 seconds 00:35:08.689 00:35:08.689 Latency(us) 00:35:08.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:08.689 =================================================================================================================== 00:35:08.689 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:08.689 02:40:57 compress_compdev -- 
common/autotest_common.sh@972 -- # wait 2077427 00:35:11.225 02:41:01 compress_compdev -- compress/compress.sh@95 -- # export TEST_TRANSPORT=tcp 00:35:11.225 02:41:01 compress_compdev -- compress/compress.sh@95 -- # TEST_TRANSPORT=tcp 00:35:11.225 02:41:01 compress_compdev -- compress/compress.sh@96 -- # NET_TYPE=virt 00:35:11.225 02:41:01 compress_compdev -- compress/compress.sh@96 -- # nvmftestinit 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:11.225 02:41:01 compress_compdev -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:11.225 02:41:01 compress_compdev -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@432 -- # nvmf_veth_init 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:35:11.225 Cannot find device "nvmf_init_br" 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@154 -- # true 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:35:11.225 Cannot find device "nvmf_tgt_br" 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@155 -- # true 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:35:11.225 Cannot find device "nvmf_tgt_br2" 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@156 -- # true 00:35:11.225 02:41:01 compress_compdev -- 
nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:35:11.225 Cannot find device "nvmf_init_br" 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@157 -- # true 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:35:11.225 Cannot find device "nvmf_tgt_br" 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@158 -- # true 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:35:11.225 Cannot find device "nvmf_tgt_br2" 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@159 -- # true 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:35:11.225 Cannot find device "nvmf_br" 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@160 -- # true 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:35:11.225 Cannot find device "nvmf_init_if" 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@161 -- # true 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:11.225 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@162 -- # true 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:11.225 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@163 -- # true 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:35:11.225 02:41:01 compress_compdev -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:35:11.225 02:41:01 compress_compdev -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:11.485 02:41:01 compress_compdev -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:11.485 02:41:01 compress_compdev -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:11.485 02:41:01 compress_compdev -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:11.485 02:41:01 compress_compdev -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:35:11.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:11.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:35:11.485 00:35:11.485 --- 10.0.0.2 ping statistics --- 00:35:11.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:11.485 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:35:11.485 02:41:01 compress_compdev -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:35:11.485 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:35:11.485 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:35:11.485 00:35:11.485 --- 10.0.0.3 ping statistics --- 00:35:11.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:11.485 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:35:11.485 02:41:01 compress_compdev -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:11.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:11.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:35:11.485 00:35:11.485 --- 10.0.0.1 ping statistics --- 00:35:11.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:11.485 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:35:11.485 02:41:01 compress_compdev -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:11.485 02:41:01 compress_compdev -- nvmf/common.sh@433 -- # return 0 00:35:11.485 02:41:01 compress_compdev -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:11.485 02:41:01 compress_compdev -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:11.485 02:41:01 compress_compdev -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:11.485 02:41:01 compress_compdev -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:11.485 02:41:01 compress_compdev -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:11.485 02:41:01 compress_compdev -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:11.485 02:41:01 compress_compdev -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:11.485 02:41:01 compress_compdev -- compress/compress.sh@97 -- # nvmfappstart -m 0x7 00:35:11.485 02:41:01 compress_compdev -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:11.485 02:41:01 compress_compdev -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:11.485 02:41:01 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:35:11.485 02:41:01 compress_compdev -- nvmf/common.sh@481 -- # nvmfpid=2083097 00:35:11.485 02:41:01 compress_compdev -- nvmf/common.sh@482 -- # waitforlisten 2083097 00:35:11.485 02:41:01 compress_compdev -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:35:11.485 02:41:01 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 2083097 ']' 00:35:11.485 02:41:01 compress_compdev -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:11.485 02:41:01 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:11.485 02:41:01 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:11.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:11.485 02:41:01 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:11.485 02:41:01 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:35:11.485 [2024-07-11 02:41:01.882592] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:35:11.485 [2024-07-11 02:41:01.882665] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:11.745 [2024-07-11 02:41:02.031154] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:11.745 [2024-07-11 02:41:02.085635] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:11.745 [2024-07-11 02:41:02.085686] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:11.745 [2024-07-11 02:41:02.085700] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:11.745 [2024-07-11 02:41:02.085713] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:11.745 [2024-07-11 02:41:02.085723] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
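The nvmf_veth_init sequence above reduces to a small virtual topology: the initiator keeps nvmf_init_if (10.0.0.1/24) in the root namespace, the target interfaces nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, and the peer ends are enslaved to the nvmf_br bridge so the ping checks can pass. Abridged to the essential commands, all copied from the log (the second target interface and the link-up steps are elided):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The nvmf_tgt application is then started under the namespace with ip netns exec nvmf_tgt_ns_spdk, which is why its listener on 10.0.0.2:4420 is reachable from the root namespace only through the bridge.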
00:35:11.745 [2024-07-11 02:41:02.085791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:11.745 [2024-07-11 02:41:02.085891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:11.745 [2024-07-11 02:41:02.085893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:12.682 02:41:02 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:12.682 02:41:02 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:35:12.682 02:41:02 compress_compdev -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:12.682 02:41:02 compress_compdev -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:12.682 02:41:02 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:35:12.682 02:41:02 compress_compdev -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:12.682 02:41:02 compress_compdev -- compress/compress.sh@98 -- # trap 'nvmftestfini; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:12.682 02:41:02 compress_compdev -- compress/compress.sh@101 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -u 8192 00:35:12.682 [2024-07-11 02:41:03.036025] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:12.682 02:41:03 compress_compdev -- compress/compress.sh@102 -- # create_vols 00:35:12.682 02:41:03 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:12.682 02:41:03 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:35:15.966 02:41:06 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:35:15.966 02:41:06 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:35:15.966 02:41:06 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:35:15.966 02:41:06 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:35:15.966 02:41:06 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:35:15.966 02:41:06 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:35:15.966 02:41:06 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:16.226 02:41:06 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:35:16.485 [ 00:35:16.485 { 00:35:16.485 "name": "Nvme0n1", 00:35:16.485 "aliases": [ 00:35:16.485 "a91c0177-2aac-4356-975b-f0edac60b9ef" 00:35:16.485 ], 00:35:16.485 "product_name": "NVMe disk", 00:35:16.485 "block_size": 512, 00:35:16.485 "num_blocks": 7814037168, 00:35:16.485 "uuid": "a91c0177-2aac-4356-975b-f0edac60b9ef", 00:35:16.485 "assigned_rate_limits": { 00:35:16.485 "rw_ios_per_sec": 0, 00:35:16.485 "rw_mbytes_per_sec": 0, 00:35:16.485 "r_mbytes_per_sec": 0, 00:35:16.485 "w_mbytes_per_sec": 0 00:35:16.485 }, 00:35:16.485 "claimed": false, 00:35:16.485 "zoned": false, 00:35:16.485 "supported_io_types": { 00:35:16.485 "read": true, 00:35:16.485 "write": true, 00:35:16.485 "unmap": true, 00:35:16.485 "flush": true, 00:35:16.485 "reset": true, 00:35:16.485 "nvme_admin": true, 00:35:16.485 "nvme_io": true, 00:35:16.485 "nvme_io_md": false, 00:35:16.485 "write_zeroes": true, 00:35:16.485 "zcopy": false, 00:35:16.485 "get_zone_info": false, 
00:35:16.485 "zone_management": false, 00:35:16.485 "zone_append": false, 00:35:16.485 "compare": false, 00:35:16.485 "compare_and_write": false, 00:35:16.485 "abort": true, 00:35:16.485 "seek_hole": false, 00:35:16.485 "seek_data": false, 00:35:16.485 "copy": false, 00:35:16.485 "nvme_iov_md": false 00:35:16.485 }, 00:35:16.485 "driver_specific": { 00:35:16.485 "nvme": [ 00:35:16.485 { 00:35:16.485 "pci_address": "0000:1a:00.0", 00:35:16.485 "trid": { 00:35:16.485 "trtype": "PCIe", 00:35:16.485 "traddr": "0000:1a:00.0" 00:35:16.485 }, 00:35:16.485 "ctrlr_data": { 00:35:16.485 "cntlid": 0, 00:35:16.485 "vendor_id": "0x8086", 00:35:16.485 "model_number": "INTEL SSDPE2KX040T8", 00:35:16.485 "serial_number": "BTLJ8303085V4P0DGN", 00:35:16.485 "firmware_revision": "VDV10170", 00:35:16.485 "oacs": { 00:35:16.485 "security": 0, 00:35:16.485 "format": 1, 00:35:16.485 "firmware": 1, 00:35:16.485 "ns_manage": 1 00:35:16.485 }, 00:35:16.485 "multi_ctrlr": false, 00:35:16.485 "ana_reporting": false 00:35:16.485 }, 00:35:16.485 "vs": { 00:35:16.485 "nvme_version": "1.2" 00:35:16.485 }, 00:35:16.485 "ns_data": { 00:35:16.485 "id": 1, 00:35:16.485 "can_share": false 00:35:16.485 } 00:35:16.485 } 00:35:16.485 ], 00:35:16.485 "mp_policy": "active_passive" 00:35:16.485 } 00:35:16.485 } 00:35:16.485 ] 00:35:16.485 02:41:06 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:35:16.485 02:41:06 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:35:19.018 13fd1831-89a6-4585-943b-e70f1a58e890 00:35:19.018 02:41:08 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:35:19.018 74747d38-5820-4af0-b45f-e64917837e51 00:35:19.018 02:41:09 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:35:19.018 02:41:09 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:35:19.018 02:41:09 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:35:19.018 02:41:09 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:35:19.018 02:41:09 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:35:19.018 02:41:09 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:35:19.018 02:41:09 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:19.018 02:41:09 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:35:19.277 [ 00:35:19.277 { 00:35:19.277 "name": "74747d38-5820-4af0-b45f-e64917837e51", 00:35:19.277 "aliases": [ 00:35:19.277 "lvs0/lv0" 00:35:19.277 ], 00:35:19.277 "product_name": "Logical Volume", 00:35:19.277 "block_size": 512, 00:35:19.277 "num_blocks": 204800, 00:35:19.277 "uuid": "74747d38-5820-4af0-b45f-e64917837e51", 00:35:19.277 "assigned_rate_limits": { 00:35:19.277 "rw_ios_per_sec": 0, 00:35:19.277 "rw_mbytes_per_sec": 0, 00:35:19.277 "r_mbytes_per_sec": 0, 00:35:19.277 "w_mbytes_per_sec": 0 00:35:19.277 }, 00:35:19.277 "claimed": false, 00:35:19.277 "zoned": false, 00:35:19.277 "supported_io_types": { 00:35:19.277 "read": true, 00:35:19.277 "write": true, 00:35:19.277 "unmap": true, 00:35:19.277 "flush": false, 00:35:19.277 "reset": true, 00:35:19.277 "nvme_admin": false, 
00:35:19.277 "nvme_io": false, 00:35:19.277 "nvme_io_md": false, 00:35:19.277 "write_zeroes": true, 00:35:19.277 "zcopy": false, 00:35:19.277 "get_zone_info": false, 00:35:19.277 "zone_management": false, 00:35:19.277 "zone_append": false, 00:35:19.277 "compare": false, 00:35:19.277 "compare_and_write": false, 00:35:19.277 "abort": false, 00:35:19.277 "seek_hole": true, 00:35:19.277 "seek_data": true, 00:35:19.277 "copy": false, 00:35:19.277 "nvme_iov_md": false 00:35:19.277 }, 00:35:19.277 "driver_specific": { 00:35:19.277 "lvol": { 00:35:19.277 "lvol_store_uuid": "13fd1831-89a6-4585-943b-e70f1a58e890", 00:35:19.277 "base_bdev": "Nvme0n1", 00:35:19.277 "thin_provision": true, 00:35:19.277 "num_allocated_clusters": 0, 00:35:19.277 "snapshot": false, 00:35:19.277 "clone": false, 00:35:19.277 "esnap_clone": false 00:35:19.277 } 00:35:19.277 } 00:35:19.277 } 00:35:19.277 ] 00:35:19.277 02:41:09 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:35:19.277 02:41:09 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:35:19.277 02:41:09 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:35:19.536 [2024-07-11 02:41:09.889715] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:35:19.536 COMP_lvs0/lv0 00:35:19.536 02:41:09 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:35:19.536 02:41:09 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:35:19.536 02:41:09 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:35:19.536 02:41:09 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:35:19.536 02:41:09 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:35:19.536 02:41:09 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:35:19.536 02:41:09 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:19.794 02:41:10 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:35:20.054 [ 00:35:20.054 { 00:35:20.054 "name": "COMP_lvs0/lv0", 00:35:20.054 "aliases": [ 00:35:20.054 "1761ad55-1083-529f-8eec-662173237931" 00:35:20.054 ], 00:35:20.054 "product_name": "compress", 00:35:20.054 "block_size": 512, 00:35:20.054 "num_blocks": 200704, 00:35:20.054 "uuid": "1761ad55-1083-529f-8eec-662173237931", 00:35:20.054 "assigned_rate_limits": { 00:35:20.054 "rw_ios_per_sec": 0, 00:35:20.054 "rw_mbytes_per_sec": 0, 00:35:20.054 "r_mbytes_per_sec": 0, 00:35:20.054 "w_mbytes_per_sec": 0 00:35:20.054 }, 00:35:20.054 "claimed": false, 00:35:20.054 "zoned": false, 00:35:20.054 "supported_io_types": { 00:35:20.054 "read": true, 00:35:20.054 "write": true, 00:35:20.054 "unmap": false, 00:35:20.054 "flush": false, 00:35:20.054 "reset": false, 00:35:20.054 "nvme_admin": false, 00:35:20.054 "nvme_io": false, 00:35:20.054 "nvme_io_md": false, 00:35:20.054 "write_zeroes": true, 00:35:20.054 "zcopy": false, 00:35:20.054 "get_zone_info": false, 00:35:20.054 "zone_management": false, 00:35:20.054 "zone_append": false, 00:35:20.054 "compare": false, 00:35:20.054 "compare_and_write": false, 00:35:20.054 "abort": false, 00:35:20.054 "seek_hole": false, 00:35:20.054 "seek_data": false, 00:35:20.054 "copy": 
false, 00:35:20.054 "nvme_iov_md": false 00:35:20.054 }, 00:35:20.054 "driver_specific": { 00:35:20.054 "compress": { 00:35:20.054 "name": "COMP_lvs0/lv0", 00:35:20.054 "base_bdev_name": "74747d38-5820-4af0-b45f-e64917837e51" 00:35:20.054 } 00:35:20.054 } 00:35:20.054 } 00:35:20.054 ] 00:35:20.054 02:41:10 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:35:20.054 02:41:10 compress_compdev -- compress/compress.sh@103 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:20.314 02:41:10 compress_compdev -- compress/compress.sh@104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 COMP_lvs0/lv0 00:35:20.573 02:41:10 compress_compdev -- compress/compress.sh@105 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:20.832 [2024-07-11 02:41:11.219662] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:20.832 02:41:11 compress_compdev -- compress/compress.sh@108 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 64 -s 512 -w randrw -t 30 -c 0x18 -M 50 00:35:20.832 02:41:11 compress_compdev -- compress/compress.sh@109 -- # perf_pid=2084224 00:35:20.832 02:41:11 compress_compdev -- compress/compress.sh@112 -- # trap 'killprocess $perf_pid; compress_err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:20.832 02:41:11 compress_compdev -- compress/compress.sh@113 -- # wait 2084224 00:35:21.401 [2024-07-11 02:41:11.519664] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:35:53.481 Initializing NVMe Controllers 00:35:53.481 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:35:53.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:35:53.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:35:53.481 Initialization complete. Launching workers. 
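For reference, the RPC sequence that exported the compress bdev over NVMe/TCP for the run whose results follow is, in condensed form (rpc.py abbreviates the full scripts/rpc.py path used throughout the log; every flag is copied from the commands above):

    rpc.py nvmf_create_transport -t tcp -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 COMP_lvs0/lv0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # 4 KiB random I/O at QD 64 for 30 s on cores 3-4 (-c 0x18), 50% read
    # mix (-M 50), against the listener just created:
    spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 64 -s 512 -w randrw -t 30 -c 0x18 -M 50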
00:35:53.481 ======================================================== 00:35:53.481 Latency(us) 00:35:53.481 Device Information : IOPS MiB/s Average min max 00:35:53.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 3754.57 14.67 17048.47 2080.81 40855.93 00:35:53.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 2337.27 9.13 27389.86 2102.45 49192.29 00:35:53.481 ======================================================== 00:35:53.481 Total : 6091.83 23.80 21016.17 2080.81 49192.29 00:35:53.481 00:35:53.481 02:41:41 compress_compdev -- compress/compress.sh@114 -- # destroy_vols 00:35:53.481 02:41:41 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:35:53.481 02:41:41 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:35:53.481 02:41:42 compress_compdev -- compress/compress.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:35:53.481 02:41:42 compress_compdev -- compress/compress.sh@117 -- # nvmftestfini 00:35:53.481 02:41:42 compress_compdev -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:53.481 02:41:42 compress_compdev -- nvmf/common.sh@117 -- # sync 00:35:53.481 02:41:42 compress_compdev -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:53.481 02:41:42 compress_compdev -- nvmf/common.sh@120 -- # set +e 00:35:53.481 02:41:42 compress_compdev -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:53.481 02:41:42 compress_compdev -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:53.481 rmmod nvme_tcp 00:35:53.481 rmmod nvme_fabrics 00:35:53.481 rmmod nvme_keyring 00:35:53.481 02:41:42 compress_compdev -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:53.481 02:41:42 compress_compdev -- nvmf/common.sh@124 -- # set -e 00:35:53.481 02:41:42 compress_compdev -- nvmf/common.sh@125 -- # return 0 00:35:53.481 02:41:42 compress_compdev -- nvmf/common.sh@489 -- # '[' -n 2083097 ']' 00:35:53.481 02:41:42 compress_compdev -- nvmf/common.sh@490 -- # killprocess 2083097 00:35:53.481 02:41:42 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 2083097 ']' 00:35:53.481 02:41:42 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 2083097 00:35:53.482 02:41:42 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:35:53.482 02:41:42 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:53.482 02:41:42 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2083097 00:35:53.482 02:41:42 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:53.482 02:41:42 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:53.482 02:41:42 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2083097' 00:35:53.482 killing process with pid 2083097 00:35:53.482 02:41:42 compress_compdev -- common/autotest_common.sh@967 -- # kill 2083097 00:35:53.482 02:41:42 compress_compdev -- common/autotest_common.sh@972 -- # wait 2083097 00:35:56.016 02:41:46 compress_compdev -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:56.016 02:41:46 compress_compdev -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:56.016 02:41:46 compress_compdev -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:56.016 02:41:46 compress_compdev -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:56.016 
02:41:46 compress_compdev -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:56.016 02:41:46 compress_compdev -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:56.016 02:41:46 compress_compdev -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:56.016 02:41:46 compress_compdev -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:56.016 02:41:46 compress_compdev -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:35:56.016 02:41:46 compress_compdev -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:35:56.016 00:35:56.016 real 2m28.380s 00:35:56.016 user 6m41.494s 00:35:56.016 sys 0m22.740s 00:35:56.016 02:41:46 compress_compdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:56.016 02:41:46 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:35:56.016 ************************************ 00:35:56.016 END TEST compress_compdev 00:35:56.016 ************************************ 00:35:56.016 02:41:46 -- common/autotest_common.sh@1142 -- # return 0 00:35:56.016 02:41:46 -- spdk/autotest.sh@349 -- # run_test compress_isal /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:35:56.016 02:41:46 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:56.016 02:41:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:56.016 02:41:46 -- common/autotest_common.sh@10 -- # set +x 00:35:56.016 ************************************ 00:35:56.016 START TEST compress_isal 00:35:56.016 ************************************ 00:35:56.016 02:41:46 compress_isal -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:35:56.016 * Looking for test storage... 00:35:56.016 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:35:56.016 02:41:46 compress_isal -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:35:56.016 02:41:46 compress_isal -- nvmf/common.sh@7 -- # uname -s 00:35:56.016 02:41:46 compress_isal -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:56.016 02:41:46 compress_isal -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:56.016 02:41:46 compress_isal -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:56.016 02:41:46 compress_isal -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:56.016 02:41:46 compress_isal -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:56.016 02:41:46 compress_isal -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:56.016 02:41:46 compress_isal -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:56.016 02:41:46 compress_isal -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:56.016 02:41:46 compress_isal -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:56.016 02:41:46 compress_isal -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:56.016 02:41:46 compress_isal -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:35:56.016 02:41:46 compress_isal -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:35:56.016 02:41:46 compress_isal -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:56.016 02:41:46 compress_isal -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:56.016 02:41:46 compress_isal -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:35:56.016 02:41:46 compress_isal -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:56.016 02:41:46 compress_isal -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:35:56.016 02:41:46 compress_isal -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:56.016 02:41:46 compress_isal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:56.016 02:41:46 compress_isal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:56.016 02:41:46 compress_isal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.016 02:41:46 compress_isal -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.016 02:41:46 compress_isal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.016 02:41:46 compress_isal -- paths/export.sh@5 -- # export PATH 00:35:56.016 02:41:46 compress_isal -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.016 02:41:46 compress_isal -- nvmf/common.sh@47 -- # : 0 00:35:56.016 02:41:46 compress_isal -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:56.016 02:41:46 compress_isal -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:56.016 02:41:46 compress_isal -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:56.016 02:41:46 compress_isal -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:56.016 02:41:46 compress_isal -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:56.016 02:41:46 compress_isal -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:56.016 02:41:46 compress_isal -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:56.016 02:41:46 compress_isal -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:56.016 02:41:46 compress_isal -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:35:56.016 
02:41:46 compress_isal -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:35:56.275 02:41:46 compress_isal -- compress/compress.sh@82 -- # test_type=isal 00:35:56.275 02:41:46 compress_isal -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:35:56.275 02:41:46 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:35:56.275 02:41:46 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=2088782 00:35:56.275 02:41:46 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:56.275 02:41:46 compress_isal -- compress/compress.sh@73 -- # waitforlisten 2088782 00:35:56.275 02:41:46 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:35:56.275 02:41:46 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 2088782 ']' 00:35:56.275 02:41:46 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:56.275 02:41:46 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:56.275 02:41:46 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:56.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:56.275 02:41:46 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:56.275 02:41:46 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:35:56.275 [2024-07-11 02:41:46.501869] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:35:56.275 [2024-07-11 02:41:46.501939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2088782 ] 00:35:56.275 [2024-07-11 02:41:46.645374] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:56.534 [2024-07-11 02:41:46.700672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:56.534 [2024-07-11 02:41:46.700677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:57.102 02:41:47 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:57.102 02:41:47 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:35:57.102 02:41:47 compress_isal -- compress/compress.sh@74 -- # create_vols 00:35:57.102 02:41:47 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:57.102 02:41:47 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:36:00.393 02:41:50 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:36:00.393 02:41:50 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:36:00.393 02:41:50 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:00.393 02:41:50 compress_isal -- common/autotest_common.sh@899 -- # local i 00:36:00.393 02:41:50 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:00.393 02:41:50 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:00.393 02:41:50 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
bdev_wait_for_examine 00:36:00.662 02:41:50 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:36:00.662 [ 00:36:00.662 { 00:36:00.662 "name": "Nvme0n1", 00:36:00.662 "aliases": [ 00:36:00.662 "7b51a99a-e894-4caa-a02d-4854888fb821" 00:36:00.662 ], 00:36:00.662 "product_name": "NVMe disk", 00:36:00.662 "block_size": 512, 00:36:00.662 "num_blocks": 7814037168, 00:36:00.662 "uuid": "7b51a99a-e894-4caa-a02d-4854888fb821", 00:36:00.662 "assigned_rate_limits": { 00:36:00.662 "rw_ios_per_sec": 0, 00:36:00.662 "rw_mbytes_per_sec": 0, 00:36:00.662 "r_mbytes_per_sec": 0, 00:36:00.662 "w_mbytes_per_sec": 0 00:36:00.662 }, 00:36:00.662 "claimed": false, 00:36:00.662 "zoned": false, 00:36:00.662 "supported_io_types": { 00:36:00.662 "read": true, 00:36:00.662 "write": true, 00:36:00.662 "unmap": true, 00:36:00.662 "flush": true, 00:36:00.662 "reset": true, 00:36:00.662 "nvme_admin": true, 00:36:00.662 "nvme_io": true, 00:36:00.662 "nvme_io_md": false, 00:36:00.663 "write_zeroes": true, 00:36:00.663 "zcopy": false, 00:36:00.663 "get_zone_info": false, 00:36:00.663 "zone_management": false, 00:36:00.663 "zone_append": false, 00:36:00.663 "compare": false, 00:36:00.663 "compare_and_write": false, 00:36:00.663 "abort": true, 00:36:00.663 "seek_hole": false, 00:36:00.663 "seek_data": false, 00:36:00.663 "copy": false, 00:36:00.663 "nvme_iov_md": false 00:36:00.663 }, 00:36:00.663 "driver_specific": { 00:36:00.663 "nvme": [ 00:36:00.663 { 00:36:00.663 "pci_address": "0000:1a:00.0", 00:36:00.663 "trid": { 00:36:00.663 "trtype": "PCIe", 00:36:00.663 "traddr": "0000:1a:00.0" 00:36:00.663 }, 00:36:00.663 "ctrlr_data": { 00:36:00.663 "cntlid": 0, 00:36:00.663 "vendor_id": "0x8086", 00:36:00.663 "model_number": "INTEL SSDPE2KX040T8", 00:36:00.663 "serial_number": "BTLJ8303085V4P0DGN", 00:36:00.663 "firmware_revision": "VDV10170", 00:36:00.663 "oacs": { 00:36:00.663 "security": 0, 00:36:00.663 "format": 1, 00:36:00.663 "firmware": 1, 00:36:00.663 "ns_manage": 1 00:36:00.663 }, 00:36:00.663 "multi_ctrlr": false, 00:36:00.663 "ana_reporting": false 00:36:00.663 }, 00:36:00.663 "vs": { 00:36:00.663 "nvme_version": "1.2" 00:36:00.663 }, 00:36:00.664 "ns_data": { 00:36:00.664 "id": 1, 00:36:00.664 "can_share": false 00:36:00.664 } 00:36:00.664 } 00:36:00.664 ], 00:36:00.664 "mp_policy": "active_passive" 00:36:00.664 } 00:36:00.664 } 00:36:00.664 ] 00:36:00.664 02:41:51 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:36:00.664 02:41:51 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:36:03.284 a22ceae6-940b-4a9f-b88d-10cc000a6fbb 00:36:03.284 02:41:53 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:36:03.543 8307aad3-a843-4647-adfd-5428a5bf7c55 00:36:03.543 02:41:53 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:36:03.543 02:41:53 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:36:03.543 02:41:53 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:03.543 02:41:53 compress_isal -- common/autotest_common.sh@899 -- # local i 00:36:03.543 02:41:53 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:03.543 02:41:53 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:03.543 
02:41:53 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:03.802 02:41:54 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:36:03.802 [ 00:36:03.802 { 00:36:03.802 "name": "8307aad3-a843-4647-adfd-5428a5bf7c55", 00:36:03.802 "aliases": [ 00:36:03.802 "lvs0/lv0" 00:36:03.802 ], 00:36:03.802 "product_name": "Logical Volume", 00:36:03.802 "block_size": 512, 00:36:03.802 "num_blocks": 204800, 00:36:03.802 "uuid": "8307aad3-a843-4647-adfd-5428a5bf7c55", 00:36:03.802 "assigned_rate_limits": { 00:36:03.802 "rw_ios_per_sec": 0, 00:36:03.802 "rw_mbytes_per_sec": 0, 00:36:03.802 "r_mbytes_per_sec": 0, 00:36:03.802 "w_mbytes_per_sec": 0 00:36:03.802 }, 00:36:03.802 "claimed": false, 00:36:03.802 "zoned": false, 00:36:03.802 "supported_io_types": { 00:36:03.802 "read": true, 00:36:03.802 "write": true, 00:36:03.802 "unmap": true, 00:36:03.802 "flush": false, 00:36:03.802 "reset": true, 00:36:03.802 "nvme_admin": false, 00:36:03.802 "nvme_io": false, 00:36:03.802 "nvme_io_md": false, 00:36:03.802 "write_zeroes": true, 00:36:03.802 "zcopy": false, 00:36:03.802 "get_zone_info": false, 00:36:03.802 "zone_management": false, 00:36:03.802 "zone_append": false, 00:36:03.802 "compare": false, 00:36:03.803 "compare_and_write": false, 00:36:03.803 "abort": false, 00:36:03.803 "seek_hole": true, 00:36:03.803 "seek_data": true, 00:36:03.803 "copy": false, 00:36:03.803 "nvme_iov_md": false 00:36:03.803 }, 00:36:03.803 "driver_specific": { 00:36:03.803 "lvol": { 00:36:03.803 "lvol_store_uuid": "a22ceae6-940b-4a9f-b88d-10cc000a6fbb", 00:36:03.803 "base_bdev": "Nvme0n1", 00:36:03.803 "thin_provision": true, 00:36:03.803 "num_allocated_clusters": 0, 00:36:03.803 "snapshot": false, 00:36:03.803 "clone": false, 00:36:03.803 "esnap_clone": false 00:36:03.803 } 00:36:03.803 } 00:36:03.803 } 00:36:03.803 ] 00:36:03.803 02:41:54 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:36:03.803 02:41:54 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:36:03.803 02:41:54 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:36:04.062 [2024-07-11 02:41:54.450327] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:36:04.062 COMP_lvs0/lv0 00:36:04.062 02:41:54 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:36:04.062 02:41:54 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:36:04.062 02:41:54 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:04.062 02:41:54 compress_isal -- common/autotest_common.sh@899 -- # local i 00:36:04.062 02:41:54 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:04.062 02:41:54 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:04.062 02:41:54 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:04.321 02:41:54 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:36:04.579 [ 00:36:04.579 { 00:36:04.579 "name": "COMP_lvs0/lv0", 00:36:04.579 "aliases": [ 00:36:04.579 
"fab01d12-95af-5143-a122-d0683f099e61" 00:36:04.579 ], 00:36:04.579 "product_name": "compress", 00:36:04.579 "block_size": 512, 00:36:04.579 "num_blocks": 200704, 00:36:04.579 "uuid": "fab01d12-95af-5143-a122-d0683f099e61", 00:36:04.579 "assigned_rate_limits": { 00:36:04.579 "rw_ios_per_sec": 0, 00:36:04.579 "rw_mbytes_per_sec": 0, 00:36:04.579 "r_mbytes_per_sec": 0, 00:36:04.579 "w_mbytes_per_sec": 0 00:36:04.579 }, 00:36:04.579 "claimed": false, 00:36:04.579 "zoned": false, 00:36:04.579 "supported_io_types": { 00:36:04.579 "read": true, 00:36:04.579 "write": true, 00:36:04.579 "unmap": false, 00:36:04.579 "flush": false, 00:36:04.579 "reset": false, 00:36:04.579 "nvme_admin": false, 00:36:04.579 "nvme_io": false, 00:36:04.579 "nvme_io_md": false, 00:36:04.579 "write_zeroes": true, 00:36:04.579 "zcopy": false, 00:36:04.579 "get_zone_info": false, 00:36:04.580 "zone_management": false, 00:36:04.580 "zone_append": false, 00:36:04.580 "compare": false, 00:36:04.580 "compare_and_write": false, 00:36:04.580 "abort": false, 00:36:04.580 "seek_hole": false, 00:36:04.580 "seek_data": false, 00:36:04.580 "copy": false, 00:36:04.580 "nvme_iov_md": false 00:36:04.580 }, 00:36:04.580 "driver_specific": { 00:36:04.580 "compress": { 00:36:04.580 "name": "COMP_lvs0/lv0", 00:36:04.580 "base_bdev_name": "8307aad3-a843-4647-adfd-5428a5bf7c55" 00:36:04.580 } 00:36:04.580 } 00:36:04.580 } 00:36:04.580 ] 00:36:04.580 02:41:54 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:36:04.580 02:41:54 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:36:04.839 Running I/O for 3 seconds... 00:36:08.129 00:36:08.129 Latency(us) 00:36:08.129 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:08.130 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:36:08.130 Verification LBA range: start 0x0 length 0x3100 00:36:08.130 COMP_lvs0/lv0 : 3.02 1262.63 4.93 0.00 0.00 25227.26 386.45 26898.25 00:36:08.130 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:36:08.130 Verification LBA range: start 0x3100 length 0x3100 00:36:08.130 COMP_lvs0/lv0 : 3.02 1264.50 4.94 0.00 0.00 25163.39 182.54 27810.06 00:36:08.130 =================================================================================================================== 00:36:08.130 Total : 2527.13 9.87 0.00 0.00 25195.30 182.54 27810.06 00:36:08.130 0 00:36:08.130 02:41:58 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:36:08.130 02:41:58 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:36:08.130 02:41:58 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:36:08.698 02:41:58 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:36:08.698 02:41:58 compress_isal -- compress/compress.sh@78 -- # killprocess 2088782 00:36:08.698 02:41:58 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 2088782 ']' 00:36:08.698 02:41:58 compress_isal -- common/autotest_common.sh@952 -- # kill -0 2088782 00:36:08.698 02:41:58 compress_isal -- common/autotest_common.sh@953 -- # uname 00:36:08.698 02:41:58 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:08.698 02:41:58 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o 
comm= 2088782 00:36:08.698 02:41:58 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:08.698 02:41:58 compress_isal -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:08.698 02:41:58 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2088782' 00:36:08.698 killing process with pid 2088782 00:36:08.698 02:41:58 compress_isal -- common/autotest_common.sh@967 -- # kill 2088782 00:36:08.698 Received shutdown signal, test time was about 3.000000 seconds 00:36:08.698 00:36:08.698 Latency(us) 00:36:08.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:08.698 =================================================================================================================== 00:36:08.698 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:08.698 02:41:58 compress_isal -- common/autotest_common.sh@972 -- # wait 2088782 00:36:12.891 02:42:02 compress_isal -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:36:12.891 02:42:02 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:36:12.891 02:42:02 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=2090890 00:36:12.891 02:42:02 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:36:12.891 02:42:02 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:36:12.891 02:42:02 compress_isal -- compress/compress.sh@73 -- # waitforlisten 2090890 00:36:12.891 02:42:02 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 2090890 ']' 00:36:12.891 02:42:02 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:12.891 02:42:02 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:12.891 02:42:02 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:12.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:12.891 02:42:02 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:12.891 02:42:02 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:36:12.891 [2024-07-11 02:42:02.843846] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:36:12.891 [2024-07-11 02:42:02.843920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2090890 ] 00:36:12.891 [2024-07-11 02:42:03.018656] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:12.891 [2024-07-11 02:42:03.115124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:12.891 [2024-07-11 02:42:03.115148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:12.891 02:42:03 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:12.891 02:42:03 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:36:12.891 02:42:03 compress_isal -- compress/compress.sh@74 -- # create_vols 512 00:36:12.891 02:42:03 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:12.891 02:42:03 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:36:16.180 02:42:06 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:36:16.180 02:42:06 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:36:16.180 02:42:06 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:16.180 02:42:06 compress_isal -- common/autotest_common.sh@899 -- # local i 00:36:16.180 02:42:06 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:16.180 02:42:06 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:16.180 02:42:06 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:16.439 02:42:06 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:36:16.698 [ 00:36:16.698 { 00:36:16.698 "name": "Nvme0n1", 00:36:16.698 "aliases": [ 00:36:16.698 "a8378e7f-8981-4643-aaf8-30a30f86e60c" 00:36:16.698 ], 00:36:16.698 "product_name": "NVMe disk", 00:36:16.698 "block_size": 512, 00:36:16.698 "num_blocks": 7814037168, 00:36:16.698 "uuid": "a8378e7f-8981-4643-aaf8-30a30f86e60c", 00:36:16.698 "assigned_rate_limits": { 00:36:16.698 "rw_ios_per_sec": 0, 00:36:16.698 "rw_mbytes_per_sec": 0, 00:36:16.698 "r_mbytes_per_sec": 0, 00:36:16.698 "w_mbytes_per_sec": 0 00:36:16.698 }, 00:36:16.698 "claimed": false, 00:36:16.698 "zoned": false, 00:36:16.698 "supported_io_types": { 00:36:16.698 "read": true, 00:36:16.698 "write": true, 00:36:16.698 "unmap": true, 00:36:16.698 "flush": true, 00:36:16.698 "reset": true, 00:36:16.698 "nvme_admin": true, 00:36:16.698 "nvme_io": true, 00:36:16.698 "nvme_io_md": false, 00:36:16.698 "write_zeroes": true, 00:36:16.698 "zcopy": false, 00:36:16.698 "get_zone_info": false, 00:36:16.698 "zone_management": false, 00:36:16.699 "zone_append": false, 00:36:16.699 "compare": false, 00:36:16.699 "compare_and_write": false, 00:36:16.699 "abort": true, 00:36:16.699 "seek_hole": false, 00:36:16.699 "seek_data": false, 00:36:16.699 "copy": false, 00:36:16.699 "nvme_iov_md": false 00:36:16.699 }, 00:36:16.699 "driver_specific": { 00:36:16.699 "nvme": [ 00:36:16.699 { 00:36:16.699 "pci_address": "0000:1a:00.0", 00:36:16.699 "trid": { 00:36:16.699 "trtype": "PCIe", 00:36:16.699 "traddr": "0000:1a:00.0" 00:36:16.699 }, 00:36:16.699 
"ctrlr_data": { 00:36:16.699 "cntlid": 0, 00:36:16.699 "vendor_id": "0x8086", 00:36:16.699 "model_number": "INTEL SSDPE2KX040T8", 00:36:16.699 "serial_number": "BTLJ8303085V4P0DGN", 00:36:16.699 "firmware_revision": "VDV10170", 00:36:16.699 "oacs": { 00:36:16.699 "security": 0, 00:36:16.699 "format": 1, 00:36:16.699 "firmware": 1, 00:36:16.699 "ns_manage": 1 00:36:16.699 }, 00:36:16.699 "multi_ctrlr": false, 00:36:16.699 "ana_reporting": false 00:36:16.699 }, 00:36:16.699 "vs": { 00:36:16.699 "nvme_version": "1.2" 00:36:16.699 }, 00:36:16.699 "ns_data": { 00:36:16.699 "id": 1, 00:36:16.699 "can_share": false 00:36:16.699 } 00:36:16.699 } 00:36:16.699 ], 00:36:16.699 "mp_policy": "active_passive" 00:36:16.699 } 00:36:16.699 } 00:36:16.699 ] 00:36:16.699 02:42:06 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:36:16.699 02:42:06 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:36:18.604 a7bba17f-67a9-4bb8-91e3-a7bd0c5357d0 00:36:18.604 02:42:09 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:36:18.863 57200144-e960-4a7d-93b4-e64906a216b0 00:36:18.863 02:42:09 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:36:18.863 02:42:09 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:36:18.863 02:42:09 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:18.863 02:42:09 compress_isal -- common/autotest_common.sh@899 -- # local i 00:36:18.863 02:42:09 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:18.863 02:42:09 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:18.863 02:42:09 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:19.122 02:42:09 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:36:19.387 [ 00:36:19.387 { 00:36:19.387 "name": "57200144-e960-4a7d-93b4-e64906a216b0", 00:36:19.387 "aliases": [ 00:36:19.387 "lvs0/lv0" 00:36:19.387 ], 00:36:19.387 "product_name": "Logical Volume", 00:36:19.387 "block_size": 512, 00:36:19.387 "num_blocks": 204800, 00:36:19.387 "uuid": "57200144-e960-4a7d-93b4-e64906a216b0", 00:36:19.387 "assigned_rate_limits": { 00:36:19.387 "rw_ios_per_sec": 0, 00:36:19.387 "rw_mbytes_per_sec": 0, 00:36:19.387 "r_mbytes_per_sec": 0, 00:36:19.387 "w_mbytes_per_sec": 0 00:36:19.387 }, 00:36:19.387 "claimed": false, 00:36:19.387 "zoned": false, 00:36:19.387 "supported_io_types": { 00:36:19.387 "read": true, 00:36:19.387 "write": true, 00:36:19.387 "unmap": true, 00:36:19.387 "flush": false, 00:36:19.387 "reset": true, 00:36:19.387 "nvme_admin": false, 00:36:19.387 "nvme_io": false, 00:36:19.387 "nvme_io_md": false, 00:36:19.387 "write_zeroes": true, 00:36:19.387 "zcopy": false, 00:36:19.387 "get_zone_info": false, 00:36:19.387 "zone_management": false, 00:36:19.387 "zone_append": false, 00:36:19.387 "compare": false, 00:36:19.387 "compare_and_write": false, 00:36:19.387 "abort": false, 00:36:19.387 "seek_hole": true, 00:36:19.387 "seek_data": true, 00:36:19.387 "copy": false, 00:36:19.387 "nvme_iov_md": false 00:36:19.387 }, 00:36:19.387 "driver_specific": { 00:36:19.387 "lvol": { 00:36:19.387 "lvol_store_uuid": 
"a7bba17f-67a9-4bb8-91e3-a7bd0c5357d0", 00:36:19.387 "base_bdev": "Nvme0n1", 00:36:19.387 "thin_provision": true, 00:36:19.387 "num_allocated_clusters": 0, 00:36:19.387 "snapshot": false, 00:36:19.387 "clone": false, 00:36:19.387 "esnap_clone": false 00:36:19.387 } 00:36:19.387 } 00:36:19.387 } 00:36:19.387 ] 00:36:19.387 02:42:09 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:36:19.387 02:42:09 compress_isal -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:36:19.387 02:42:09 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:36:19.647 [2024-07-11 02:42:09.970454] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:36:19.647 COMP_lvs0/lv0 00:36:19.647 02:42:09 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:36:19.647 02:42:09 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:36:19.647 02:42:09 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:19.647 02:42:09 compress_isal -- common/autotest_common.sh@899 -- # local i 00:36:19.647 02:42:09 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:19.647 02:42:09 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:19.647 02:42:09 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:19.905 02:42:10 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:36:20.473 [ 00:36:20.473 { 00:36:20.473 "name": "COMP_lvs0/lv0", 00:36:20.473 "aliases": [ 00:36:20.473 "b766c538-eee0-51e7-ad8a-734bac874e37" 00:36:20.473 ], 00:36:20.473 "product_name": "compress", 00:36:20.473 "block_size": 512, 00:36:20.473 "num_blocks": 200704, 00:36:20.473 "uuid": "b766c538-eee0-51e7-ad8a-734bac874e37", 00:36:20.473 "assigned_rate_limits": { 00:36:20.473 "rw_ios_per_sec": 0, 00:36:20.473 "rw_mbytes_per_sec": 0, 00:36:20.473 "r_mbytes_per_sec": 0, 00:36:20.473 "w_mbytes_per_sec": 0 00:36:20.473 }, 00:36:20.473 "claimed": false, 00:36:20.473 "zoned": false, 00:36:20.473 "supported_io_types": { 00:36:20.473 "read": true, 00:36:20.473 "write": true, 00:36:20.473 "unmap": false, 00:36:20.473 "flush": false, 00:36:20.473 "reset": false, 00:36:20.473 "nvme_admin": false, 00:36:20.473 "nvme_io": false, 00:36:20.473 "nvme_io_md": false, 00:36:20.473 "write_zeroes": true, 00:36:20.473 "zcopy": false, 00:36:20.473 "get_zone_info": false, 00:36:20.473 "zone_management": false, 00:36:20.473 "zone_append": false, 00:36:20.473 "compare": false, 00:36:20.473 "compare_and_write": false, 00:36:20.473 "abort": false, 00:36:20.473 "seek_hole": false, 00:36:20.473 "seek_data": false, 00:36:20.473 "copy": false, 00:36:20.473 "nvme_iov_md": false 00:36:20.473 }, 00:36:20.473 "driver_specific": { 00:36:20.473 "compress": { 00:36:20.473 "name": "COMP_lvs0/lv0", 00:36:20.473 "base_bdev_name": "57200144-e960-4a7d-93b4-e64906a216b0" 00:36:20.473 } 00:36:20.473 } 00:36:20.473 } 00:36:20.473 ] 00:36:20.473 02:42:10 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:36:20.473 02:42:10 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:36:20.473 Running I/O for 3 seconds... 
00:36:23.760 00:36:23.760 Latency(us) 00:36:23.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:23.760 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:36:23.760 Verification LBA range: start 0x0 length 0x3100 00:36:23.760 COMP_lvs0/lv0 : 3.02 1257.72 4.91 0.00 0.00 25307.23 187.88 27468.13 00:36:23.760 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:36:23.760 Verification LBA range: start 0x3100 length 0x3100 00:36:23.760 COMP_lvs0/lv0 : 3.02 1260.27 4.92 0.00 0.00 25183.18 400.70 28379.94 00:36:23.760 =================================================================================================================== 00:36:23.760 Total : 2517.99 9.84 0.00 0.00 25245.10 187.88 28379.94 00:36:23.760 0 00:36:23.760 02:42:13 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:36:23.760 02:42:13 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:36:24.019 02:42:14 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:36:24.278 02:42:14 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:36:24.278 02:42:14 compress_isal -- compress/compress.sh@78 -- # killprocess 2090890 00:36:24.278 02:42:14 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 2090890 ']' 00:36:24.278 02:42:14 compress_isal -- common/autotest_common.sh@952 -- # kill -0 2090890 00:36:24.278 02:42:14 compress_isal -- common/autotest_common.sh@953 -- # uname 00:36:24.278 02:42:14 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:24.278 02:42:14 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2090890 00:36:24.278 02:42:14 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:24.278 02:42:14 compress_isal -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:24.278 02:42:14 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2090890' 00:36:24.278 killing process with pid 2090890 00:36:24.278 02:42:14 compress_isal -- common/autotest_common.sh@967 -- # kill 2090890 00:36:24.278 Received shutdown signal, test time was about 3.000000 seconds 00:36:24.278 00:36:24.278 Latency(us) 00:36:24.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:24.278 =================================================================================================================== 00:36:24.278 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:24.279 02:42:14 compress_isal -- common/autotest_common.sh@972 -- # wait 2090890 00:36:28.467 02:42:18 compress_isal -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:36:28.467 02:42:18 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:36:28.467 02:42:18 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=2093244 00:36:28.467 02:42:18 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:36:28.467 02:42:18 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:36:28.467 02:42:18 compress_isal -- compress/compress.sh@73 -- # waitforlisten 2093244 00:36:28.467 02:42:18 compress_isal -- common/autotest_common.sh@829 -- # 
'[' -z 2093244 ']' 00:36:28.467 02:42:18 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:28.467 02:42:18 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:28.467 02:42:18 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:28.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:28.467 02:42:18 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:28.467 02:42:18 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:36:28.467 [2024-07-11 02:42:18.408458] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:36:28.467 [2024-07-11 02:42:18.408526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2093244 ] 00:36:28.467 [2024-07-11 02:42:18.555152] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:28.467 [2024-07-11 02:42:18.614224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:28.467 [2024-07-11 02:42:18.614230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:29.034 02:42:19 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:29.034 02:42:19 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:36:29.034 02:42:19 compress_isal -- compress/compress.sh@74 -- # create_vols 4096 00:36:29.034 02:42:19 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:29.034 02:42:19 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:36:32.323 02:42:22 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:36:32.323 02:42:22 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:36:32.323 02:42:22 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:32.323 02:42:22 compress_isal -- common/autotest_common.sh@899 -- # local i 00:36:32.323 02:42:22 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:32.323 02:42:22 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:32.323 02:42:22 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:32.582 02:42:22 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:36:32.582 [ 00:36:32.582 { 00:36:32.582 "name": "Nvme0n1", 00:36:32.582 "aliases": [ 00:36:32.582 "2c53d8d0-9cc3-46c8-996d-ead0a0d953a3" 00:36:32.582 ], 00:36:32.582 "product_name": "NVMe disk", 00:36:32.582 "block_size": 512, 00:36:32.582 "num_blocks": 7814037168, 00:36:32.582 "uuid": "2c53d8d0-9cc3-46c8-996d-ead0a0d953a3", 00:36:32.582 "assigned_rate_limits": { 00:36:32.582 "rw_ios_per_sec": 0, 00:36:32.582 "rw_mbytes_per_sec": 0, 00:36:32.582 "r_mbytes_per_sec": 0, 00:36:32.582 "w_mbytes_per_sec": 0 00:36:32.582 }, 00:36:32.582 "claimed": false, 00:36:32.582 "zoned": false, 00:36:32.582 "supported_io_types": { 00:36:32.582 "read": true, 00:36:32.582 "write": true, 00:36:32.582 "unmap": true, 00:36:32.582 "flush": true, 
00:36:32.582 "reset": true, 00:36:32.582 "nvme_admin": true, 00:36:32.582 "nvme_io": true, 00:36:32.582 "nvme_io_md": false, 00:36:32.582 "write_zeroes": true, 00:36:32.582 "zcopy": false, 00:36:32.582 "get_zone_info": false, 00:36:32.582 "zone_management": false, 00:36:32.582 "zone_append": false, 00:36:32.582 "compare": false, 00:36:32.582 "compare_and_write": false, 00:36:32.582 "abort": true, 00:36:32.582 "seek_hole": false, 00:36:32.582 "seek_data": false, 00:36:32.582 "copy": false, 00:36:32.582 "nvme_iov_md": false 00:36:32.582 }, 00:36:32.582 "driver_specific": { 00:36:32.582 "nvme": [ 00:36:32.582 { 00:36:32.582 "pci_address": "0000:1a:00.0", 00:36:32.582 "trid": { 00:36:32.582 "trtype": "PCIe", 00:36:32.582 "traddr": "0000:1a:00.0" 00:36:32.582 }, 00:36:32.582 "ctrlr_data": { 00:36:32.582 "cntlid": 0, 00:36:32.582 "vendor_id": "0x8086", 00:36:32.582 "model_number": "INTEL SSDPE2KX040T8", 00:36:32.582 "serial_number": "BTLJ8303085V4P0DGN", 00:36:32.582 "firmware_revision": "VDV10170", 00:36:32.582 "oacs": { 00:36:32.582 "security": 0, 00:36:32.582 "format": 1, 00:36:32.582 "firmware": 1, 00:36:32.582 "ns_manage": 1 00:36:32.583 }, 00:36:32.583 "multi_ctrlr": false, 00:36:32.583 "ana_reporting": false 00:36:32.583 }, 00:36:32.583 "vs": { 00:36:32.583 "nvme_version": "1.2" 00:36:32.583 }, 00:36:32.583 "ns_data": { 00:36:32.583 "id": 1, 00:36:32.583 "can_share": false 00:36:32.583 } 00:36:32.583 } 00:36:32.583 ], 00:36:32.583 "mp_policy": "active_passive" 00:36:32.583 } 00:36:32.583 } 00:36:32.583 ] 00:36:32.842 02:42:23 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:36:32.842 02:42:23 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:36:34.747 38b00276-42bf-4385-a859-b7fa7c314751 00:36:34.747 02:42:25 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:36:35.007 3691c615-ee90-4040-9e37-078238312072 00:36:35.007 02:42:25 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:36:35.007 02:42:25 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:36:35.007 02:42:25 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:35.007 02:42:25 compress_isal -- common/autotest_common.sh@899 -- # local i 00:36:35.007 02:42:25 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:35.007 02:42:25 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:35.007 02:42:25 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:35.265 02:42:25 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:36:35.525 [ 00:36:35.525 { 00:36:35.525 "name": "3691c615-ee90-4040-9e37-078238312072", 00:36:35.525 "aliases": [ 00:36:35.525 "lvs0/lv0" 00:36:35.525 ], 00:36:35.525 "product_name": "Logical Volume", 00:36:35.525 "block_size": 512, 00:36:35.525 "num_blocks": 204800, 00:36:35.525 "uuid": "3691c615-ee90-4040-9e37-078238312072", 00:36:35.525 "assigned_rate_limits": { 00:36:35.525 "rw_ios_per_sec": 0, 00:36:35.525 "rw_mbytes_per_sec": 0, 00:36:35.525 "r_mbytes_per_sec": 0, 00:36:35.525 "w_mbytes_per_sec": 0 00:36:35.525 }, 00:36:35.525 "claimed": false, 00:36:35.525 "zoned": false, 00:36:35.525 
"supported_io_types": { 00:36:35.525 "read": true, 00:36:35.525 "write": true, 00:36:35.525 "unmap": true, 00:36:35.525 "flush": false, 00:36:35.525 "reset": true, 00:36:35.525 "nvme_admin": false, 00:36:35.525 "nvme_io": false, 00:36:35.525 "nvme_io_md": false, 00:36:35.525 "write_zeroes": true, 00:36:35.525 "zcopy": false, 00:36:35.525 "get_zone_info": false, 00:36:35.525 "zone_management": false, 00:36:35.525 "zone_append": false, 00:36:35.525 "compare": false, 00:36:35.525 "compare_and_write": false, 00:36:35.525 "abort": false, 00:36:35.525 "seek_hole": true, 00:36:35.525 "seek_data": true, 00:36:35.525 "copy": false, 00:36:35.525 "nvme_iov_md": false 00:36:35.525 }, 00:36:35.525 "driver_specific": { 00:36:35.525 "lvol": { 00:36:35.525 "lvol_store_uuid": "38b00276-42bf-4385-a859-b7fa7c314751", 00:36:35.525 "base_bdev": "Nvme0n1", 00:36:35.525 "thin_provision": true, 00:36:35.525 "num_allocated_clusters": 0, 00:36:35.525 "snapshot": false, 00:36:35.525 "clone": false, 00:36:35.525 "esnap_clone": false 00:36:35.525 } 00:36:35.525 } 00:36:35.525 } 00:36:35.525 ] 00:36:35.525 02:42:25 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:36:35.525 02:42:25 compress_isal -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:36:35.525 02:42:25 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:36:35.784 [2024-07-11 02:42:26.103499] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:36:35.784 COMP_lvs0/lv0 00:36:35.784 02:42:26 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:36:35.784 02:42:26 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:36:35.784 02:42:26 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:35.784 02:42:26 compress_isal -- common/autotest_common.sh@899 -- # local i 00:36:35.784 02:42:26 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:35.784 02:42:26 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:35.784 02:42:26 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:36.043 02:42:26 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:36:36.303 [ 00:36:36.303 { 00:36:36.303 "name": "COMP_lvs0/lv0", 00:36:36.303 "aliases": [ 00:36:36.303 "3a44bcd1-0597-5fc3-b23e-11a30a016e05" 00:36:36.303 ], 00:36:36.303 "product_name": "compress", 00:36:36.303 "block_size": 4096, 00:36:36.303 "num_blocks": 25088, 00:36:36.303 "uuid": "3a44bcd1-0597-5fc3-b23e-11a30a016e05", 00:36:36.303 "assigned_rate_limits": { 00:36:36.303 "rw_ios_per_sec": 0, 00:36:36.303 "rw_mbytes_per_sec": 0, 00:36:36.303 "r_mbytes_per_sec": 0, 00:36:36.303 "w_mbytes_per_sec": 0 00:36:36.303 }, 00:36:36.303 "claimed": false, 00:36:36.303 "zoned": false, 00:36:36.303 "supported_io_types": { 00:36:36.303 "read": true, 00:36:36.303 "write": true, 00:36:36.303 "unmap": false, 00:36:36.303 "flush": false, 00:36:36.303 "reset": false, 00:36:36.303 "nvme_admin": false, 00:36:36.303 "nvme_io": false, 00:36:36.303 "nvme_io_md": false, 00:36:36.303 "write_zeroes": true, 00:36:36.303 "zcopy": false, 00:36:36.303 "get_zone_info": false, 00:36:36.303 "zone_management": false, 00:36:36.303 "zone_append": false, 00:36:36.303 
"compare": false, 00:36:36.303 "compare_and_write": false, 00:36:36.303 "abort": false, 00:36:36.303 "seek_hole": false, 00:36:36.303 "seek_data": false, 00:36:36.304 "copy": false, 00:36:36.304 "nvme_iov_md": false 00:36:36.304 }, 00:36:36.304 "driver_specific": { 00:36:36.304 "compress": { 00:36:36.304 "name": "COMP_lvs0/lv0", 00:36:36.304 "base_bdev_name": "3691c615-ee90-4040-9e37-078238312072" 00:36:36.304 } 00:36:36.304 } 00:36:36.304 } 00:36:36.304 ] 00:36:36.304 02:42:26 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:36:36.304 02:42:26 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:36:36.304 Running I/O for 3 seconds... 00:36:39.677 00:36:39.677 Latency(us) 00:36:39.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:39.677 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:36:39.677 Verification LBA range: start 0x0 length 0x3100 00:36:39.677 COMP_lvs0/lv0 : 3.02 1268.28 4.95 0.00 0.00 25087.86 183.43 28038.01 00:36:39.677 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:36:39.677 Verification LBA range: start 0x3100 length 0x3100 00:36:39.677 COMP_lvs0/lv0 : 3.01 1267.86 4.95 0.00 0.00 25094.67 388.23 27012.23 00:36:39.677 =================================================================================================================== 00:36:39.677 Total : 2536.14 9.91 0.00 0.00 25091.26 183.43 28038.01 00:36:39.677 0 00:36:39.677 02:42:29 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:36:39.677 02:42:29 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:36:39.677 02:42:29 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:36:39.937 02:42:30 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:36:39.937 02:42:30 compress_isal -- compress/compress.sh@78 -- # killprocess 2093244 00:36:39.937 02:42:30 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 2093244 ']' 00:36:39.937 02:42:30 compress_isal -- common/autotest_common.sh@952 -- # kill -0 2093244 00:36:39.937 02:42:30 compress_isal -- common/autotest_common.sh@953 -- # uname 00:36:39.937 02:42:30 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:39.937 02:42:30 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2093244 00:36:39.937 02:42:30 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:39.937 02:42:30 compress_isal -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:39.937 02:42:30 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2093244' 00:36:39.937 killing process with pid 2093244 00:36:39.937 02:42:30 compress_isal -- common/autotest_common.sh@967 -- # kill 2093244 00:36:39.937 Received shutdown signal, test time was about 3.000000 seconds 00:36:39.937 00:36:39.937 Latency(us) 00:36:39.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:39.937 =================================================================================================================== 00:36:39.937 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:39.937 02:42:30 compress_isal -- common/autotest_common.sh@972 -- # wait 2093244 00:36:44.132 02:42:34 
compress_isal -- compress/compress.sh@89 -- # run_bdevio 00:36:44.132 02:42:34 compress_isal -- compress/compress.sh@50 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:36:44.132 02:42:34 compress_isal -- compress/compress.sh@55 -- # bdevio_pid=2095344 00:36:44.132 02:42:34 compress_isal -- compress/compress.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w 00:36:44.132 02:42:34 compress_isal -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:36:44.132 02:42:34 compress_isal -- compress/compress.sh@57 -- # waitforlisten 2095344 00:36:44.132 02:42:34 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 2095344 ']' 00:36:44.132 02:42:34 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:44.132 02:42:34 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:44.132 02:42:34 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:44.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:44.132 02:42:34 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:44.132 02:42:34 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:36:44.132 [2024-07-11 02:42:34.245051] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:36:44.132 [2024-07-11 02:42:34.245191] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2095344 ] 00:36:44.132 [2024-07-11 02:42:34.460131] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:44.132 [2024-07-11 02:42:34.518748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:44.132 [2024-07-11 02:42:34.518850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:44.132 [2024-07-11 02:42:34.518851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:45.069 02:42:35 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:45.069 02:42:35 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:36:45.069 02:42:35 compress_isal -- compress/compress.sh@58 -- # create_vols 00:36:45.069 02:42:35 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:45.069 02:42:35 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:36:48.360 02:42:38 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:36:48.360 02:42:38 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:36:48.360 02:42:38 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:48.360 02:42:38 compress_isal -- common/autotest_common.sh@899 -- # local i 00:36:48.360 02:42:38 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:48.360 02:42:38 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:48.360 02:42:38 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:48.360 02:42:38 compress_isal -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:36:48.620 [ 00:36:48.620 { 00:36:48.620 "name": "Nvme0n1", 00:36:48.620 "aliases": [ 00:36:48.620 "f43ff416-c189-404c-95b6-8e270d2d84bf" 00:36:48.620 ], 00:36:48.620 "product_name": "NVMe disk", 00:36:48.620 "block_size": 512, 00:36:48.620 "num_blocks": 7814037168, 00:36:48.620 "uuid": "f43ff416-c189-404c-95b6-8e270d2d84bf", 00:36:48.620 "assigned_rate_limits": { 00:36:48.620 "rw_ios_per_sec": 0, 00:36:48.620 "rw_mbytes_per_sec": 0, 00:36:48.620 "r_mbytes_per_sec": 0, 00:36:48.620 "w_mbytes_per_sec": 0 00:36:48.620 }, 00:36:48.620 "claimed": false, 00:36:48.620 "zoned": false, 00:36:48.620 "supported_io_types": { 00:36:48.620 "read": true, 00:36:48.620 "write": true, 00:36:48.620 "unmap": true, 00:36:48.620 "flush": true, 00:36:48.620 "reset": true, 00:36:48.620 "nvme_admin": true, 00:36:48.620 "nvme_io": true, 00:36:48.620 "nvme_io_md": false, 00:36:48.620 "write_zeroes": true, 00:36:48.620 "zcopy": false, 00:36:48.620 "get_zone_info": false, 00:36:48.620 "zone_management": false, 00:36:48.620 "zone_append": false, 00:36:48.620 "compare": false, 00:36:48.620 "compare_and_write": false, 00:36:48.620 "abort": true, 00:36:48.620 "seek_hole": false, 00:36:48.620 "seek_data": false, 00:36:48.620 "copy": false, 00:36:48.620 "nvme_iov_md": false 00:36:48.620 }, 00:36:48.620 "driver_specific": { 00:36:48.620 "nvme": [ 00:36:48.620 { 00:36:48.620 "pci_address": "0000:1a:00.0", 00:36:48.620 "trid": { 00:36:48.620 "trtype": "PCIe", 00:36:48.620 "traddr": "0000:1a:00.0" 00:36:48.620 }, 00:36:48.620 "ctrlr_data": { 00:36:48.620 "cntlid": 0, 00:36:48.620 "vendor_id": "0x8086", 00:36:48.620 "model_number": "INTEL SSDPE2KX040T8", 00:36:48.620 "serial_number": "BTLJ8303085V4P0DGN", 00:36:48.620 "firmware_revision": "VDV10170", 00:36:48.620 "oacs": { 00:36:48.620 "security": 0, 00:36:48.620 "format": 1, 00:36:48.620 "firmware": 1, 00:36:48.620 "ns_manage": 1 00:36:48.620 }, 00:36:48.620 "multi_ctrlr": false, 00:36:48.620 "ana_reporting": false 00:36:48.620 }, 00:36:48.620 "vs": { 00:36:48.620 "nvme_version": "1.2" 00:36:48.620 }, 00:36:48.620 "ns_data": { 00:36:48.620 "id": 1, 00:36:48.620 "can_share": false 00:36:48.620 } 00:36:48.620 } 00:36:48.620 ], 00:36:48.620 "mp_policy": "active_passive" 00:36:48.620 } 00:36:48.620 } 00:36:48.620 ] 00:36:48.620 02:42:38 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:36:48.620 02:42:38 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:36:50.524 c4ebfe1e-3c5b-482a-9f6b-5fd67f6d9f2e 00:36:50.524 02:42:40 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:36:50.783 3e3d210d-655f-417e-bf25-c362e5ea1177 00:36:50.783 02:42:41 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:36:50.783 02:42:41 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:36:50.783 02:42:41 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:50.783 02:42:41 compress_isal -- common/autotest_common.sh@899 -- # local i 00:36:50.783 02:42:41 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:50.783 02:42:41 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:50.783 02:42:41 compress_isal -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:51.042 02:42:41 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:36:51.301 [ 00:36:51.301 { 00:36:51.301 "name": "3e3d210d-655f-417e-bf25-c362e5ea1177", 00:36:51.301 "aliases": [ 00:36:51.301 "lvs0/lv0" 00:36:51.301 ], 00:36:51.301 "product_name": "Logical Volume", 00:36:51.301 "block_size": 512, 00:36:51.301 "num_blocks": 204800, 00:36:51.301 "uuid": "3e3d210d-655f-417e-bf25-c362e5ea1177", 00:36:51.301 "assigned_rate_limits": { 00:36:51.301 "rw_ios_per_sec": 0, 00:36:51.301 "rw_mbytes_per_sec": 0, 00:36:51.301 "r_mbytes_per_sec": 0, 00:36:51.301 "w_mbytes_per_sec": 0 00:36:51.301 }, 00:36:51.301 "claimed": false, 00:36:51.301 "zoned": false, 00:36:51.301 "supported_io_types": { 00:36:51.301 "read": true, 00:36:51.301 "write": true, 00:36:51.301 "unmap": true, 00:36:51.301 "flush": false, 00:36:51.301 "reset": true, 00:36:51.301 "nvme_admin": false, 00:36:51.301 "nvme_io": false, 00:36:51.301 "nvme_io_md": false, 00:36:51.301 "write_zeroes": true, 00:36:51.301 "zcopy": false, 00:36:51.301 "get_zone_info": false, 00:36:51.301 "zone_management": false, 00:36:51.301 "zone_append": false, 00:36:51.301 "compare": false, 00:36:51.301 "compare_and_write": false, 00:36:51.301 "abort": false, 00:36:51.301 "seek_hole": true, 00:36:51.301 "seek_data": true, 00:36:51.301 "copy": false, 00:36:51.301 "nvme_iov_md": false 00:36:51.301 }, 00:36:51.301 "driver_specific": { 00:36:51.301 "lvol": { 00:36:51.301 "lvol_store_uuid": "c4ebfe1e-3c5b-482a-9f6b-5fd67f6d9f2e", 00:36:51.301 "base_bdev": "Nvme0n1", 00:36:51.301 "thin_provision": true, 00:36:51.301 "num_allocated_clusters": 0, 00:36:51.301 "snapshot": false, 00:36:51.301 "clone": false, 00:36:51.301 "esnap_clone": false 00:36:51.301 } 00:36:51.301 } 00:36:51.301 } 00:36:51.301 ] 00:36:51.301 02:42:41 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:36:51.301 02:42:41 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:36:51.301 02:42:41 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:36:51.560 [2024-07-11 02:42:41.760514] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:36:51.560 COMP_lvs0/lv0 00:36:51.560 02:42:41 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:36:51.560 02:42:41 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:36:51.560 02:42:41 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:51.560 02:42:41 compress_isal -- common/autotest_common.sh@899 -- # local i 00:36:51.560 02:42:41 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:51.560 02:42:41 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:51.560 02:42:41 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:51.820 02:42:42 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:36:52.080 [ 00:36:52.080 { 00:36:52.080 "name": "COMP_lvs0/lv0", 00:36:52.080 "aliases": [ 00:36:52.080 "f227977c-cf80-57b4-bc7e-97b8939554cc" 00:36:52.080 ], 00:36:52.080 "product_name": "compress", 
00:36:52.080 "block_size": 512, 00:36:52.080 "num_blocks": 200704, 00:36:52.080 "uuid": "f227977c-cf80-57b4-bc7e-97b8939554cc", 00:36:52.080 "assigned_rate_limits": { 00:36:52.080 "rw_ios_per_sec": 0, 00:36:52.080 "rw_mbytes_per_sec": 0, 00:36:52.080 "r_mbytes_per_sec": 0, 00:36:52.080 "w_mbytes_per_sec": 0 00:36:52.080 }, 00:36:52.080 "claimed": false, 00:36:52.080 "zoned": false, 00:36:52.080 "supported_io_types": { 00:36:52.080 "read": true, 00:36:52.080 "write": true, 00:36:52.080 "unmap": false, 00:36:52.080 "flush": false, 00:36:52.080 "reset": false, 00:36:52.080 "nvme_admin": false, 00:36:52.080 "nvme_io": false, 00:36:52.080 "nvme_io_md": false, 00:36:52.080 "write_zeroes": true, 00:36:52.080 "zcopy": false, 00:36:52.080 "get_zone_info": false, 00:36:52.080 "zone_management": false, 00:36:52.080 "zone_append": false, 00:36:52.080 "compare": false, 00:36:52.080 "compare_and_write": false, 00:36:52.080 "abort": false, 00:36:52.080 "seek_hole": false, 00:36:52.080 "seek_data": false, 00:36:52.080 "copy": false, 00:36:52.080 "nvme_iov_md": false 00:36:52.080 }, 00:36:52.080 "driver_specific": { 00:36:52.080 "compress": { 00:36:52.080 "name": "COMP_lvs0/lv0", 00:36:52.080 "base_bdev_name": "3e3d210d-655f-417e-bf25-c362e5ea1177" 00:36:52.080 } 00:36:52.080 } 00:36:52.080 } 00:36:52.080 ] 00:36:52.080 02:42:42 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:36:52.080 02:42:42 compress_isal -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:36:52.080 I/O targets: 00:36:52.080 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:36:52.080 00:36:52.080 00:36:52.080 CUnit - A unit testing framework for C - Version 2.1-3 00:36:52.080 http://cunit.sourceforge.net/ 00:36:52.080 00:36:52.080 00:36:52.080 Suite: bdevio tests on: COMP_lvs0/lv0 00:36:52.080 Test: blockdev write read block ...passed 00:36:52.080 Test: blockdev write zeroes read block ...passed 00:36:52.080 Test: blockdev write zeroes read no split ...passed 00:36:52.080 Test: blockdev write zeroes read split ...passed 00:36:52.339 Test: blockdev write zeroes read split partial ...passed 00:36:52.339 Test: blockdev reset ...[2024-07-11 02:42:42.562156] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:36:52.339 passed 00:36:52.339 Test: blockdev write read 8 blocks ...passed 00:36:52.339 Test: blockdev write read size > 128k ...passed 00:36:52.339 Test: blockdev write read invalid size ...passed 00:36:52.339 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:36:52.339 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:36:52.339 Test: blockdev write read max offset ...passed 00:36:52.339 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:36:52.339 Test: blockdev writev readv 8 blocks ...passed 00:36:52.339 Test: blockdev writev readv 30 x 1block ...passed 00:36:52.339 Test: blockdev writev readv block ...passed 00:36:52.339 Test: blockdev writev readv size > 128k ...passed 00:36:52.339 Test: blockdev writev readv size > 128k in two iovs ...passed 00:36:52.339 Test: blockdev comparev and writev ...passed 00:36:52.339 Test: blockdev nvme passthru rw ...passed 00:36:52.339 Test: blockdev nvme passthru vendor specific ...passed 00:36:52.339 Test: blockdev nvme admin passthru ...passed 00:36:52.339 Test: blockdev copy ...passed 00:36:52.339 00:36:52.339 Run Summary: Type Total Ran Passed Failed Inactive 00:36:52.339 suites 1 1 n/a 0 0 00:36:52.339 
tests 23 23 23 0 0 00:36:52.339 asserts 130 130 130 0 n/a 00:36:52.339 00:36:52.339 Elapsed time = 0.402 seconds 00:36:52.339 0 00:36:52.339 02:42:42 compress_isal -- compress/compress.sh@60 -- # destroy_vols 00:36:52.339 02:42:42 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:36:52.598 02:42:42 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:36:52.857 02:42:43 compress_isal -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:36:52.857 02:42:43 compress_isal -- compress/compress.sh@62 -- # killprocess 2095344 00:36:52.857 02:42:43 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 2095344 ']' 00:36:52.857 02:42:43 compress_isal -- common/autotest_common.sh@952 -- # kill -0 2095344 00:36:52.857 02:42:43 compress_isal -- common/autotest_common.sh@953 -- # uname 00:36:52.857 02:42:43 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:52.857 02:42:43 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2095344 00:36:52.857 02:42:43 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:52.858 02:42:43 compress_isal -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:52.858 02:42:43 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2095344' 00:36:52.858 killing process with pid 2095344 00:36:52.858 02:42:43 compress_isal -- common/autotest_common.sh@967 -- # kill 2095344 00:36:52.858 02:42:43 compress_isal -- common/autotest_common.sh@972 -- # wait 2095344 00:36:57.043 02:42:46 compress_isal -- compress/compress.sh@91 -- # '[' 1 -eq 1 ']' 00:36:57.043 02:42:46 compress_isal -- compress/compress.sh@92 -- # run_bdevperf 64 16384 30 00:36:57.043 02:42:46 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:36:57.043 02:42:46 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=2096961 00:36:57.043 02:42:46 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:36:57.043 02:42:46 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 64 -o 16384 -w verify -t 30 -C -m 0x6 00:36:57.043 02:42:46 compress_isal -- compress/compress.sh@73 -- # waitforlisten 2096961 00:36:57.043 02:42:46 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 2096961 ']' 00:36:57.043 02:42:46 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:57.043 02:42:46 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:57.043 02:42:46 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:57.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:57.043 02:42:46 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:57.043 02:42:46 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:36:57.043 [2024-07-11 02:42:47.033732] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
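For readers following the trace: the run_bdevperf 64 16384 30 wrapper above boils down to launching SPDK's bdevperf example and then driving it over RPC. A minimal sketch of the equivalent manual invocation, condensed from the xtrace lines (paths assume this Jenkins workspace; -C is passed through exactly as the harness does):

    # Start bdevperf waiting for RPC (-z): queue depth 64, 16 KiB I/Os (-o 16384),
    # verify workload, 30 s runtime, reactors pinned to cores 1-2 (-m 0x6).
    cd /var/jenkins/workspace/crypto-phy-autotest/spdk
    ./build/examples/bdevperf -z -q 64 -o 16384 -w verify -t 30 -C -m 0x6 &
    bdevperf_pid=$!
    # Once the vols exist, the run itself is kicked off over the RPC socket:
    ./examples/bdev/bdevperf/bdevperf.py perform_tests

The Latency table further down reports each reactor core's job as a separate row, which is why the results are split across core masks 0x2 and 0x4.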
00:36:57.043 [2024-07-11 02:42:47.033804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2096961 ] 00:36:57.043 [2024-07-11 02:42:47.161445] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:57.043 [2024-07-11 02:42:47.229718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:57.043 [2024-07-11 02:42:47.229726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:57.611 02:42:47 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:57.611 02:42:47 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:36:57.611 02:42:47 compress_isal -- compress/compress.sh@74 -- # create_vols 00:36:57.611 02:42:47 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:36:57.611 02:42:47 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:00.900 02:42:51 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:37:00.900 02:42:51 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:37:00.900 02:42:51 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:37:00.900 02:42:51 compress_isal -- common/autotest_common.sh@899 -- # local i 00:37:00.900 02:42:51 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:37:00.900 02:42:51 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:37:00.900 02:42:51 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:00.900 02:42:51 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:37:01.159 [ 00:37:01.159 { 00:37:01.159 "name": "Nvme0n1", 00:37:01.159 "aliases": [ 00:37:01.159 "41176678-d6c5-4670-ae9b-c569b2672e11" 00:37:01.159 ], 00:37:01.159 "product_name": "NVMe disk", 00:37:01.159 "block_size": 512, 00:37:01.159 "num_blocks": 7814037168, 00:37:01.159 "uuid": "41176678-d6c5-4670-ae9b-c569b2672e11", 00:37:01.159 "assigned_rate_limits": { 00:37:01.159 "rw_ios_per_sec": 0, 00:37:01.159 "rw_mbytes_per_sec": 0, 00:37:01.159 "r_mbytes_per_sec": 0, 00:37:01.159 "w_mbytes_per_sec": 0 00:37:01.159 }, 00:37:01.159 "claimed": false, 00:37:01.159 "zoned": false, 00:37:01.159 "supported_io_types": { 00:37:01.159 "read": true, 00:37:01.159 "write": true, 00:37:01.159 "unmap": true, 00:37:01.159 "flush": true, 00:37:01.159 "reset": true, 00:37:01.159 "nvme_admin": true, 00:37:01.159 "nvme_io": true, 00:37:01.159 "nvme_io_md": false, 00:37:01.159 "write_zeroes": true, 00:37:01.159 "zcopy": false, 00:37:01.159 "get_zone_info": false, 00:37:01.159 "zone_management": false, 00:37:01.159 "zone_append": false, 00:37:01.159 "compare": false, 00:37:01.159 "compare_and_write": false, 00:37:01.159 "abort": true, 00:37:01.159 "seek_hole": false, 00:37:01.159 "seek_data": false, 00:37:01.159 "copy": false, 00:37:01.159 "nvme_iov_md": false 00:37:01.159 }, 00:37:01.159 "driver_specific": { 00:37:01.159 "nvme": [ 00:37:01.159 { 00:37:01.159 "pci_address": "0000:1a:00.0", 00:37:01.159 "trid": { 00:37:01.159 "trtype": "PCIe", 00:37:01.159 "traddr": "0000:1a:00.0" 00:37:01.159 }, 00:37:01.159 "ctrlr_data": 
{ 00:37:01.159 "cntlid": 0, 00:37:01.159 "vendor_id": "0x8086", 00:37:01.159 "model_number": "INTEL SSDPE2KX040T8", 00:37:01.159 "serial_number": "BTLJ8303085V4P0DGN", 00:37:01.159 "firmware_revision": "VDV10170", 00:37:01.159 "oacs": { 00:37:01.159 "security": 0, 00:37:01.159 "format": 1, 00:37:01.159 "firmware": 1, 00:37:01.159 "ns_manage": 1 00:37:01.159 }, 00:37:01.159 "multi_ctrlr": false, 00:37:01.159 "ana_reporting": false 00:37:01.159 }, 00:37:01.159 "vs": { 00:37:01.159 "nvme_version": "1.2" 00:37:01.159 }, 00:37:01.159 "ns_data": { 00:37:01.159 "id": 1, 00:37:01.159 "can_share": false 00:37:01.159 } 00:37:01.159 } 00:37:01.159 ], 00:37:01.159 "mp_policy": "active_passive" 00:37:01.159 } 00:37:01.159 } 00:37:01.159 ] 00:37:01.159 02:42:51 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:37:01.159 02:42:51 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:37:03.065 787832d8-fdc6-42a8-8924-60b75d26d32c 00:37:03.065 02:42:53 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:37:03.323 9be20635-9aab-464a-8be1-ace6a0cf4098 00:37:03.323 02:42:53 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:37:03.323 02:42:53 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:37:03.323 02:42:53 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:37:03.323 02:42:53 compress_isal -- common/autotest_common.sh@899 -- # local i 00:37:03.323 02:42:53 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:37:03.323 02:42:53 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:37:03.323 02:42:53 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:03.581 02:42:53 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:37:03.841 [ 00:37:03.841 { 00:37:03.841 "name": "9be20635-9aab-464a-8be1-ace6a0cf4098", 00:37:03.841 "aliases": [ 00:37:03.841 "lvs0/lv0" 00:37:03.841 ], 00:37:03.841 "product_name": "Logical Volume", 00:37:03.841 "block_size": 512, 00:37:03.841 "num_blocks": 204800, 00:37:03.841 "uuid": "9be20635-9aab-464a-8be1-ace6a0cf4098", 00:37:03.841 "assigned_rate_limits": { 00:37:03.841 "rw_ios_per_sec": 0, 00:37:03.841 "rw_mbytes_per_sec": 0, 00:37:03.841 "r_mbytes_per_sec": 0, 00:37:03.841 "w_mbytes_per_sec": 0 00:37:03.841 }, 00:37:03.841 "claimed": false, 00:37:03.841 "zoned": false, 00:37:03.841 "supported_io_types": { 00:37:03.841 "read": true, 00:37:03.841 "write": true, 00:37:03.841 "unmap": true, 00:37:03.841 "flush": false, 00:37:03.841 "reset": true, 00:37:03.841 "nvme_admin": false, 00:37:03.841 "nvme_io": false, 00:37:03.841 "nvme_io_md": false, 00:37:03.841 "write_zeroes": true, 00:37:03.841 "zcopy": false, 00:37:03.841 "get_zone_info": false, 00:37:03.841 "zone_management": false, 00:37:03.841 "zone_append": false, 00:37:03.841 "compare": false, 00:37:03.841 "compare_and_write": false, 00:37:03.841 "abort": false, 00:37:03.841 "seek_hole": true, 00:37:03.841 "seek_data": true, 00:37:03.841 "copy": false, 00:37:03.841 "nvme_iov_md": false 00:37:03.841 }, 00:37:03.841 "driver_specific": { 00:37:03.841 "lvol": { 00:37:03.841 "lvol_store_uuid": 
"787832d8-fdc6-42a8-8924-60b75d26d32c", 00:37:03.841 "base_bdev": "Nvme0n1", 00:37:03.841 "thin_provision": true, 00:37:03.841 "num_allocated_clusters": 0, 00:37:03.841 "snapshot": false, 00:37:03.841 "clone": false, 00:37:03.841 "esnap_clone": false 00:37:03.841 } 00:37:03.841 } 00:37:03.841 } 00:37:03.841 ] 00:37:03.841 02:42:54 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:37:03.841 02:42:54 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:37:03.841 02:42:54 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:37:03.841 [2024-07-11 02:42:54.212272] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:37:03.841 COMP_lvs0/lv0 00:37:03.841 02:42:54 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:37:03.841 02:42:54 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:37:03.841 02:42:54 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:37:03.841 02:42:54 compress_isal -- common/autotest_common.sh@899 -- # local i 00:37:03.841 02:42:54 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:37:03.841 02:42:54 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:37:03.841 02:42:54 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:04.100 02:42:54 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:37:04.359 [ 00:37:04.359 { 00:37:04.359 "name": "COMP_lvs0/lv0", 00:37:04.359 "aliases": [ 00:37:04.359 "8cd961c2-6ca4-5493-8391-095cd66ec883" 00:37:04.359 ], 00:37:04.359 "product_name": "compress", 00:37:04.359 "block_size": 512, 00:37:04.359 "num_blocks": 200704, 00:37:04.359 "uuid": "8cd961c2-6ca4-5493-8391-095cd66ec883", 00:37:04.359 "assigned_rate_limits": { 00:37:04.359 "rw_ios_per_sec": 0, 00:37:04.359 "rw_mbytes_per_sec": 0, 00:37:04.359 "r_mbytes_per_sec": 0, 00:37:04.359 "w_mbytes_per_sec": 0 00:37:04.359 }, 00:37:04.359 "claimed": false, 00:37:04.359 "zoned": false, 00:37:04.359 "supported_io_types": { 00:37:04.359 "read": true, 00:37:04.359 "write": true, 00:37:04.359 "unmap": false, 00:37:04.359 "flush": false, 00:37:04.359 "reset": false, 00:37:04.359 "nvme_admin": false, 00:37:04.359 "nvme_io": false, 00:37:04.359 "nvme_io_md": false, 00:37:04.359 "write_zeroes": true, 00:37:04.359 "zcopy": false, 00:37:04.359 "get_zone_info": false, 00:37:04.359 "zone_management": false, 00:37:04.359 "zone_append": false, 00:37:04.359 "compare": false, 00:37:04.359 "compare_and_write": false, 00:37:04.359 "abort": false, 00:37:04.359 "seek_hole": false, 00:37:04.359 "seek_data": false, 00:37:04.359 "copy": false, 00:37:04.359 "nvme_iov_md": false 00:37:04.359 }, 00:37:04.359 "driver_specific": { 00:37:04.359 "compress": { 00:37:04.359 "name": "COMP_lvs0/lv0", 00:37:04.359 "base_bdev_name": "9be20635-9aab-464a-8be1-ace6a0cf4098" 00:37:04.359 } 00:37:04.359 } 00:37:04.359 } 00:37:04.359 ] 00:37:04.359 02:42:54 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:37:04.359 02:42:54 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:37:04.359 Running I/O for 30 seconds... 
00:37:36.448 00:37:36.448 Latency(us) 00:37:36.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:36.448 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 64, IO size: 16384) 00:37:36.448 Verification LBA range: start 0x0 length 0xc40 00:37:36.448 COMP_lvs0/lv0 : 30.05 432.65 6.76 0.00 0.00 147509.66 11055.64 103945.79 00:37:36.448 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 64, IO size: 16384) 00:37:36.448 Verification LBA range: start 0xc40 length 0xc40 00:37:36.448 COMP_lvs0/lv0 : 30.04 1704.26 26.63 0.00 0.00 37253.14 3989.15 80694.76 00:37:36.448 =================================================================================================================== 00:37:36.448 Total : 2136.91 33.39 0.00 0.00 59580.99 3989.15 103945.79 00:37:36.448 0 00:37:36.448 02:43:24 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:37:36.448 02:43:24 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:37:36.448 02:43:25 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:37:36.448 02:43:25 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:37:36.448 02:43:25 compress_isal -- compress/compress.sh@78 -- # killprocess 2096961 00:37:36.448 02:43:25 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 2096961 ']' 00:37:36.448 02:43:25 compress_isal -- common/autotest_common.sh@952 -- # kill -0 2096961 00:37:36.448 02:43:25 compress_isal -- common/autotest_common.sh@953 -- # uname 00:37:36.448 02:43:25 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:36.448 02:43:25 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2096961 00:37:36.448 02:43:25 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:36.448 02:43:25 compress_isal -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:36.448 02:43:25 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2096961' 00:37:36.448 killing process with pid 2096961 00:37:36.448 02:43:25 compress_isal -- common/autotest_common.sh@967 -- # kill 2096961 00:37:36.448 Received shutdown signal, test time was about 30.000000 seconds 00:37:36.448 00:37:36.448 Latency(us) 00:37:36.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:36.448 =================================================================================================================== 00:37:36.448 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:36.448 02:43:25 compress_isal -- common/autotest_common.sh@972 -- # wait 2096961 00:37:39.006 02:43:29 compress_isal -- compress/compress.sh@95 -- # export TEST_TRANSPORT=tcp 00:37:39.006 02:43:29 compress_isal -- compress/compress.sh@95 -- # TEST_TRANSPORT=tcp 00:37:39.006 02:43:29 compress_isal -- compress/compress.sh@96 -- # NET_TYPE=virt 00:37:39.006 02:43:29 compress_isal -- compress/compress.sh@96 -- # nvmftestinit 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@412 -- # remove_spdk_ns 
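The nvmftestinit that begins here rebuilds the virt network for the NVMe/TCP stage: the stale teardown attempts fail harmlessly ("Cannot find device", "No such file or directory"), after which the topology is created from scratch. In outline, condensed from the trace that follows:

    # Target lives in its own netns; the initiator stays in the root namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    # Bridge the two veth peers so 10.0.0.1 can reach 10.0.0.2.
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

(The real helper also brings each link up and repeats the pattern for a second target interface at 10.0.0.3; the pings below verify all three addresses.)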
00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:39.006 02:43:29 compress_isal -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:39.006 02:43:29 compress_isal -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@432 -- # nvmf_veth_init 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:37:39.006 Cannot find device "nvmf_tgt_br" 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@155 -- # true 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:37:39.006 Cannot find device "nvmf_tgt_br2" 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@156 -- # true 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:37:39.006 Cannot find device "nvmf_tgt_br" 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@158 -- # true 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:37:39.006 Cannot find device "nvmf_tgt_br2" 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@159 -- # true 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:39.006 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@162 -- # true 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if2 00:37:39.006 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@163 -- # true 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:39.006 02:43:29 compress_isal -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:37:39.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:39.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:37:39.289 00:37:39.289 --- 10.0.0.2 ping statistics --- 00:37:39.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:39.289 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:37:39.289 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:37:39.289 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:37:39.289 00:37:39.289 --- 10.0.0.3 ping statistics --- 00:37:39.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:39.289 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:39.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:39.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:37:39.289 00:37:39.289 --- 10.0.0.1 ping statistics --- 00:37:39.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:39.289 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@433 -- # return 0 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:39.289 02:43:29 compress_isal -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:39.548 02:43:29 compress_isal -- compress/compress.sh@97 -- # nvmfappstart -m 0x7 00:37:39.548 02:43:29 compress_isal -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:39.548 02:43:29 compress_isal -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:39.548 02:43:29 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:37:39.548 02:43:29 compress_isal -- nvmf/common.sh@481 -- # nvmfpid=2102468 00:37:39.548 02:43:29 compress_isal -- nvmf/common.sh@482 -- # waitforlisten 2102468 00:37:39.548 02:43:29 compress_isal -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:37:39.548 02:43:29 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 2102468 ']' 00:37:39.548 02:43:29 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:39.548 02:43:29 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:39.548 02:43:29 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:39.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:39.548 02:43:29 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:39.548 02:43:29 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:37:39.548 [2024-07-11 02:43:29.803591] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:37:39.548 [2024-07-11 02:43:29.803661] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:39.548 [2024-07-11 02:43:29.950208] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:39.808 [2024-07-11 02:43:30.001710] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:37:39.808 [2024-07-11 02:43:30.001769] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:39.808 [2024-07-11 02:43:30.001785] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:39.808 [2024-07-11 02:43:30.001799] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:39.808 [2024-07-11 02:43:30.001809] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:39.808 [2024-07-11 02:43:30.001884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:39.808 [2024-07-11 02:43:30.002004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:39.808 [2024-07-11 02:43:30.002006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:40.376 02:43:30 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:40.376 02:43:30 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:37:40.376 02:43:30 compress_isal -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:40.376 02:43:30 compress_isal -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:40.376 02:43:30 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:37:40.376 02:43:30 compress_isal -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:40.376 02:43:30 compress_isal -- compress/compress.sh@98 -- # trap 'nvmftestfini; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:40.376 02:43:30 compress_isal -- compress/compress.sh@101 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -u 8192 00:37:40.635 [2024-07-11 02:43:30.884084] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:40.635 02:43:30 compress_isal -- compress/compress.sh@102 -- # create_vols 00:37:40.635 02:43:30 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:37:40.635 02:43:30 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:43.925 02:43:33 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:37:43.925 02:43:33 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:37:43.925 02:43:33 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:37:43.925 02:43:33 compress_isal -- common/autotest_common.sh@899 -- # local i 00:37:43.925 02:43:33 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:37:43.925 02:43:33 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:37:43.925 02:43:33 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:43.925 02:43:34 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:37:44.184 [ 00:37:44.184 { 00:37:44.184 "name": "Nvme0n1", 00:37:44.184 "aliases": [ 00:37:44.184 "201fb165-13fd-4f08-b1ac-c87441a8a258" 00:37:44.184 ], 00:37:44.184 "product_name": "NVMe disk", 00:37:44.184 "block_size": 512, 00:37:44.184 "num_blocks": 7814037168, 00:37:44.184 "uuid": "201fb165-13fd-4f08-b1ac-c87441a8a258", 00:37:44.184 "assigned_rate_limits": { 00:37:44.184 "rw_ios_per_sec": 0, 00:37:44.184 "rw_mbytes_per_sec": 0, 00:37:44.184 
"r_mbytes_per_sec": 0, 00:37:44.184 "w_mbytes_per_sec": 0 00:37:44.184 }, 00:37:44.184 "claimed": false, 00:37:44.184 "zoned": false, 00:37:44.184 "supported_io_types": { 00:37:44.184 "read": true, 00:37:44.184 "write": true, 00:37:44.184 "unmap": true, 00:37:44.184 "flush": true, 00:37:44.184 "reset": true, 00:37:44.184 "nvme_admin": true, 00:37:44.184 "nvme_io": true, 00:37:44.184 "nvme_io_md": false, 00:37:44.184 "write_zeroes": true, 00:37:44.184 "zcopy": false, 00:37:44.184 "get_zone_info": false, 00:37:44.184 "zone_management": false, 00:37:44.184 "zone_append": false, 00:37:44.184 "compare": false, 00:37:44.184 "compare_and_write": false, 00:37:44.184 "abort": true, 00:37:44.184 "seek_hole": false, 00:37:44.184 "seek_data": false, 00:37:44.184 "copy": false, 00:37:44.184 "nvme_iov_md": false 00:37:44.184 }, 00:37:44.184 "driver_specific": { 00:37:44.184 "nvme": [ 00:37:44.184 { 00:37:44.184 "pci_address": "0000:1a:00.0", 00:37:44.184 "trid": { 00:37:44.184 "trtype": "PCIe", 00:37:44.184 "traddr": "0000:1a:00.0" 00:37:44.184 }, 00:37:44.184 "ctrlr_data": { 00:37:44.184 "cntlid": 0, 00:37:44.184 "vendor_id": "0x8086", 00:37:44.184 "model_number": "INTEL SSDPE2KX040T8", 00:37:44.184 "serial_number": "BTLJ8303085V4P0DGN", 00:37:44.184 "firmware_revision": "VDV10170", 00:37:44.184 "oacs": { 00:37:44.184 "security": 0, 00:37:44.184 "format": 1, 00:37:44.184 "firmware": 1, 00:37:44.184 "ns_manage": 1 00:37:44.184 }, 00:37:44.184 "multi_ctrlr": false, 00:37:44.184 "ana_reporting": false 00:37:44.184 }, 00:37:44.184 "vs": { 00:37:44.184 "nvme_version": "1.2" 00:37:44.184 }, 00:37:44.184 "ns_data": { 00:37:44.184 "id": 1, 00:37:44.184 "can_share": false 00:37:44.184 } 00:37:44.184 } 00:37:44.184 ], 00:37:44.184 "mp_policy": "active_passive" 00:37:44.184 } 00:37:44.184 } 00:37:44.184 ] 00:37:44.184 02:43:34 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:37:44.184 02:43:34 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:37:46.088 493052a4-1d33-45e4-95a6-c2b2887ab8d5 00:37:46.346 02:43:36 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:37:46.606 ca09aba6-223f-46d5-9f21-8bd89f6a48d5 00:37:46.606 02:43:36 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:37:46.606 02:43:36 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:37:46.606 02:43:36 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:37:46.606 02:43:36 compress_isal -- common/autotest_common.sh@899 -- # local i 00:37:46.606 02:43:36 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:37:46.606 02:43:36 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:37:46.606 02:43:36 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:46.865 02:43:37 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:37:47.123 [ 00:37:47.123 { 00:37:47.123 "name": "ca09aba6-223f-46d5-9f21-8bd89f6a48d5", 00:37:47.123 "aliases": [ 00:37:47.123 "lvs0/lv0" 00:37:47.123 ], 00:37:47.123 "product_name": "Logical Volume", 00:37:47.123 "block_size": 512, 00:37:47.123 "num_blocks": 204800, 00:37:47.123 "uuid": 
"ca09aba6-223f-46d5-9f21-8bd89f6a48d5", 00:37:47.123 "assigned_rate_limits": { 00:37:47.123 "rw_ios_per_sec": 0, 00:37:47.123 "rw_mbytes_per_sec": 0, 00:37:47.123 "r_mbytes_per_sec": 0, 00:37:47.123 "w_mbytes_per_sec": 0 00:37:47.123 }, 00:37:47.123 "claimed": false, 00:37:47.123 "zoned": false, 00:37:47.123 "supported_io_types": { 00:37:47.123 "read": true, 00:37:47.123 "write": true, 00:37:47.123 "unmap": true, 00:37:47.123 "flush": false, 00:37:47.123 "reset": true, 00:37:47.124 "nvme_admin": false, 00:37:47.124 "nvme_io": false, 00:37:47.124 "nvme_io_md": false, 00:37:47.124 "write_zeroes": true, 00:37:47.124 "zcopy": false, 00:37:47.124 "get_zone_info": false, 00:37:47.124 "zone_management": false, 00:37:47.124 "zone_append": false, 00:37:47.124 "compare": false, 00:37:47.124 "compare_and_write": false, 00:37:47.124 "abort": false, 00:37:47.124 "seek_hole": true, 00:37:47.124 "seek_data": true, 00:37:47.124 "copy": false, 00:37:47.124 "nvme_iov_md": false 00:37:47.124 }, 00:37:47.124 "driver_specific": { 00:37:47.124 "lvol": { 00:37:47.124 "lvol_store_uuid": "493052a4-1d33-45e4-95a6-c2b2887ab8d5", 00:37:47.124 "base_bdev": "Nvme0n1", 00:37:47.124 "thin_provision": true, 00:37:47.124 "num_allocated_clusters": 0, 00:37:47.124 "snapshot": false, 00:37:47.124 "clone": false, 00:37:47.124 "esnap_clone": false 00:37:47.124 } 00:37:47.124 } 00:37:47.124 } 00:37:47.124 ] 00:37:47.124 02:43:37 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:37:47.124 02:43:37 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:37:47.124 02:43:37 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:37:47.382 [2024-07-11 02:43:37.547741] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:37:47.382 COMP_lvs0/lv0 00:37:47.382 02:43:37 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:37:47.382 02:43:37 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:37:47.383 02:43:37 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:37:47.383 02:43:37 compress_isal -- common/autotest_common.sh@899 -- # local i 00:37:47.383 02:43:37 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:37:47.383 02:43:37 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:37:47.383 02:43:37 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:47.641 02:43:37 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:37:47.641 [ 00:37:47.641 { 00:37:47.641 "name": "COMP_lvs0/lv0", 00:37:47.641 "aliases": [ 00:37:47.641 "6abac868-cfe5-5bd0-a459-c07f440070e1" 00:37:47.641 ], 00:37:47.641 "product_name": "compress", 00:37:47.641 "block_size": 512, 00:37:47.641 "num_blocks": 200704, 00:37:47.641 "uuid": "6abac868-cfe5-5bd0-a459-c07f440070e1", 00:37:47.641 "assigned_rate_limits": { 00:37:47.641 "rw_ios_per_sec": 0, 00:37:47.641 "rw_mbytes_per_sec": 0, 00:37:47.641 "r_mbytes_per_sec": 0, 00:37:47.641 "w_mbytes_per_sec": 0 00:37:47.641 }, 00:37:47.641 "claimed": false, 00:37:47.641 "zoned": false, 00:37:47.641 "supported_io_types": { 00:37:47.641 "read": true, 00:37:47.641 "write": true, 00:37:47.641 "unmap": false, 00:37:47.641 "flush": false, 00:37:47.641 
"reset": false, 00:37:47.641 "nvme_admin": false, 00:37:47.641 "nvme_io": false, 00:37:47.641 "nvme_io_md": false, 00:37:47.641 "write_zeroes": true, 00:37:47.641 "zcopy": false, 00:37:47.641 "get_zone_info": false, 00:37:47.641 "zone_management": false, 00:37:47.641 "zone_append": false, 00:37:47.641 "compare": false, 00:37:47.641 "compare_and_write": false, 00:37:47.641 "abort": false, 00:37:47.641 "seek_hole": false, 00:37:47.641 "seek_data": false, 00:37:47.641 "copy": false, 00:37:47.641 "nvme_iov_md": false 00:37:47.641 }, 00:37:47.641 "driver_specific": { 00:37:47.641 "compress": { 00:37:47.641 "name": "COMP_lvs0/lv0", 00:37:47.641 "base_bdev_name": "ca09aba6-223f-46d5-9f21-8bd89f6a48d5" 00:37:47.641 } 00:37:47.641 } 00:37:47.641 } 00:37:47.641 ] 00:37:47.900 02:43:38 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:37:47.900 02:43:38 compress_isal -- compress/compress.sh@103 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:48.159 02:43:38 compress_isal -- compress/compress.sh@104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 COMP_lvs0/lv0 00:37:48.418 02:43:38 compress_isal -- compress/compress.sh@105 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:48.418 [2024-07-11 02:43:38.833700] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:48.677 02:43:38 compress_isal -- compress/compress.sh@109 -- # perf_pid=2103727 00:37:48.677 02:43:38 compress_isal -- compress/compress.sh@108 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 64 -s 512 -w randrw -t 30 -c 0x18 -M 50 00:37:48.677 02:43:38 compress_isal -- compress/compress.sh@112 -- # trap 'killprocess $perf_pid; compress_err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:48.677 02:43:38 compress_isal -- compress/compress.sh@113 -- # wait 2103727 00:37:48.935 [2024-07-11 02:43:39.135620] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:38:21.016 Initializing NVMe Controllers 00:38:21.016 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:21.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:38:21.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:38:21.016 Initialization complete. Launching workers. 
00:38:21.016 ======================================================== 00:38:21.016 Latency(us) 00:38:21.016 Device Information : IOPS MiB/s Average min max 00:38:21.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 3757.40 14.68 17035.05 2171.93 35997.17 00:38:21.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 2342.43 9.15 27328.60 2162.74 49489.33 00:38:21.016 ======================================================== 00:38:21.016 Total : 6099.83 23.83 20987.94 2162.74 49489.33 00:38:21.016 00:38:21.016 02:44:09 compress_isal -- compress/compress.sh@114 -- # destroy_vols 00:38:21.016 02:44:09 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:38:21.016 02:44:09 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:38:21.016 02:44:09 compress_isal -- compress/compress.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:38:21.016 02:44:09 compress_isal -- compress/compress.sh@117 -- # nvmftestfini 00:38:21.016 02:44:09 compress_isal -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:21.016 02:44:09 compress_isal -- nvmf/common.sh@117 -- # sync 00:38:21.016 02:44:09 compress_isal -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:21.016 02:44:09 compress_isal -- nvmf/common.sh@120 -- # set +e 00:38:21.016 02:44:09 compress_isal -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:21.016 02:44:09 compress_isal -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:21.016 rmmod nvme_tcp 00:38:21.016 rmmod nvme_fabrics 00:38:21.016 rmmod nvme_keyring 00:38:21.016 02:44:09 compress_isal -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:21.016 02:44:09 compress_isal -- nvmf/common.sh@124 -- # set -e 00:38:21.016 02:44:09 compress_isal -- nvmf/common.sh@125 -- # return 0 00:38:21.016 02:44:09 compress_isal -- nvmf/common.sh@489 -- # '[' -n 2102468 ']' 00:38:21.016 02:44:09 compress_isal -- nvmf/common.sh@490 -- # killprocess 2102468 00:38:21.016 02:44:09 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 2102468 ']' 00:38:21.016 02:44:09 compress_isal -- common/autotest_common.sh@952 -- # kill -0 2102468 00:38:21.016 02:44:09 compress_isal -- common/autotest_common.sh@953 -- # uname 00:38:21.016 02:44:09 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:21.016 02:44:09 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2102468 00:38:21.016 02:44:09 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:21.016 02:44:09 compress_isal -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:21.016 02:44:09 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2102468' 00:38:21.016 killing process with pid 2102468 00:38:21.016 02:44:09 compress_isal -- common/autotest_common.sh@967 -- # kill 2102468 00:38:21.016 02:44:09 compress_isal -- common/autotest_common.sh@972 -- # wait 2102468 00:38:23.549 02:44:13 compress_isal -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:23.549 02:44:13 compress_isal -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:23.549 02:44:13 compress_isal -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:23.549 02:44:13 compress_isal -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:23.549 02:44:13 compress_isal -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:23.549 02:44:13 
compress_isal -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:23.549 02:44:13 compress_isal -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:23.549 02:44:13 compress_isal -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:23.549 02:44:13 compress_isal -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:38:23.549 02:44:13 compress_isal -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:38:23.549 00:38:23.549 real 2m27.502s 00:38:23.549 user 6m42.112s 00:38:23.549 sys 0m20.652s 00:38:23.549 02:44:13 compress_isal -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:23.550 02:44:13 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:38:23.550 ************************************ 00:38:23.550 END TEST compress_isal 00:38:23.550 ************************************ 00:38:23.550 02:44:13 -- common/autotest_common.sh@1142 -- # return 0 00:38:23.550 02:44:13 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:38:23.550 02:44:13 -- spdk/autotest.sh@356 -- # '[' 1 -eq 1 ']' 00:38:23.550 02:44:13 -- spdk/autotest.sh@357 -- # run_test blockdev_crypto_aesni /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:38:23.550 02:44:13 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:38:23.550 02:44:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:23.550 02:44:13 -- common/autotest_common.sh@10 -- # set +x 00:38:23.550 ************************************ 00:38:23.550 START TEST blockdev_crypto_aesni 00:38:23.550 ************************************ 00:38:23.550 02:44:13 blockdev_crypto_aesni -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:38:23.808 * Looking for test storage... 
00:38:23.808 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/nbd_common.sh@6 -- # set -e 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@20 -- # : 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@674 -- # uname -s 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@682 -- # test_type=crypto_aesni 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@683 -- # crypto_device= 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@684 -- # dek= 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@685 -- # env_ctx= 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@690 -- # [[ crypto_aesni == bdev ]] 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@690 -- # [[ crypto_aesni == crypto_* ]] 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=2108143 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@49 -- # waitforlisten 2108143 00:38:23.808 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:38:23.808 02:44:14 blockdev_crypto_aesni -- common/autotest_common.sh@829 -- # '[' -z 2108143 ']' 00:38:23.808 02:44:14 blockdev_crypto_aesni -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:23.808 02:44:14 blockdev_crypto_aesni -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:23.808 02:44:14 blockdev_crypto_aesni -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:38:23.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:23.808 02:44:14 blockdev_crypto_aesni -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:23.808 02:44:14 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:38:23.808 [2024-07-11 02:44:14.117573] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:38:23.808 [2024-07-11 02:44:14.117650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2108143 ] 00:38:24.067 [2024-07-11 02:44:14.238154] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:24.067 [2024-07-11 02:44:14.285885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:24.067 02:44:14 blockdev_crypto_aesni -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:24.067 02:44:14 blockdev_crypto_aesni -- common/autotest_common.sh@862 -- # return 0 00:38:24.067 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:38:24.067 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@705 -- # setup_crypto_aesni_conf 00:38:24.067 02:44:14 blockdev_crypto_aesni -- bdev/blockdev.sh@146 -- # rpc_cmd 00:38:24.067 02:44:14 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:24.067 02:44:14 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:38:24.067 [2024-07-11 02:44:14.330400] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:38:24.067 [2024-07-11 02:44:14.338436] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:38:24.067 [2024-07-11 02:44:14.346453] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:38:24.067 [2024-07-11 02:44:14.417821] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:38:26.601 true 00:38:26.601 true 00:38:26.601 true 00:38:26.601 true 00:38:26.601 Malloc0 00:38:26.601 Malloc1 00:38:26.601 Malloc2 00:38:26.601 Malloc3 00:38:26.601 [2024-07-11 02:44:16.975687] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:38:26.601 crypto_ram 00:38:26.601 [2024-07-11 02:44:16.983703] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:38:26.601 crypto_ram2 00:38:26.601 [2024-07-11 02:44:16.991721] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:38:26.601 crypto_ram3 00:38:26.601 [2024-07-11 02:44:16.999743] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:38:26.601 crypto_ram4 00:38:26.601 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.601 02:44:17 blockdev_crypto_aesni -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:38:26.601 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.601 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:38:26.601 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.601 02:44:17 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # cat 00:38:26.601 02:44:17 blockdev_crypto_aesni 
-- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:38:26.601 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.601 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:38:26.870 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.870 02:44:17 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:38:26.870 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.870 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:38:26.870 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.870 02:44:17 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:38:26.870 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.870 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:38:26.870 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.870 02:44:17 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:38:26.870 02:44:17 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:38:26.870 02:44:17 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:38:26.870 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.870 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:38:26.870 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.870 02:44:17 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:38:26.870 02:44:17 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # jq -r .name 00:38:26.871 02:44:17 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "a6cf9ec7-0ad2-571a-8916-75dca1a91aa9"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a6cf9ec7-0ad2-571a-8916-75dca1a91aa9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "12136293-491b-51a6-b42a-daf56914b3f8"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "12136293-491b-51a6-b42a-daf56914b3f8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "54f561d3-9bcd-53b8-a6e6-1d029752f41f"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "54f561d3-9bcd-53b8-a6e6-1d029752f41f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "ac8f4201-15be-5286-b805-e913be13ec64"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "ac8f4201-15be-5286-b805-e913be13ec64",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:38:26.871 02:44:17 
blockdev_crypto_aesni -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:38:26.871 02:44:17 blockdev_crypto_aesni -- bdev/blockdev.sh@752 -- # hello_world_bdev=crypto_ram 00:38:26.871 02:44:17 blockdev_crypto_aesni -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:38:26.871 02:44:17 blockdev_crypto_aesni -- bdev/blockdev.sh@754 -- # killprocess 2108143 00:38:26.871 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@948 -- # '[' -z 2108143 ']' 00:38:26.871 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@952 -- # kill -0 2108143 00:38:26.871 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@953 -- # uname 00:38:26.871 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:26.871 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2108143 00:38:26.871 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:26.871 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:26.871 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2108143' 00:38:26.871 killing process with pid 2108143 00:38:26.871 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@967 -- # kill 2108143 00:38:26.871 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@972 -- # wait 2108143 00:38:27.440 02:44:17 blockdev_crypto_aesni -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:38:27.440 02:44:17 blockdev_crypto_aesni -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:38:27.440 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:38:27.440 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:27.440 02:44:17 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:38:27.440 ************************************ 00:38:27.440 START TEST bdev_hello_world 00:38:27.440 ************************************ 00:38:27.440 02:44:17 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:38:27.440 [2024-07-11 02:44:17.851408] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
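For readers following the xtrace: the mapfile/jq pipeline above is how blockdev.sh derives its device list. It asks the target for every registered bdev, drops the claimed ones (the Malloc bases are claimed by the crypto vbdevs stacked on them), and keeps only the names. Condensed, with rpc.py standing in for the full scripts/rpc.py path, the pattern is:

  # Keep only unclaimed bdevs; jq pretty-prints each object across many lines.
  mapfile -t bdevs < <(rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false)')
  # Re-emit that JSON stream and extract just the names.
  mapfile -t bdevs_name < <(printf '%s\n' "${bdevs[@]}" | jq -r .name)
  echo "${bdevs_name[@]}"   # here: crypto_ram crypto_ram2 crypto_ram3 crypto_ram4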
00:38:27.440 [2024-07-11 02:44:17.851466] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2108676 ] 00:38:27.699 [2024-07-11 02:44:17.988826] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:27.699 [2024-07-11 02:44:18.039836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:27.699 [2024-07-11 02:44:18.061100] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:38:27.699 [2024-07-11 02:44:18.069127] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:38:27.699 [2024-07-11 02:44:18.077153] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:38:27.958 [2024-07-11 02:44:18.187988] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:38:30.491 [2024-07-11 02:44:20.597940] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:38:30.491 [2024-07-11 02:44:20.598016] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:38:30.492 [2024-07-11 02:44:20.598031] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:38:30.492 [2024-07-11 02:44:20.605958] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:38:30.492 [2024-07-11 02:44:20.605979] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:38:30.492 [2024-07-11 02:44:20.605991] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:38:30.492 [2024-07-11 02:44:20.613979] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:38:30.492 [2024-07-11 02:44:20.613999] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:38:30.492 [2024-07-11 02:44:20.614010] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:38:30.492 [2024-07-11 02:44:20.622001] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:38:30.492 [2024-07-11 02:44:20.622018] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:38:30.492 [2024-07-11 02:44:20.622029] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:38:30.492 [2024-07-11 02:44:20.699179] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:38:30.492 [2024-07-11 02:44:20.699219] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:38:30.492 [2024-07-11 02:44:20.699238] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:38:30.492 [2024-07-11 02:44:20.700507] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:38:30.492 [2024-07-11 02:44:20.700591] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:38:30.492 [2024-07-11 02:44:20.700608] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:38:30.492 [2024-07-11 02:44:20.700649] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
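The four pairs of "Found key" / "vbdev creation deferred" notices above show the crypto vbdevs being declared before their Malloc bases exist; vbdev_crypto notes the missing base ("Currently unable to find bdev"), waits, and claims it on arrival. The JSON config being replayed encodes roughly the following RPC sequence. Treat it as a sketch only: the key hex is a placeholder, and option spellings such as -n/--key-name have shifted between SPDK releases, so check scripts/rpc.py --help on the tree in use:

  # Register a named AES-CBC DEK with the accel framework (placeholder key).
  rpc.py accel_crypto_key_create -c AES_CBC -n test_dek_aesni_cbc_1 \
      -k 00112233445566778899aabbccddeeff
  # Route encrypt/decrypt through the DPDK cryptodev module, AESNI-MB driver.
  rpc.py dpdk_cryptodev_scan_accel_module
  rpc.py dpdk_cryptodev_set_driver -d crypto_aesni_mb
  rpc.py accel_assign_opc -o encrypt -m dpdk_cryptodev
  rpc.py accel_assign_opc -o decrypt -m dpdk_cryptodev
  # 32 MiB RAM-backed base bdev (512 B blocks), then the crypto vbdev on top,
  # referencing the DEK by name -- this is what triggers "Found key" above.
  rpc.py bdev_malloc_create -b Malloc0 32 512
  rpc.py bdev_crypto_create -n test_dek_aesni_cbc_1 Malloc0 crypto_ram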
00:38:30.492 00:38:30.492 [2024-07-11 02:44:20.700668] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:38:30.751 00:38:30.751 real 0m3.262s 00:38:30.751 user 0m2.655s 00:38:30.751 sys 0m0.559s 00:38:30.751 02:44:21 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:30.751 02:44:21 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:38:30.751 ************************************ 00:38:30.751 END TEST bdev_hello_world 00:38:30.751 ************************************ 00:38:30.751 02:44:21 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:38:30.751 02:44:21 blockdev_crypto_aesni -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:38:30.751 02:44:21 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:38:30.751 02:44:21 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:30.751 02:44:21 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:38:30.751 ************************************ 00:38:30.751 START TEST bdev_bounds 00:38:30.751 ************************************ 00:38:30.751 02:44:21 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:38:30.751 02:44:21 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=2109057 00:38:30.751 02:44:21 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:38:30.751 02:44:21 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:38:30.751 02:44:21 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 2109057' 00:38:30.751 Process bdevio pid: 2109057 00:38:30.751 02:44:21 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 2109057 00:38:30.751 02:44:21 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 2109057 ']' 00:38:30.751 02:44:21 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:30.751 02:44:21 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:30.751 02:44:21 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:30.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:30.751 02:44:21 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:30.751 02:44:21 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:38:31.010 [2024-07-11 02:44:21.193829] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
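bdev_bounds drives the bdevio app rather than a shell loop: bdevio brings up its own SPDK target over the saved config, idles in wait mode, and tests.py then fires the whole suite through the RPC socket. Stripped of harness variables (the trailing '' in the command above is just an empty optional argument), the shape is roughly:

  # -w: wait for the RPC trigger instead of running tests immediately;
  # -s sets the DPDK memory size in MB (the harness passes 0 here).
  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  bdevio_pid=$!
  # (the harness waits here for the RPC socket to come up)
  test/bdev/bdevio/tests.py perform_tests
  kill "$bdevio_pid"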
00:38:31.010 [2024-07-11 02:44:21.193893] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2109057 ] 00:38:31.010 [2024-07-11 02:44:21.329428] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:31.010 [2024-07-11 02:44:21.381132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:31.010 [2024-07-11 02:44:21.381235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:31.010 [2024-07-11 02:44:21.381234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:31.010 [2024-07-11 02:44:21.402626] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:38:31.010 [2024-07-11 02:44:21.410640] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:38:31.010 [2024-07-11 02:44:21.418671] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:38:31.269 [2024-07-11 02:44:21.515648] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:38:33.802 [2024-07-11 02:44:23.892457] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:38:33.802 [2024-07-11 02:44:23.892532] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:38:33.802 [2024-07-11 02:44:23.892546] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:38:33.802 [2024-07-11 02:44:23.900472] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:38:33.802 [2024-07-11 02:44:23.900491] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:38:33.802 [2024-07-11 02:44:23.900502] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:38:33.802 [2024-07-11 02:44:23.908492] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:38:33.802 [2024-07-11 02:44:23.908510] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:38:33.802 [2024-07-11 02:44:23.908521] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:38:33.802 [2024-07-11 02:44:23.916515] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:38:33.802 [2024-07-11 02:44:23.916533] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:38:33.802 [2024-07-11 02:44:23.916544] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:38:33.802 02:44:24 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:33.802 02:44:24 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:38:33.802 02:44:24 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:38:33.802 I/O targets: 00:38:33.802 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:38:33.802 crypto_ram2: 65536 blocks of 512 bytes (32 MiB) 00:38:33.802 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:38:33.802 crypto_ram4: 8192 blocks of 4096 bytes 
(32 MiB) 00:38:33.802 00:38:33.802 00:38:33.802 CUnit - A unit testing framework for C - Version 2.1-3 00:38:33.802 http://cunit.sourceforge.net/ 00:38:33.802 00:38:33.802 00:38:33.802 Suite: bdevio tests on: crypto_ram4 00:38:33.802 Test: blockdev write read block ...passed 00:38:33.802 Test: blockdev write zeroes read block ...passed 00:38:33.802 Test: blockdev write zeroes read no split ...passed 00:38:33.802 Test: blockdev write zeroes read split ...passed 00:38:33.802 Test: blockdev write zeroes read split partial ...passed 00:38:33.802 Test: blockdev reset ...passed 00:38:33.802 Test: blockdev write read 8 blocks ...passed 00:38:33.802 Test: blockdev write read size > 128k ...passed 00:38:33.802 Test: blockdev write read invalid size ...passed 00:38:33.802 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:38:33.802 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:38:33.802 Test: blockdev write read max offset ...passed 00:38:33.802 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:38:33.802 Test: blockdev writev readv 8 blocks ...passed 00:38:33.802 Test: blockdev writev readv 30 x 1block ...passed 00:38:33.802 Test: blockdev writev readv block ...passed 00:38:33.802 Test: blockdev writev readv size > 128k ...passed 00:38:33.802 Test: blockdev writev readv size > 128k in two iovs ...passed 00:38:33.802 Test: blockdev comparev and writev ...passed 00:38:33.802 Test: blockdev nvme passthru rw ...passed 00:38:33.802 Test: blockdev nvme passthru vendor specific ...passed 00:38:33.802 Test: blockdev nvme admin passthru ...passed 00:38:33.802 Test: blockdev copy ...passed 00:38:33.802 Suite: bdevio tests on: crypto_ram3 00:38:33.802 Test: blockdev write read block ...passed 00:38:33.802 Test: blockdev write zeroes read block ...passed 00:38:33.802 Test: blockdev write zeroes read no split ...passed 00:38:34.062 Test: blockdev write zeroes read split ...passed 00:38:34.062 Test: blockdev write zeroes read split partial ...passed 00:38:34.062 Test: blockdev reset ...passed 00:38:34.062 Test: blockdev write read 8 blocks ...passed 00:38:34.062 Test: blockdev write read size > 128k ...passed 00:38:34.062 Test: blockdev write read invalid size ...passed 00:38:34.062 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:38:34.062 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:38:34.062 Test: blockdev write read max offset ...passed 00:38:34.062 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:38:34.062 Test: blockdev writev readv 8 blocks ...passed 00:38:34.062 Test: blockdev writev readv 30 x 1block ...passed 00:38:34.062 Test: blockdev writev readv block ...passed 00:38:34.062 Test: blockdev writev readv size > 128k ...passed 00:38:34.062 Test: blockdev writev readv size > 128k in two iovs ...passed 00:38:34.062 Test: blockdev comparev and writev ...passed 00:38:34.062 Test: blockdev nvme passthru rw ...passed 00:38:34.062 Test: blockdev nvme passthru vendor specific ...passed 00:38:34.062 Test: blockdev nvme admin passthru ...passed 00:38:34.062 Test: blockdev copy ...passed 00:38:34.062 Suite: bdevio tests on: crypto_ram2 00:38:34.062 Test: blockdev write read block ...passed 00:38:34.062 Test: blockdev write zeroes read block ...passed 00:38:34.062 Test: blockdev write zeroes read no split ...passed 00:38:34.062 Test: blockdev write zeroes read split ...passed 00:38:34.321 Test: blockdev write zeroes read split partial ...passed 
00:38:34.321 Test: blockdev reset ...passed 00:38:34.321 Test: blockdev write read 8 blocks ...passed 00:38:34.321 Test: blockdev write read size > 128k ...passed 00:38:34.321 Test: blockdev write read invalid size ...passed 00:38:34.321 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:38:34.321 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:38:34.321 Test: blockdev write read max offset ...passed 00:38:34.321 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:38:34.321 Test: blockdev writev readv 8 blocks ...passed 00:38:34.321 Test: blockdev writev readv 30 x 1block ...passed 00:38:34.321 Test: blockdev writev readv block ...passed 00:38:34.321 Test: blockdev writev readv size > 128k ...passed 00:38:34.321 Test: blockdev writev readv size > 128k in two iovs ...passed 00:38:34.321 Test: blockdev comparev and writev ...passed 00:38:34.321 Test: blockdev nvme passthru rw ...passed 00:38:34.321 Test: blockdev nvme passthru vendor specific ...passed 00:38:34.321 Test: blockdev nvme admin passthru ...passed 00:38:34.321 Test: blockdev copy ...passed 00:38:34.321 Suite: bdevio tests on: crypto_ram 00:38:34.321 Test: blockdev write read block ...passed 00:38:34.321 Test: blockdev write zeroes read block ...passed 00:38:34.321 Test: blockdev write zeroes read no split ...passed 00:38:34.580 Test: blockdev write zeroes read split ...passed 00:38:34.580 Test: blockdev write zeroes read split partial ...passed 00:38:34.580 Test: blockdev reset ...passed 00:38:34.580 Test: blockdev write read 8 blocks ...passed 00:38:34.580 Test: blockdev write read size > 128k ...passed 00:38:34.580 Test: blockdev write read invalid size ...passed 00:38:34.580 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:38:34.580 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:38:34.580 Test: blockdev write read max offset ...passed 00:38:34.580 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:38:34.580 Test: blockdev writev readv 8 blocks ...passed 00:38:34.580 Test: blockdev writev readv 30 x 1block ...passed 00:38:34.580 Test: blockdev writev readv block ...passed 00:38:34.580 Test: blockdev writev readv size > 128k ...passed 00:38:34.580 Test: blockdev writev readv size > 128k in two iovs ...passed 00:38:34.580 Test: blockdev comparev and writev ...passed 00:38:34.580 Test: blockdev nvme passthru rw ...passed 00:38:34.580 Test: blockdev nvme passthru vendor specific ...passed 00:38:34.580 Test: blockdev nvme admin passthru ...passed 00:38:34.580 Test: blockdev copy ...passed 00:38:34.580 00:38:34.580 Run Summary: Type Total Ran Passed Failed Inactive 00:38:34.580 suites 4 4 n/a 0 0 00:38:34.580 tests 92 92 92 0 0 00:38:34.580 asserts 520 520 520 0 n/a 00:38:34.580 00:38:34.580 Elapsed time = 1.584 seconds 00:38:34.580 0 00:38:34.580 02:44:24 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 2109057 00:38:34.580 02:44:24 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 2109057 ']' 00:38:34.580 02:44:24 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 2109057 00:38:34.580 02:44:24 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:38:34.580 02:44:24 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:34.580 02:44:24 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o 
comm= 2109057 00:38:34.580 02:44:24 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:34.580 02:44:24 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:34.580 02:44:24 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2109057' 00:38:34.580 killing process with pid 2109057 00:38:34.580 02:44:24 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@967 -- # kill 2109057 00:38:34.580 02:44:24 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@972 -- # wait 2109057 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:38:35.149 00:38:35.149 real 0m4.207s 00:38:35.149 user 0m11.235s 00:38:35.149 sys 0m0.741s 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:38:35.149 ************************************ 00:38:35.149 END TEST bdev_bounds 00:38:35.149 ************************************ 00:38:35.149 02:44:25 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:38:35.149 02:44:25 blockdev_crypto_aesni -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:38:35.149 02:44:25 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:38:35.149 02:44:25 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:35.149 02:44:25 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:38:35.149 ************************************ 00:38:35.149 START TEST bdev_nbd 00:38:35.149 ************************************ 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=4 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@312 
-- # bdev_num=4 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=2109613 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 2109613 /var/tmp/spdk-nbd.sock 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 2109613 ']' 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:38:35.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:35.149 02:44:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:38:35.149 [2024-07-11 02:44:25.493425] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
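The NBD leg swaps spdk_tgt for the lighter bdev_svc app and points it at a private RPC socket (-r /var/tmp/spdk-nbd.sock), so the nbd_* RPCs cannot collide with a target on the default socket. With the nbd kernel module loaded (the harness checked /sys/module/nbd above), the attach/detach cycle that follows reduces to:

  # Minimal bdev service on its own RPC socket, replaying the saved config.
  test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
      --json test/bdev/bdev.json &
  # Export a crypto bdev as a kernel block device ...
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0
  # ... exercise it like any disk, then detach.
  dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0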
00:38:35.149 [2024-07-11 02:44:25.493487] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:35.407 [2024-07-11 02:44:25.629872] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:35.407 [2024-07-11 02:44:25.679494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:35.407 [2024-07-11 02:44:25.700748] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:38:35.408 [2024-07-11 02:44:25.708787] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:38:35.408 [2024-07-11 02:44:25.716799] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:38:35.408 [2024-07-11 02:44:25.818774] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:38:38.014 [2024-07-11 02:44:28.204679] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:38:38.014 [2024-07-11 02:44:28.204750] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:38:38.014 [2024-07-11 02:44:28.204769] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:38:38.014 [2024-07-11 02:44:28.212700] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:38:38.014 [2024-07-11 02:44:28.212725] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:38:38.014 [2024-07-11 02:44:28.212738] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:38:38.014 [2024-07-11 02:44:28.220718] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:38:38.014 [2024-07-11 02:44:28.220737] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:38:38.014 [2024-07-11 02:44:28.220749] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:38:38.014 [2024-07-11 02:44:28.228738] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:38:38.014 [2024-07-11 02:44:28.228755] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:38:38.014 [2024-07-11 02:44:28.228770] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:38:38.014 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:38.014 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:38:38.014 02:44:28 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:38:38.014 02:44:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:38.014 02:44:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:38:38.014 02:44:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:38:38.014 02:44:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx 
/var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:38:38.014 02:44:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:38.014 02:44:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:38:38.014 02:44:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:38:38.014 02:44:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:38:38.014 02:44:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:38:38.014 02:44:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:38:38.014 02:44:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:38:38.014 02:44:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:38:38.273 02:44:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:38:38.273 02:44:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:38:38.273 02:44:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:38:38.273 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:38:38.273 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:38:38.273 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:38:38.273 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:38:38.273 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:38:38.273 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:38:38.273 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:38:38.273 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:38:38.273 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:38.273 1+0 records in 00:38:38.273 1+0 records out 00:38:38.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296164 s, 13.8 MB/s 00:38:38.273 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:38:38.273 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:38:38.273 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:38:38.273 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:38:38.273 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:38:38.273 02:44:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:38:38.273 02:44:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:38:38.273 02:44:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:38:38.533 
02:44:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:38:38.533 02:44:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:38:38.533 02:44:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:38:38.533 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:38:38.533 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:38:38.533 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:38:38.533 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:38:38.533 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:38:38.533 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:38:38.533 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:38:38.533 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:38:38.533 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:38.533 1+0 records in 00:38:38.533 1+0 records out 00:38:38.533 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331137 s, 12.4 MB/s 00:38:38.533 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:38:38.533 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:38:38.533 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:38:38.533 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:38:38.533 02:44:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:38:38.533 02:44:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:38:38.533 02:44:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:38:38.533 02:44:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:38:38.793 02:44:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:38:38.793 02:44:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:38:38.793 02:44:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:38:38.793 02:44:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:38:38.793 02:44:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:38:38.793 02:44:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:38:38.793 02:44:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:38:38.793 02:44:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:38:38.793 02:44:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:38:38.793 02:44:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 
00:38:38.793 02:44:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:38:38.793 02:44:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:38.793 1+0 records in 00:38:38.793 1+0 records out 00:38:38.793 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343673 s, 11.9 MB/s 00:38:38.793 02:44:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:38:38.793 02:44:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:38:38.793 02:44:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:38:38.793 02:44:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:38:38.793 02:44:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:38:38.793 02:44:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:38:38.793 02:44:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:38:38.793 02:44:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 00:38:39.053 02:44:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:38:39.053 02:44:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:38:39.053 02:44:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:38:39.053 02:44:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:38:39.053 02:44:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:38:39.053 02:44:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:38:39.053 02:44:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:38:39.053 02:44:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:38:39.053 02:44:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:38:39.053 02:44:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:38:39.053 02:44:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:38:39.053 02:44:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:39.053 1+0 records in 00:38:39.053 1+0 records out 00:38:39.053 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378488 s, 10.8 MB/s 00:38:39.053 02:44:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:38:39.053 02:44:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:38:39.053 02:44:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:38:39.053 02:44:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:38:39.053 02:44:29 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:38:39.053 02:44:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:38:39.053 02:44:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:38:39.053 02:44:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:38:39.311 02:44:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:38:39.311 { 00:38:39.311 "nbd_device": "/dev/nbd0", 00:38:39.311 "bdev_name": "crypto_ram" 00:38:39.311 }, 00:38:39.311 { 00:38:39.311 "nbd_device": "/dev/nbd1", 00:38:39.311 "bdev_name": "crypto_ram2" 00:38:39.311 }, 00:38:39.311 { 00:38:39.311 "nbd_device": "/dev/nbd2", 00:38:39.311 "bdev_name": "crypto_ram3" 00:38:39.311 }, 00:38:39.311 { 00:38:39.311 "nbd_device": "/dev/nbd3", 00:38:39.311 "bdev_name": "crypto_ram4" 00:38:39.311 } 00:38:39.311 ]' 00:38:39.311 02:44:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:38:39.311 02:44:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:38:39.311 02:44:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:38:39.311 { 00:38:39.311 "nbd_device": "/dev/nbd0", 00:38:39.311 "bdev_name": "crypto_ram" 00:38:39.311 }, 00:38:39.311 { 00:38:39.311 "nbd_device": "/dev/nbd1", 00:38:39.311 "bdev_name": "crypto_ram2" 00:38:39.311 }, 00:38:39.311 { 00:38:39.311 "nbd_device": "/dev/nbd2", 00:38:39.311 "bdev_name": "crypto_ram3" 00:38:39.311 }, 00:38:39.311 { 00:38:39.311 "nbd_device": "/dev/nbd3", 00:38:39.311 "bdev_name": "crypto_ram4" 00:38:39.311 } 00:38:39.311 ]' 00:38:39.570 02:44:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:38:39.570 02:44:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:39.570 02:44:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:38:39.570 02:44:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:39.570 02:44:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:38:39.570 02:44:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:39.570 02:44:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:38:39.829 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:39.829 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:39.829 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:39.829 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:39.829 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:39.829 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:39.829 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:38:39.829 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 
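The bursts of grep/dd xtrace around each attach and detach are the harness's waitfornbd and waitfornbd_exit helpers: nbd_start_disk returns as soon as SPDK accepts the export, so the script polls /proc/partitions until the kernel actually publishes (or, on detach, retires) /dev/nbdX, and on attach proves the device with one direct-I/O read. Reassembled from the trace (the sleep between retries and the scratch-file path are assumptions; only the probe commands appear above):

  waitfornbd() {
      local nbd_name=$1 i size
      # Poll until the kernel lists the device in /proc/partitions.
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # One O_DIRECT read of a single 4 KiB block proves I/O really flows.
      dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]
  }

  waitfornbd_exit() {
      local nbd_name=$1 i
      # Inverse check used on detach: wait until the device disappears.
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions || break
          sleep 0.1
      done
      return 0
  }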
00:38:39.829 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:39.829 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:38:40.088 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:38:40.088 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:38:40.088 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:38:40.088 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:40.088 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:40.088 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:38:40.088 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:38:40.088 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:38:40.088 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:40.088 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:38:40.347 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:38:40.347 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:38:40.347 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:38:40.347 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:40.347 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:40.347 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:38:40.347 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:38:40.347 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:38:40.347 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:40.347 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:38:40.347 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:38:40.347 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:38:40.347 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:38:40.347 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:40.347 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:40.347 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:38:40.606 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:38:40.606 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:38:40.606 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:38:40.606 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
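The nbd_get_count helper entered here closes the start/stop verification: after nbd_stop_disks, nbd_get_disks must come back as an empty JSON array. Its jq/grep pipeline reduces to the sketch below; grep -c prints 0 but exits non-zero when nothing matches, which is why a tolerant "true" shows up in the trace that follows:

  # Count exported NBD devices; expect 0 after teardown.
  count=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
          | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ] || { echo "stale NBD exports: $count" >&2; exit 1; }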
00:38:40.606 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:38:40.606 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:38:40.606 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:38:40.606 02:44:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:38:40.606 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:38:40.606 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:38:40.606 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:38:40.606 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:38:40.606 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:38:40.606 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:38:40.606 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:38:40.606 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:38:40.606 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:38:40.606 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:38:40.606 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:40.606 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:38:40.606 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:38:40.606 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:38:40.606 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:38:40.606 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:38:40.606 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:40.606 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:38:40.606 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:40.606 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:38:40.606 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:40.606 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:38:40.606 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:40.606 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:38:40.606 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:38:40.865 /dev/nbd0 00:38:40.865 02:44:31 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:40.865 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:40.865 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:38:40.865 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:38:40.865 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:38:40.865 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:38:41.124 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:38:41.124 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:38:41.124 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:38:41.124 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:38:41.124 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:41.124 1+0 records in 00:38:41.124 1+0 records out 00:38:41.125 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306032 s, 13.4 MB/s 00:38:41.125 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:38:41.125 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:38:41.125 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:38:41.125 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:38:41.125 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:38:41.125 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:41.125 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:38:41.125 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd1 00:38:41.384 /dev/nbd1 00:38:41.384 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:38:41.384 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:38:41.384 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:38:41.384 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:38:41.384 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:38:41.384 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:38:41.384 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:38:41.384 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:38:41.384 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:38:41.384 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:38:41.384 02:44:31 blockdev_crypto_aesni.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:41.384 1+0 records in 00:38:41.384 1+0 records out 00:38:41.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311978 s, 13.1 MB/s 00:38:41.384 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:38:41.384 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:38:41.384 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:38:41.384 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:38:41.384 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:38:41.384 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:41.384 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:38:41.384 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd10 00:38:41.644 /dev/nbd10 00:38:41.644 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:38:41.644 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:38:41.644 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:38:41.644 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:38:41.644 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:38:41.644 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:38:41.644 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:38:41.644 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:38:41.644 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:38:41.644 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:38:41.644 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:41.644 1+0 records in 00:38:41.644 1+0 records out 00:38:41.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031205 s, 13.1 MB/s 00:38:41.644 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:38:41.644 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:38:41.644 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:38:41.644 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:38:41.644 02:44:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:38:41.644 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:41.644 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # 
(( i < 4 )) 00:38:41.644 02:44:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 /dev/nbd11 00:38:41.904 /dev/nbd11 00:38:41.904 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:38:41.904 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:38:41.904 02:44:32 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:38:41.904 02:44:32 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:38:41.904 02:44:32 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:38:41.904 02:44:32 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:38:41.904 02:44:32 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:38:41.904 02:44:32 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:38:41.904 02:44:32 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:38:41.904 02:44:32 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:38:41.904 02:44:32 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:41.904 1+0 records in 00:38:41.904 1+0 records out 00:38:41.904 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373117 s, 11.0 MB/s 00:38:41.904 02:44:32 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:38:41.904 02:44:32 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:38:41.904 02:44:32 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:38:41.904 02:44:32 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:38:41.904 02:44:32 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:38:41.904 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:41.904 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:38:41.904 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:38:41.904 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:41.904 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:38:42.162 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:38:42.162 { 00:38:42.162 "nbd_device": "/dev/nbd0", 00:38:42.162 "bdev_name": "crypto_ram" 00:38:42.162 }, 00:38:42.162 { 00:38:42.162 "nbd_device": "/dev/nbd1", 00:38:42.162 "bdev_name": "crypto_ram2" 00:38:42.162 }, 00:38:42.162 { 00:38:42.162 "nbd_device": "/dev/nbd10", 00:38:42.162 "bdev_name": "crypto_ram3" 00:38:42.162 }, 00:38:42.162 { 00:38:42.162 "nbd_device": "/dev/nbd11", 00:38:42.162 "bdev_name": "crypto_ram4" 00:38:42.162 } 00:38:42.162 ]' 00:38:42.162 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 
00:38:42.162 { 00:38:42.162 "nbd_device": "/dev/nbd0", 00:38:42.162 "bdev_name": "crypto_ram" 00:38:42.162 }, 00:38:42.162 { 00:38:42.162 "nbd_device": "/dev/nbd1", 00:38:42.162 "bdev_name": "crypto_ram2" 00:38:42.162 }, 00:38:42.162 { 00:38:42.162 "nbd_device": "/dev/nbd10", 00:38:42.162 "bdev_name": "crypto_ram3" 00:38:42.162 }, 00:38:42.162 { 00:38:42.162 "nbd_device": "/dev/nbd11", 00:38:42.162 "bdev_name": "crypto_ram4" 00:38:42.162 } 00:38:42.162 ]' 00:38:42.162 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:38:42.162 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:38:42.162 /dev/nbd1 00:38:42.162 /dev/nbd10 00:38:42.162 /dev/nbd11' 00:38:42.162 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:38:42.162 /dev/nbd1 00:38:42.162 /dev/nbd10 00:38:42.162 /dev/nbd11' 00:38:42.162 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:38:42.162 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:38:42.162 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:38:42.162 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:38:42.162 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:38:42.162 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:38:42.162 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:38:42.162 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:38:42.162 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:38:42.162 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:38:42.162 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:38:42.162 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:38:42.162 256+0 records in 00:38:42.162 256+0 records out 00:38:42.162 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102507 s, 102 MB/s 00:38:42.162 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:38:42.162 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:38:42.162 256+0 records in 00:38:42.162 256+0 records out 00:38:42.162 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0579312 s, 18.1 MB/s 00:38:42.162 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:38:42.162 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:38:42.162 256+0 records in 00:38:42.162 256+0 records out 00:38:42.162 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0649837 s, 16.1 MB/s 00:38:42.162 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:38:42.162 02:44:32 blockdev_crypto_aesni.bdev_nbd 
-- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:38:42.420 256+0 records in 00:38:42.420 256+0 records out 00:38:42.420 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.059914 s, 17.5 MB/s 00:38:42.420 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:38:42.420 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:38:42.420 256+0 records in 00:38:42.420 256+0 records out 00:38:42.420 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0566738 s, 18.5 MB/s 00:38:42.420 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:38:42.420 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:38:42.420 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:38:42.420 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:38:42.420 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:38:42.420 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:38:42.420 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:38:42.420 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:38:42.420 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:38:42.420 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:38:42.420 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:38:42.420 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:38:42.420 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:38:42.420 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:38:42.420 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:38:42.420 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:38:42.420 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:38:42.420 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:42.420 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:38:42.420 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:42.420 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:38:42.420 
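The two dd/cmp passes traced above are the heart of nbd_rpc_data_verify: one pseudo-random 1 MiB file is streamed onto every exported NBD device with O_DIRECT writes, then each device is byte-compared against the source file. A minimal standalone sketch of that pattern, with the temp-file path and device list as illustrative stand-ins:

    #!/usr/bin/env bash
    # Write-then-compare data check, as in nbd_common.sh nbd_dd_data_verify.
    set -e
    tmp_file=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11)

    # Write phase: one random 1 MiB payload, copied to every device.
    # oflag=direct bypasses the page cache so the bytes really hit the bdev.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # Verify phase: cmp -b -n 1M stops at the first differing byte.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"

Any mismatch makes cmp exit non-zero, which set -e turns into a test failure, mirroring how the harness would abort at this step.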
02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:42.420 02:44:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:38:42.679 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:42.679 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:42.679 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:42.679 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:42.679 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:42.679 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:42.679 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:38:42.679 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:38:42.679 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:42.679 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:38:42.938 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:38:42.938 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:38:42.938 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:38:42.938 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:42.938 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:42.938 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:38:42.938 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:38:42.938 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:38:42.938 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:42.938 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:38:43.197 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:38:43.197 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:38:43.197 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:38:43.197 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:43.197 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:43.197 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:38:43.197 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:38:43.197 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:38:43.197 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:43.197 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:38:43.456 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:38:43.456 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:38:43.456 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:38:43.456 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:43.456 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:43.456 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:38:43.456 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:38:43.456 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:38:43.456 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:38:43.456 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:43.456 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:38:43.715 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:38:43.715 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:38:43.715 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:38:43.715 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:38:43.715 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:38:43.715 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:38:43.715 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:38:43.715 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:38:43.715 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:38:43.715 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:38:43.715 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:38:43.715 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:38:43.715 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:38:43.715 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:43.715 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:38:43.715 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:38:43.715 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:38:43.715 02:44:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:38:43.974 malloc_lvol_verify 00:38:43.974 02:44:34 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:38:44.233 d9cc2c7f-30af-44cf-a03f-406ac473621c 00:38:44.233 02:44:34 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:38:44.493 930b5997-1081-48ca-b25c-d0d436bd46a8 00:38:44.493 02:44:34 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:38:44.752 /dev/nbd0 00:38:44.752 02:44:35 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:38:44.752 mke2fs 1.46.5 (30-Dec-2021) 00:38:44.752 Discarding device blocks: 0/4096 done 00:38:44.752 Creating filesystem with 4096 1k blocks and 1024 inodes 00:38:44.752 00:38:44.752 Allocating group tables: 0/1 done 00:38:44.752 Writing inode tables: 0/1 done 00:38:44.752 Creating journal (1024 blocks): done 00:38:44.752 Writing superblocks and filesystem accounting information: 0/1 done 00:38:44.752 00:38:44.752 02:44:35 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:38:44.752 02:44:35 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:38:44.752 02:44:35 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:44.752 02:44:35 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:38:44.752 02:44:35 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:44.752 02:44:35 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:38:44.752 02:44:35 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:44.752 02:44:35 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:38:45.011 02:44:35 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:45.011 02:44:35 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:45.011 02:44:35 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:45.011 02:44:35 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:45.011 02:44:35 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:45.011 02:44:35 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:45.011 02:44:35 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:38:45.011 02:44:35 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:38:45.011 02:44:35 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:38:45.011 02:44:35 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:38:45.011 02:44:35 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 2109613 00:38:45.011 02:44:35 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 2109613 ']' 00:38:45.012 02:44:35 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 2109613 00:38:45.012 02:44:35 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:38:45.012 02:44:35 blockdev_crypto_aesni.bdev_nbd -- 
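Every attach and detach in this trace is gated by the same polling helpers: waitfornbd spins on /proc/partitions (up to 20 attempts in autotest_common.sh) until the kernel has registered the device and then proves it with a single O_DIRECT read, while waitfornbd_exit polls until the entry disappears after nbd_stop_disk. A simplified sketch of both, with the scratch-file path and sleep interval as assumptions:

    #!/usr/bin/env bash
    # Poll until the kernel exposes /dev/<name>, then do one direct read to
    # confirm I/O actually completes (mirrors waitfornbd in the trace).
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # interval is an assumption; the real helper retries too
        done
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]   # the read must have produced data
    }

    # Poll until the partition entry is gone (mirrors waitfornbd_exit).
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
    }

Usage follows the trace: waitfornbd nbd0 immediately after the nbd_start_disk RPC, waitfornbd_exit nbd0 immediately after nbd_stop_disk.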
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:45.012 02:44:35 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2109613 00:38:45.012 02:44:35 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:45.012 02:44:35 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:45.012 02:44:35 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2109613' 00:38:45.012 killing process with pid 2109613 00:38:45.012 02:44:35 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@967 -- # kill 2109613 00:38:45.012 02:44:35 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@972 -- # wait 2109613 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:38:45.581 00:38:45.581 real 0m10.280s 00:38:45.581 user 0m13.196s 00:38:45.581 sys 0m4.203s 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:38:45.581 ************************************ 00:38:45.581 END TEST bdev_nbd 00:38:45.581 ************************************ 00:38:45.581 02:44:35 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:38:45.581 02:44:35 blockdev_crypto_aesni -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:38:45.581 02:44:35 blockdev_crypto_aesni -- bdev/blockdev.sh@764 -- # '[' crypto_aesni = nvme ']' 00:38:45.581 02:44:35 blockdev_crypto_aesni -- bdev/blockdev.sh@764 -- # '[' crypto_aesni = gpt ']' 00:38:45.581 02:44:35 blockdev_crypto_aesni -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:38:45.581 02:44:35 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:38:45.581 02:44:35 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:45.581 02:44:35 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:38:45.581 ************************************ 00:38:45.581 START TEST bdev_fio 00:38:45.581 ************************************ 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:38:45.581 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio 
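Teardown above runs killprocess 2109613: before sending the signal, the helper re-checks that the pid is still alive and that its command name is the SPDK reactor rather than something the pid may have been recycled to. A condensed sketch of that guard-then-kill sequence (the real helper in autotest_common.sh also branches on uname for non-Linux hosts):

    #!/usr/bin/env bash
    # Kill an SPDK app by pid, but only if the pid still looks like our app.
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0               # already gone, nothing to do
        local name
        name=$(ps --no-headers -o comm= "$pid")  # reactor_0 for an SPDK target
        [ "$name" = "sudo" ] && return 1         # never signal a sudo wrapper here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true          # reap it if it was our child
    }

The reactor_0 seen in the log is the name SPDK's event framework gives its main thread, which is what ps reports as the comm of the target process.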
-- common/autotest_common.sh@1281 -- # local workload=verify 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram]' 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram2]' 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram2 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram3]' 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram3 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram4]' 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram4 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:38:45.581 ************************************ 00:38:45.581 START TEST bdev_fio_rw_verify 00:38:45.581 ************************************ 00:38:45.581 02:44:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:38:45.582 02:44:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:38:45.582 02:44:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:45.582 02:44:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:45.582 02:44:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:45.582 02:44:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:38:45.582 02:44:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:38:45.582 02:44:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:45.582 02:44:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:45.582 02:44:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:38:45.582 02:44:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:38:45.582 02:44:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:45.582 02:44:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:45.582 02:44:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:45.582 02:44:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:45.582 02:44:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:38:45.582 02:44:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:38:45.582 02:44:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:45.582 02:44:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:45.582 02:44:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:45.582 02:44:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:45.582 02:44:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:38:46.151 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:38:46.151 job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:38:46.151 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:38:46.151 job_crypto_ram4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:38:46.151 fio-3.35 00:38:46.151 Starting 4 threads 00:39:01.037 00:39:01.037 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=2111646: Thu Jul 11 02:44:49 2024 00:39:01.037 read: IOPS=21.5k, BW=84.2MiB/s (88.3MB/s)(842MiB/10001msec) 00:39:01.037 slat (usec): min=11, max=385, avg=65.45, stdev=39.17 00:39:01.037 clat (usec): min=10, max=1521, avg=332.31, stdev=230.18 00:39:01.037 lat (usec): min=33, max=1686, avg=397.76, stdev=253.51 00:39:01.037 clat percentiles (usec): 00:39:01.037 | 50.000th=[ 277], 99.000th=[ 1156], 99.900th=[ 1336], 99.990th=[ 1434], 00:39:01.037 | 99.999th=[ 1500] 00:39:01.037 write: IOPS=23.7k, BW=92.5MiB/s (97.0MB/s)(901MiB/9735msec); 0 zone resets 00:39:01.037 slat (usec): min=15, max=1614, avg=76.29, stdev=39.77 00:39:01.037 clat (usec): min=22, max=2373, avg=399.50, stdev=266.98 00:39:01.037 lat (usec): min=59, max=2430, avg=475.79, stdev=290.78 00:39:01.037 clat percentiles (usec): 00:39:01.037 | 50.000th=[ 347], 99.000th=[ 1434], 99.900th=[ 1663], 99.990th=[ 1778], 00:39:01.037 | 99.999th=[ 2180] 00:39:01.037 bw ( KiB/s): min=80600, max=127752, per=97.75%, avg=92598.74, stdev=2963.42, samples=76 00:39:01.037 iops : min=20150, max=31938, avg=23149.68, stdev=740.86, samples=76 00:39:01.037 lat (usec) : 20=0.01%, 50=0.59%, 100=7.81%, 250=29.47%, 500=39.52% 00:39:01.037 lat (usec) : 750=15.09%, 1000=4.36% 00:39:01.037 lat (msec) : 2=3.17%, 4=0.01% 00:39:01.037 cpu : usr=99.60%, sys=0.01%, ctx=90, majf=0, minf=370 00:39:01.037 IO depths : 1=10.1%, 2=25.4%, 4=51.2%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:01.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.037 complete : 0=0.0%, 4=88.7%, 8=11.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.037 issued rwts: 
total=215502,230546,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:01.037 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:01.037 00:39:01.037 Run status group 0 (all jobs): 00:39:01.037 READ: bw=84.2MiB/s (88.3MB/s), 84.2MiB/s-84.2MiB/s (88.3MB/s-88.3MB/s), io=842MiB (883MB), run=10001-10001msec 00:39:01.037 WRITE: bw=92.5MiB/s (97.0MB/s), 92.5MiB/s-92.5MiB/s (97.0MB/s-97.0MB/s), io=901MiB (944MB), run=9735-9735msec 00:39:01.037 00:39:01.037 real 0m13.782s 00:39:01.037 user 0m45.991s 00:39:01.037 sys 0m0.671s 00:39:01.037 02:44:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:01.037 02:44:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:39:01.037 ************************************ 00:39:01.037 END TEST bdev_fio_rw_verify 00:39:01.037 ************************************ 00:39:01.037 02:44:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:39:01.037 02:44:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:39:01.037 02:44:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:39:01.037 02:44:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:39:01.037 02:44:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:39:01.037 02:44:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:39:01.037 02:44:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:39:01.037 02:44:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:39:01.037 02:44:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:39:01.037 02:44:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:39:01.037 02:44:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:39:01.037 02:44:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:39:01.037 02:44:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:39:01.037 02:44:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:39:01.037 02:44:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:39:01.037 02:44:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:39:01.037 02:44:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:39:01.037 02:44:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "a6cf9ec7-0ad2-571a-8916-75dca1a91aa9"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a6cf9ec7-0ad2-571a-8916-75dca1a91aa9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "12136293-491b-51a6-b42a-daf56914b3f8"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "12136293-491b-51a6-b42a-daf56914b3f8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "54f561d3-9bcd-53b8-a6e6-1d029752f41f"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "54f561d3-9bcd-53b8-a6e6-1d029752f41f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' 
"name": "crypto_ram4",' ' "aliases": [' ' "ac8f4201-15be-5286-b805-e913be13ec64"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "ac8f4201-15be-5286-b805-e913be13ec64",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n crypto_ram 00:39:01.038 crypto_ram2 00:39:01.038 crypto_ram3 00:39:01.038 crypto_ram4 ]] 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "a6cf9ec7-0ad2-571a-8916-75dca1a91aa9"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a6cf9ec7-0ad2-571a-8916-75dca1a91aa9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "12136293-491b-51a6-b42a-daf56914b3f8"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "12136293-491b-51a6-b42a-daf56914b3f8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "54f561d3-9bcd-53b8-a6e6-1d029752f41f"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "54f561d3-9bcd-53b8-a6e6-1d029752f41f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "ac8f4201-15be-5286-b805-e913be13ec64"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "ac8f4201-15be-5286-b805-e913be13ec64",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram]' 00:39:01.038 02:44:49 
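The printf/jq pair above decides which bdevs get trim jobs: the full bdev JSON dump is filtered for devices whose supported_io_types.unmap is true, and only those names flow into the fio config. A standalone sketch of that selection, assuming the JSON objects have been saved to an illustrative bdevs.json:

    #!/usr/bin/env bash
    # Select trim-capable bdevs from a stream of bdev JSON objects and emit
    # one fio job section per match (mirrors blockdev.sh steps 356-358 above).
    fio_config=/tmp/bdev.fio
    jq -r 'select(.supported_io_types.unmap == true) | .name' bdevs.json |
    while read -r b; do
        printf '[job_%s]\nfilename=%s\n' "$b" "$b" >> "$fio_config"
    done

All four crypto_ram* bdevs advertise "unmap": true in the dump above, so all four get job sections, matching the four trimwrite threads that start below.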
blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram2]' 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram2 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram3]' 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram3 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram4]' 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram4 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:39:01.038 ************************************ 00:39:01.038 START TEST bdev_fio_trim 00:39:01.038 ************************************ 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:01.038 02:44:49 
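As with the rw-verify pass earlier, fio is not launched directly against the plugin: fio_plugin first ldd's the spdk_bdev shared object, grepping for libasan and libclang_rt.asan so that any sanitizer runtime gets preloaded ahead of the plugin (both lookups come back empty in this build), then starts fio with the external ioengine. Reduced to its essentials, with workspace paths as placeholders:

    #!/usr/bin/env bash
    # Launch fio with the SPDK bdev ioengine, preloading a sanitizer runtime
    # if the plugin was linked against one (condenses fio_plugin in the trace).
    plugin=/path/to/spdk/build/fio/spdk_bdev
    asan_lib=''
    for lib in libasan libclang_rt.asan; do
        found=$(ldd "$plugin" | grep "$lib" | awk '{print $3}')
        [ -n "$found" ] && asan_lib="$found"
    done
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        --spdk_json_conf=/path/to/bdev.json /path/to/bdev.fio

The leading space in LD_PRELOAD=' /var/.../build/fio/spdk_bdev' seen in the log is exactly this: an empty asan_lib concatenated in front of the plugin path.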
blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:01.038 02:44:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:39:01.038 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:39:01.039 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:39:01.039 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:39:01.039 job_crypto_ram4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:39:01.039 fio-3.35 00:39:01.039 Starting 4 threads 00:39:13.246 00:39:13.246 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=2113502: Thu Jul 11 02:45:03 2024 00:39:13.246 write: IOPS=27.8k, BW=109MiB/s (114MB/s)(1087MiB/10001msec); 0 zone resets 00:39:13.246 slat (usec): min=18, max=1748, avg=78.64, stdev=47.04 00:39:13.246 clat (usec): min=83, max=2534, avg=663.55, 
stdev=199.33 00:39:13.246 lat (usec): min=114, max=2694, avg=742.19, stdev=211.11 00:39:13.246 clat percentiles (usec): 00:39:13.246 | 50.000th=[ 652], 99.000th=[ 1237], 99.900th=[ 1369], 99.990th=[ 1516], 00:39:13.246 | 99.999th=[ 1926] 00:39:13.246 bw ( KiB/s): min=94168, max=159786, per=100.00%, avg=111639.68, stdev=3925.37, samples=76 00:39:13.246 iops : min=23542, max=39946, avg=27909.89, stdev=981.32, samples=76 00:39:13.246 trim: IOPS=27.8k, BW=109MiB/s (114MB/s)(1087MiB/10001msec); 0 zone resets 00:39:13.246 slat (usec): min=6, max=413, avg=21.64, stdev=10.21 00:39:13.246 clat (usec): min=24, max=2265, avg=209.56, stdev=211.96 00:39:13.246 lat (usec): min=34, max=2304, avg=231.20, stdev=218.35 00:39:13.246 clat percentiles (usec): 00:39:13.246 | 50.000th=[ 103], 99.000th=[ 832], 99.900th=[ 898], 99.990th=[ 938], 00:39:13.246 | 99.999th=[ 1893] 00:39:13.246 bw ( KiB/s): min=94176, max=159907, per=100.00%, avg=111646.47, stdev=3929.90, samples=76 00:39:13.246 iops : min=23544, max=39976, avg=27911.58, stdev=982.44, samples=76 00:39:13.246 lat (usec) : 50=0.44%, 100=23.37%, 250=14.17%, 500=13.66%, 750=32.72% 00:39:13.246 lat (usec) : 1000=12.33% 00:39:13.246 lat (msec) : 2=3.31%, 4=0.01% 00:39:13.246 cpu : usr=99.56%, sys=0.00%, ctx=106, majf=0, minf=153 00:39:13.246 IO depths : 1=0.1%, 2=9.4%, 4=52.4%, 8=38.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:13.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.246 complete : 0=0.0%, 4=96.6%, 8=3.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.246 issued rwts: total=0,278346,278347,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.246 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:13.246 00:39:13.246 Run status group 0 (all jobs): 00:39:13.246 WRITE: bw=109MiB/s (114MB/s), 109MiB/s-109MiB/s (114MB/s-114MB/s), io=1087MiB (1140MB), run=10001-10001msec 00:39:13.246 TRIM: bw=109MiB/s (114MB/s), 109MiB/s-109MiB/s (114MB/s-114MB/s), io=1087MiB (1140MB), run=10001-10001msec 00:39:13.246 00:39:13.246 real 0m13.753s 00:39:13.246 user 0m46.017s 00:39:13.246 sys 0m0.685s 00:39:13.246 02:45:03 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:13.246 02:45:03 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:39:13.246 ************************************ 00:39:13.246 END TEST bdev_fio_trim 00:39:13.246 ************************************ 00:39:13.509 02:45:03 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:39:13.509 02:45:03 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:39:13.509 02:45:03 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:39:13.509 02:45:03 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:39:13.509 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:39:13.509 02:45:03 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:39:13.509 00:39:13.509 real 0m27.902s 00:39:13.509 user 1m32.191s 00:39:13.509 sys 0m1.564s 00:39:13.509 02:45:03 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:13.509 02:45:03 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:39:13.509 ************************************ 00:39:13.509 END TEST bdev_fio 00:39:13.509 ************************************ 00:39:13.509 02:45:03 blockdev_crypto_aesni -- 
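As a quick consistency check on the trim results above, the reported bandwidth and IOPS agree at the 4k block size this run used:

    # hedged sanity check, not part of the test: avg ~27910 IOPS * 4 KiB per I/O
    # = 111640 KiB/s, matching the reported avg bw of 111639.68 KiB/s (~109 MiB/s)
    echo "$(( 27910 * 4 )) KiB/s"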
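The ldd/grep/awk probe that ran before fio above exists because the fio plugin may be built against a sanitizer; if it is, the sanitizer runtime generally has to come first in LD_PRELOAD or fio may refuse to load the engine. A condensed sketch of that pattern, with the plugin path assumed and the two per-sanitizer loops collapsed into a single grep:

    # condensed sketch of the sanitizer probe traced above; $plugin is an
    # assumed path, and both sanitizer names are checked in one grep here
    plugin=./build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep -E 'libasan|libclang_rt\.asan' | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf=bdev.json bdev.fio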
common/autotest_common.sh@1142 -- # return 0 00:39:13.509 02:45:03 blockdev_crypto_aesni -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:39:13.509 02:45:03 blockdev_crypto_aesni -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:39:13.509 02:45:03 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:39:13.509 02:45:03 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:13.509 02:45:03 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:13.509 ************************************ 00:39:13.509 START TEST bdev_verify 00:39:13.509 ************************************ 00:39:13.509 02:45:03 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:39:13.509 [2024-07-11 02:45:03.887171] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:39:13.509 [2024-07-11 02:45:03.887302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2115044 ] 00:39:13.768 [2024-07-11 02:45:04.100847] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:13.768 [2024-07-11 02:45:04.157789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:13.768 [2024-07-11 02:45:04.157794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:13.768 [2024-07-11 02:45:04.179163] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:39:13.768 [2024-07-11 02:45:04.187193] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:39:14.027 [2024-07-11 02:45:04.195210] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:39:14.027 [2024-07-11 02:45:04.298627] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:39:16.563 [2024-07-11 02:45:06.680837] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:39:16.563 [2024-07-11 02:45:06.680922] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:39:16.563 [2024-07-11 02:45:06.680937] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:16.563 [2024-07-11 02:45:06.688855] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:39:16.563 [2024-07-11 02:45:06.688875] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:39:16.563 [2024-07-11 02:45:06.688886] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:16.563 [2024-07-11 02:45:06.696880] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:39:16.563 [2024-07-11 02:45:06.696898] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:39:16.563 [2024-07-11 
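The "Found key" and "creation deferred pending base bdev arrival" notices in this stretch come from bdevperf replaying the JSON config: the crypto vbdev entry is processed before its Malloc base exists, so creation is parked until the base shows up. A minimal sketch of that config shape, reconstructed only from the names this log prints and deliberately listed crypto-first to show why the deferral path fires (field spellings are assumptions based on current SPDK RPC parameters, not copied from the test's real bdev.json):

    # hypothetical bdev.json shape matching the notices above; param names
    # are assumptions, not the actual file used by this run
    cat > bdev.json <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [
          {"method": "bdev_crypto_create", "params": {
            "base_bdev_name": "Malloc0", "name": "crypto_ram",
            "key_name": "test_dek_aesni_cbc_1"}},
          {"method": "bdev_malloc_create", "params": {
            "name": "Malloc0", "num_blocks": 65536, "block_size": 512}}
        ]
      }]
    }
    EOF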
02:45:06.696909] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:16.563 [2024-07-11 02:45:06.704903] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:39:16.563 [2024-07-11 02:45:06.704920] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:39:16.563 [2024-07-11 02:45:06.704931] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:16.563 Running I/O for 5 seconds... 00:39:21.839 00:39:21.839 Latency(us) 00:39:21.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:21.839 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:21.839 Verification LBA range: start 0x0 length 0x1000 00:39:21.839 crypto_ram : 5.07 937.85 3.66 0.00 0.00 135400.37 869.06 92092.33 00:39:21.839 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:39:21.839 Verification LBA range: start 0x1000 length 0x1000 00:39:21.839 crypto_ram : 5.05 759.67 2.97 0.00 0.00 167510.34 16640.45 108504.82 00:39:21.839 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:21.839 Verification LBA range: start 0x0 length 0x1000 00:39:21.839 crypto_ram2 : 5.07 941.35 3.68 0.00 0.00 134701.95 2393.49 91636.42 00:39:21.839 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:39:21.839 Verification LBA range: start 0x1000 length 0x1000 00:39:21.839 crypto_ram2 : 5.07 767.08 3.00 0.00 0.00 165537.24 1146.88 108504.82 00:39:21.839 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:21.839 Verification LBA range: start 0x0 length 0x1000 00:39:21.839 crypto_ram3 : 5.06 3010.26 11.76 0.00 0.00 42178.51 1602.78 34192.70 00:39:21.839 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:39:21.839 Verification LBA range: start 0x1000 length 0x1000 00:39:21.839 crypto_ram3 : 5.06 2429.25 9.49 0.00 0.00 52148.66 4359.57 41715.09 00:39:21.839 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:21.839 Verification LBA range: start 0x0 length 0x1000 00:39:21.839 crypto_ram4 : 5.06 3007.91 11.75 0.00 0.00 42091.28 5328.36 38067.87 00:39:21.839 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:39:21.839 Verification LBA range: start 0x1000 length 0x1000 00:39:21.839 crypto_ram4 : 5.06 2428.64 9.49 0.00 0.00 52023.00 4986.43 44906.41 00:39:21.839 =================================================================================================================== 00:39:21.839 Total : 14282.02 55.79 0.00 0.00 71052.15 869.06 108504.82 00:39:21.839 00:39:21.839 real 0m8.449s 00:39:21.839 user 0m15.723s 00:39:21.839 sys 0m0.618s 00:39:21.839 02:45:12 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:21.839 02:45:12 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:39:21.839 ************************************ 00:39:21.839 END TEST bdev_verify 00:39:21.839 ************************************ 00:39:22.099 02:45:12 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:39:22.099 02:45:12 blockdev_crypto_aesni -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 
-w verify -t 5 -C -m 0x3 '' 00:39:22.100 02:45:12 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:39:22.100 02:45:12 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:22.100 02:45:12 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:22.100 ************************************ 00:39:22.100 START TEST bdev_verify_big_io 00:39:22.100 ************************************ 00:39:22.100 02:45:12 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:39:22.100 [2024-07-11 02:45:12.376473] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:39:22.100 [2024-07-11 02:45:12.376533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2116501 ] 00:39:22.100 [2024-07-11 02:45:12.511144] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:22.359 [2024-07-11 02:45:12.560313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:22.359 [2024-07-11 02:45:12.560318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:22.359 [2024-07-11 02:45:12.581767] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:39:22.359 [2024-07-11 02:45:12.589798] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:39:22.359 [2024-07-11 02:45:12.597817] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:39:22.359 [2024-07-11 02:45:12.705693] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:39:24.894 [2024-07-11 02:45:15.077613] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:39:24.894 [2024-07-11 02:45:15.077699] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:39:24.894 [2024-07-11 02:45:15.077713] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:24.894 [2024-07-11 02:45:15.085628] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:39:24.894 [2024-07-11 02:45:15.085647] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:39:24.894 [2024-07-11 02:45:15.085659] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:24.894 [2024-07-11 02:45:15.093649] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:39:24.894 [2024-07-11 02:45:15.093666] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:39:24.894 [2024-07-11 02:45:15.093678] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:24.894 [2024-07-11 02:45:15.101673] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:39:24.894 [2024-07-11 02:45:15.101691] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:39:24.894 [2024-07-11 
02:45:15.101702] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:24.894 Running I/O for 5 seconds... 00:39:25.886 [2024-07-11 02:45:16.083246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.889 [2024-07-11 02:45:16.218861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! (the same *ERROR* line repeats continuously between these two timestamps; the intervening duplicates are elided)
00:39:25.889 [2024-07-11 02:45:16.220645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.889 [2024-07-11 02:45:16.222450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.889 [2024-07-11 02:45:16.224386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.889 [2024-07-11 02:45:16.226410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.889 [2024-07-11 02:45:16.230092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.889 [2024-07-11 02:45:16.232130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.889 [2024-07-11 02:45:16.234158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.889 [2024-07-11 02:45:16.235150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.889 [2024-07-11 02:45:16.235470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.889 [2024-07-11 02:45:16.235491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.889 [2024-07-11 02:45:16.237622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.889 [2024-07-11 02:45:16.239644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.889 [2024-07-11 02:45:16.240443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.889 [2024-07-11 02:45:16.240939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.889 [2024-07-11 02:45:16.244117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.889 [2024-07-11 02:45:16.245790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.889 [2024-07-11 02:45:16.247579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.889 [2024-07-11 02:45:16.249576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.889 [2024-07-11 02:45:16.249907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.889 [2024-07-11 02:45:16.249929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.889 [2024-07-11 02:45:16.250525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.889 [2024-07-11 02:45:16.251563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.889 [2024-07-11 02:45:16.253341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.255371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:25.890 [2024-07-11 02:45:16.258859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.260888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.261444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.261937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.262254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.262275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.264386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.266405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.267622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.269532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.271346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.272621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.274418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.276439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.276768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.276788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.278312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.280091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.282110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.283980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.287669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.289696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.290946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.292922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:25.890 [2024-07-11 02:45:16.293242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.293262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.295407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.296490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.296988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.298577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.301258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.303040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:25.890 [2024-07-11 02:45:16.305043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.306959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.307484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.307506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.308140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.309930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.311936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.313960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.317458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.318484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.318985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.320640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.321006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.321027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.323119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.324791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:26.152 [2024-07-11 02:45:16.326590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.328505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.330633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.332435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.334464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.336501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.336945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.336966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.338868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.340899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.342929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.343424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.346974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.348545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.350438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.352421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.352741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.352767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.354361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.354880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.356054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.357821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.361178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.362970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:26.152 [2024-07-11 02:45:16.363463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.363967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.364507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.364528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.365146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.365637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.366131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.366625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.368650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.369165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.369661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.370159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.152 [2024-07-11 02:45:16.370688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.370710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.371321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.371836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.372335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.372832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.375291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.375799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.376290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.376796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.377291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.377312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:26.153 [2024-07-11 02:45:16.377918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.378419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.378922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.379411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.381626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.382142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.382635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.383131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.383649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.383670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.384272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.384772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.385263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.385766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.387786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.388287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.388789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.389279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.389772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.389793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.390388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.390895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.391384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.391886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:26.153 [2024-07-11 02:45:16.394411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.394921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.395415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.395915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.396406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.396427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.397032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.397535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.398044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.398534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.400774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.401279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.401775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.402293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.402841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.402862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.403459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.403956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.404448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.404947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.406983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.407486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.407986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.408477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:26.153 [2024-07-11 02:45:16.408984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.409005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.409619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.410127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.410615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.411111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.413502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.414011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.414502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.415001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.415484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.415505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.416112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.416626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.417121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.417610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.420660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.422592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.423099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.423664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.423989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.424010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.426147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.428177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:26.153 [2024-07-11 02:45:16.429274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.431289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.433310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.435324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.437343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.439372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.439872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.439893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.442025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.444055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.446094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.446589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.450063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.451581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.153 [2024-07-11 02:45:16.453534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.455576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.455900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.455921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.457222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.457711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.459265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.461039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.464228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.466265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:26.154 [2024-07-11 02:45:16.468251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.468741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.469294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.469314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.471199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.473237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.475254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.476259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.478441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.478947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.480898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.482884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.483203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.483223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.484789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.486775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.488785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.490810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.494606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.496641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.498672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.499680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.500042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.500063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:26.154 [2024-07-11 02:45:16.502186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.504213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.504712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.505207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.507771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.509633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.511642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.513668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.514136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.514157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.514756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.516388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.518161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.520146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.523604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.525572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.526069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.526868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.527249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.527270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.529419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.531449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.532451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.534227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:26.154 [2024-07-11 02:45:16.536192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.538189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.540177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.542198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.542571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.542592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.544702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.546718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.546774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.548804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.548861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.549326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.552261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.554283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.556284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.557805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.558200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.558220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.558389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.558464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.558520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.558571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.558893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.560353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:26.154 [2024-07-11 02:45:16.560416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.560468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.560519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.560841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.560862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.561034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.561088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.561139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.154 [2024-07-11 02:45:16.561195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.155 [2024-07-11 02:45:16.561508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.155 [2024-07-11 02:45:16.562726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.155 [2024-07-11 02:45:16.562794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.155 [2024-07-11 02:45:16.562845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.155 [2024-07-11 02:45:16.562902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.155 [2024-07-11 02:45:16.563218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.155 [2024-07-11 02:45:16.563239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.155 [2024-07-11 02:45:16.563409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.155 [2024-07-11 02:45:16.563464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.155 [2024-07-11 02:45:16.563515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.155 [2024-07-11 02:45:16.563566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.155 [2024-07-11 02:45:16.563889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.155 [2024-07-11 02:45:16.565181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.155 [2024-07-11 02:45:16.565243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.155 [2024-07-11 02:45:16.565300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:26.155 [2024-07-11 02:45:16.565351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... same *ERROR* line repeated several hundred times between 02:45:16.565 and 02:45:16.797 (console time 00:39:26.155-00:39:26.419); intermediate occurrences elided ...]
00:39:26.419 [2024-07-11 02:45:16.796957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:39:26.419 [2024-07-11 02:45:16.798742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.419 [2024-07-11 02:45:16.799913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.419 [2024-07-11 02:45:16.801791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.419 [2024-07-11 02:45:16.802111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.419 [2024-07-11 02:45:16.802133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.419 [2024-07-11 02:45:16.804278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.419 [2024-07-11 02:45:16.804788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.419 [2024-07-11 02:45:16.805282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.419 [2024-07-11 02:45:16.807266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.419 [2024-07-11 02:45:16.807582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.419 [2024-07-11 02:45:16.810533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.419 [2024-07-11 02:45:16.812568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.419 [2024-07-11 02:45:16.814593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.419 [2024-07-11 02:45:16.815648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.419 [2024-07-11 02:45:16.816124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.419 [2024-07-11 02:45:16.816145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.419 [2024-07-11 02:45:16.817604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.419 [2024-07-11 02:45:16.819383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.419 [2024-07-11 02:45:16.821403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.420 [2024-07-11 02:45:16.823325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.420 [2024-07-11 02:45:16.823718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.420 [2024-07-11 02:45:16.826757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.420 [2024-07-11 02:45:16.827266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.420 [2024-07-11 02:45:16.827840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:26.420 [2024-07-11 02:45:16.829607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.420 [2024-07-11 02:45:16.829928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.420 [2024-07-11 02:45:16.829949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.420 [2024-07-11 02:45:16.832092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.420 [2024-07-11 02:45:16.833126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.420 [2024-07-11 02:45:16.834891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.680 [2024-07-11 02:45:16.836909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.680 [2024-07-11 02:45:16.837226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.680 [2024-07-11 02:45:16.839983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.680 [2024-07-11 02:45:16.841742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.680 [2024-07-11 02:45:16.843731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.680 [2024-07-11 02:45:16.845493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.680 [2024-07-11 02:45:16.845838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.680 [2024-07-11 02:45:16.845859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.680 [2024-07-11 02:45:16.847728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.680 [2024-07-11 02:45:16.849720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.680 [2024-07-11 02:45:16.851404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.680 [2024-07-11 02:45:16.851900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.680 [2024-07-11 02:45:16.852385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.680 [2024-07-11 02:45:16.855381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.680 [2024-07-11 02:45:16.856382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.680 [2024-07-11 02:45:16.858157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.680 [2024-07-11 02:45:16.860176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.680 [2024-07-11 02:45:16.860493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:26.680 [2024-07-11 02:45:16.860513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.680 [2024-07-11 02:45:16.861316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.680 [2024-07-11 02:45:16.861820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.680 [2024-07-11 02:45:16.863808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.865792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.866109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.868870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.870935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.872508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.873005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.873480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.873501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.875380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.877411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.879438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.880547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.880930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.882428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.882940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.884939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.886969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.887287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.887307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.888582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:26.681 [2024-07-11 02:45:16.890377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.892363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.894391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.894861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.898181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.900217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.902248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.903306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.903664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.903684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.905828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.907862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.908363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.908862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.909178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.911437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.913373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.915398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.917424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.917857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.917879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.918475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.919925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.921697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:26.681 [2024-07-11 02:45:16.923702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.924027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.927187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.929223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.929723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.930225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.930543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.930563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.932699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.934730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.935704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.937462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.937785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.939302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.940792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.942568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.944593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.944915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.944937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.946606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.948404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.950403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.952041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.952517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:26.681 [2024-07-11 02:45:16.955752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.957794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.958872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.960669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.960993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.961014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.963156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.964024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.964515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.966427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.966785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.969436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.971302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.973382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.974950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.975484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.975507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.976593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.978376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.980389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.982422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.982876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.681 [2024-07-11 02:45:16.985893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:16.986467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:26.682 [2024-07-11 02:45:16.986964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:16.988917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:16.989235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:16.989256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:16.991394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:16.992503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:16.994348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:16.996370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:16.996692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:16.999180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.000973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.003002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.004988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.005443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.005463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.007346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.007410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.009427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.009487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.009810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.013093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.015124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.017151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.018163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:26.682 [2024-07-11 02:45:17.018501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.018522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.018693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.018748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.018807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.018858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.019174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.020342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.020404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.020458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.020509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.020872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.020892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.021063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.021117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.021181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.021235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.021549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.022577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.022644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.022695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.022747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.023104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.023124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:26.682 [2024-07-11 02:45:17.023296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.023351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.023403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.023459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.023783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.024979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.025042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.025094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.025145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.025461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.025482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.025648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.025709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.025770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.025826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.026140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.027246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.027308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.027359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.027410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.027782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.027804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.027975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.028030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:26.682 [2024-07-11 02:45:17.028088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.028142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.028455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.029835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.029896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.029949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.030000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.030314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.030334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.030505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.030566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.030617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.030668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.030989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.032144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.032206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.032257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.032307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.032699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.032720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.032897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.682 [2024-07-11 02:45:17.032978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.033033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.033085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:26.683 [2024-07-11 02:45:17.033397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.034873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.034935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.034987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.035043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.035359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.035379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.035551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.035605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.035660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.035711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.036033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.037243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.037304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.037356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.037407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.037811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.037832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.038001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.038056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.038107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.038158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.038471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.039785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:26.683 [2024-07-11 02:45:17.039847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.039909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.039967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.040279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.040300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.040473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.040527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.040579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.040629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.040949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.042130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.042198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.042253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.042311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.042626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.042647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.042829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.042885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.042937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.042988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.043301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.044593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.044656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.044712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:26.683 [2024-07-11 02:45:17.044770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.045153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.045173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.045338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.045393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.045448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.045499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.045819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.046940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.047010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.047062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.047118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.047430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.047451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.047620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.047673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.047725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.047788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.048103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.049419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.049485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.049537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.049588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:26.683 [2024-07-11 02:45:17.049955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:26.683 [2024-07-11 02:45:17.049976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... identical *ERROR* line repeated several hundred times between 02:45:17.049976 and 02:45:17.501137; intermediate occurrences elided, only the first and last are shown ...]
00:39:27.213 [2024-07-11 02:45:17.501137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:39:27.213 [2024-07-11 02:45:17.501635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.502020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.502041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.502642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.503154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.503651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.504151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.504692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.506683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.507205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.507701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.508209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.508678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.508700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.509308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.509816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.510311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.510811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.511300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.513058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.513564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.514069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.514563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.514983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:27.213 [2024-07-11 02:45:17.515006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.515601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.516112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.516613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.517118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.517612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.519234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.519739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.520245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.521737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.522214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.522235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.524214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.524711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.525212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.527123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.527642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.529184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.530965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.531464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.532326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.532658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.532679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.533287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:27.213 [2024-07-11 02:45:17.533350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.533851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.533917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.534375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.536316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.536827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.537322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.537833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.538345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.538366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.538538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.538594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.538648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.538699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.539207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.540453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.540516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.540569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.540620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.541132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.541154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.541325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.541387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.541441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:27.213 [2024-07-11 02:45:17.541499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.542050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.543484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.543563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.213 [2024-07-11 02:45:17.543631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.543688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.544176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.544198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.544370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.544426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.544480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.544532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.544976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.546324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.546389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.546440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.546493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.546972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.546993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.547166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.547222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.547276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.547328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.547844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:27.214 [2024-07-11 02:45:17.549312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.549387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.549452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.549527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.550021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.550048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.550222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.550278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.550330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.550386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.550943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.552436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.552499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.552553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.552605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.553045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.553068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.553241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.553309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.553364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.553433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.553891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.555307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.555370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:27.214 [2024-07-11 02:45:17.555423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.555476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.555912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.555934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.556107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.556163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.556215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.556267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.556707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.558308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.558371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.558429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.558482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.558991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.559013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.559185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.559255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.559316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.559383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.559819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.561298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.561361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.561414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.561466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:27.214 [2024-07-11 02:45:17.561986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.562009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.562178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.562235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.562289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.562343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.562777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.564316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.564381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.564435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.564488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.564986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.565008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.565178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.565235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.565288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.565340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.565777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.567142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.567205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.567258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.567310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.567769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.567790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:27.214 [2024-07-11 02:45:17.567961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.568017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.568070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.568122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.568610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.570021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.570083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.214 [2024-07-11 02:45:17.570137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.570190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.570686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.570708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.570893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.570951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.571007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.571059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.571567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.572826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.572890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.572942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.572993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.573513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.573534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.573705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.573769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:27.215 [2024-07-11 02:45:17.573852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.573905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.574432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.575683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.575747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.575806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.575858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.576173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.576194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.576368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.576424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.576477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.576529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.576850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.577999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.578062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.578115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.578165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.578670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.578694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.578876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.578935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.578987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.579042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:27.215 [2024-07-11 02:45:17.579528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.580855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.580917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.580969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.581020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.581335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.581356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.581531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.581586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.581641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.581692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.582199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.583307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.583370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.583454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.583508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.583849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.583871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.584041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.584097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.584149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.584200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.584725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.586194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:27.215 [2024-07-11 02:45:17.586257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.586313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.586364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.586737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.586765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.586940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.586995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.587047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.587098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.587522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.588625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.588690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.588742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.588809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.589371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.589392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.589562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.589617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.589670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.589722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.590287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.591617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.591679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.591730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:27.215 [2024-07-11 02:45:17.591801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.592117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.592137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.592307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.592361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.215 [2024-07-11 02:45:17.592417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.592469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.592971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.594132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.594198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.594249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.594300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.594740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.594768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.594939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.594994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.595045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.595098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.595604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.597163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.597232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.597292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.597345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.597658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:27.216 [2024-07-11 02:45:17.597678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.597861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.597918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.597969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.598020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.598332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.599637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.599714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.599774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.599826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.600139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.600160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.600331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.600385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.600445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.600498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.601031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.602677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.602741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.602802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.602855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.603205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.603225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.216 [2024-07-11 02:45:17.603395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:27.216 [2024-07-11 02:45:17.603450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:27.216 -- 00:39:27.743 last message repeated several hundred times (02:45:17.603450 through 02:45:18.032874)
00:39:27.743 [2024-07-11 02:45:18.032924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.743 [2024-07-11 02:45:18.034400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.743 [2024-07-11 02:45:18.034466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.743 [2024-07-11 02:45:18.034517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.743 [2024-07-11 02:45:18.034568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.743 [2024-07-11 02:45:18.034888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.743 [2024-07-11 02:45:18.034910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.743 [2024-07-11 02:45:18.035080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.743 [2024-07-11 02:45:18.035143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.743 [2024-07-11 02:45:18.035198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.743 [2024-07-11 02:45:18.035253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.743 [2024-07-11 02:45:18.037828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.743 [2024-07-11 02:45:18.037892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.743 [2024-07-11 02:45:18.037944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.743 [2024-07-11 02:45:18.038001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.743 [2024-07-11 02:45:18.038315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.743 [2024-07-11 02:45:18.038335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.743 [2024-07-11 02:45:18.038507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.743 [2024-07-11 02:45:18.038570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.743 [2024-07-11 02:45:18.038623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:27.743 [2024-07-11 02:45:18.038675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.248065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.248145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.250156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:28.004 [2024-07-11 02:45:18.261813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.261888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.262473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.262532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.262999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.263485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.263903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.263924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.265804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.267836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.269828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.271227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.275720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.276894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.278667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.280687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.281008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.281029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.282394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.284189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.286220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.288183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.293873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.295864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:28.004 [2024-07-11 02:45:18.297272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.299042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.299357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.299378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.301460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.301963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.302450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.302942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.309815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.311829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.313809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.314304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.314828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.314850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.315441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.317000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.318666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.319520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.323853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.324359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.324853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.325340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.325893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.325915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:28.004 [2024-07-11 02:45:18.327708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.328214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.330006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.330495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.333842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.334345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.334861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.335358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.335776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.335796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.336392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.337028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.339051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.340190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.344890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.345392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.346784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.347532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.347855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.347877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.348473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.349083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.350625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.352032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:28.004 [2024-07-11 02:45:18.355074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.357067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.358738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.360056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.360375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.004 [2024-07-11 02:45:18.360396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.361034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.361539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.362040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.362543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.364808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.365311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.365810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.366301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.366695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.366717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.367321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.367822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.368314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.368814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.371178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.371680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.372179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.372675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:28.005 [2024-07-11 02:45:18.373164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.373185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.373793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.374297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.374794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.375290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.379453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.379968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.380459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.380960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.381365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.381386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.382008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.382505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.383006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.383516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.387542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.387610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.388103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.388165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.388539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.388560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.389162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.389659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:28.005 [2024-07-11 02:45:18.390157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.390648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.394774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.394855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.395343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.395864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.396405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.396426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.397034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.397534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.398044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.398536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.401731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.403700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.405636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.405701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.406023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.406045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.408171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.408678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.409890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.409950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.414291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.416095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:28.005 [2024-07-11 02:45:18.416156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.418231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.418551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.418577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.419187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.419250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.419735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.419795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.424644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.005 [2024-07-11 02:45:18.424720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.426288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.426894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.427414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.427438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.429234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.429298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.429789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.429844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.432594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.434638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.436066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.436121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.436644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.436664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:28.268 [2024-07-11 02:45:18.438214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.438278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.440142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.440209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.444065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.444885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.444944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.446345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.446663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.446686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.448865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.448933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.449693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.449767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.454618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.454687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.456699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.458703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.459209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.459230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.461093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.463118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.465134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.465192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:28.268 [2024-07-11 02:45:18.468726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.469910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.469969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.471717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.472152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.472174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.474227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.474301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.476119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.476180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.479804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.481684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.481743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.483743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.484067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.484087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.485508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.485574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.487361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.487417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.490816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.491494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.491550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.492500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:28.268 [2024-07-11 02:45:18.492864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.492885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.495023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.495088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.497069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.497137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.499950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.500451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.500523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.502369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.502913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.502936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.503539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.503624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.505640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.505701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.508645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.509372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.509432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.268 [2024-07-11 02:45:18.510631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.511150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.511171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.513264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.513338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:28.269 [2024-07-11 02:45:18.515358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.515417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.518097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.520119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.520180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.520668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.521088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.521110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.522020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.522087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.522573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.522640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.525284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.527120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.527182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.528969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.529489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.529511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.530320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.530389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.531557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.531614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.534513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.536511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:28.269 [2024-07-11 02:45:18.536573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.538595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.539007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.539032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.541157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.541231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.541720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.541782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.544943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.546458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.546515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.548528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.548851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.548872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.551021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.551087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.551995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.552052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.555793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.557775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.557834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.559868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.560188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.269 [2024-07-11 02:45:18.560209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:28.269 [2024-07-11 02:45:18.561991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[identical "Failed to get src_mbufs!" errors from accel_dpdk_cryptodev.c:468 repeated several hundred times, timestamps 2024-07-11 02:45:18.562063 through 02:45:18.976205 (console time 00:39:28.269-00:39:28.798); duplicate lines elided]
00:39:28.798 [2024-07-11 02:45:18.978052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.798 [2024-07-11 02:45:18.978119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.798 [2024-07-11 02:45:18.978434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.798 [2024-07-11 02:45:18.978454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.798 [2024-07-11 02:45:18.980631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.798 [2024-07-11 02:45:18.980698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.798 [2024-07-11 02:45:18.981621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.798 [2024-07-11 02:45:18.981679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.798 [2024-07-11 02:45:18.982082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.798 [2024-07-11 02:45:18.986061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.798 [2024-07-11 02:45:18.988053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.798 [2024-07-11 02:45:18.988114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:18.990176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:18.990496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:18.990518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:18.992432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:18.992511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:18.994545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:18.994603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:18.994924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:18.999302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:18.999371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.001392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.002670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:28.799 [2024-07-11 02:45:19.003033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.003054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.005188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.005265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.007007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.007064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.007380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.009790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.011829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.013178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.013236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.013553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.013573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.015712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.015784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.017805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.017864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.018330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.022212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.024000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.024067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.026036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.026354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.026376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:28.799 [2024-07-11 02:45:19.027812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.027880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.029433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.029931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.030345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.034827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.034897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.036885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.038337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.038718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.038744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.040898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.040969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.042557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.042613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.043149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.045394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.046400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.046461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.048227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.048546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.048566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.050727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.050804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:28.799 [2024-07-11 02:45:19.052451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.052510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.053048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.055247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.057233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.057293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.059016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.059374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.059395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.061537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.061602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.062914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.062972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.063426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.065423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.066984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.067044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.068978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.069299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.069320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.070916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.070993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.072789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.072845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:28.799 [2024-07-11 02:45:19.073349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.075279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.076901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.076961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.078934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.799 [2024-07-11 02:45:19.079252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.079274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.080866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.080931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.082827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.082887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.083202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.085077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.086189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.086249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.087993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.088313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.088334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.090427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.090495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.092233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.092300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.092616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.094157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:28.800 [2024-07-11 02:45:19.095504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.095563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.096143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.096460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.096481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.098617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.098682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.100698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.100756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.101217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.102210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.104244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.104304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.104805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.105340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.105363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.107259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.107323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.109342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.109399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.109714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.110898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.112922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.112982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:28.800 [2024-07-11 02:45:19.115007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.115519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.115541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.116745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.116815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.117302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.117364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.117681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.119032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.120820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.120878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.122897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.123217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.123238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.124444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.124506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.126257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.128268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.128589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.129804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.131802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.131877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.133889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.134205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:28.800 [2024-07-11 02:45:19.134225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.134395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.135389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.137156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.137215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.137530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.138572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.139948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.140008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.142009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.142479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.142500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.142677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.142732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.142790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.142842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.143205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.144405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.145937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.145999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.146485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.146864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.146885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.147064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:28.800 [2024-07-11 02:45:19.147120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.147172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.147244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.147707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.148979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.150140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.150200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.150957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.800 [2024-07-11 02:45:19.151462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.151483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.151661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.151728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.151802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.151867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.152184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.153297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.153810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.153867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.153919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.154253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.154273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.154443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.154508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.154560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:28.801 [2024-07-11 02:45:19.154629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.155144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.156289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.156350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.156401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.156459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.156941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.156961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.157133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.157188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.157243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.157297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.157823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.159158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.159221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.159273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.159324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.159849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.159870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.160046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.160105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.160156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.160206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.160520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:28.801 [2024-07-11 02:45:19.161590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.161670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.161726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.161785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.162103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.162124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.162297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.162357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.162408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.162458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.162841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.164131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.164202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.164257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.164308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.164877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.164899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.165070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.165126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.165178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.165229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.165655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.166820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.166884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:28.801 [2024-07-11 02:45:19.166936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.166987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.167383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.167404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.167577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.167632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.167688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.167739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.168249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.169506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.169568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.169620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.169671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.170205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.170225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.170400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.170455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.170506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.170557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.171072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.172225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.172285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.172336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.172387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:28.801 [2024-07-11 02:45:19.172827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.172849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.173025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.173080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.173131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.173184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.173721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.175093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.175154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.175206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.801 [2024-07-11 02:45:19.175258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.802 [2024-07-11 02:45:19.175741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.802 [2024-07-11 02:45:19.175768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.802 [2024-07-11 02:45:19.175950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.802 [2024-07-11 02:45:19.176016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.802 [2024-07-11 02:45:19.176081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.802 [2024-07-11 02:45:19.176132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.802 [2024-07-11 02:45:19.176445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.802 [2024-07-11 02:45:19.177491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.802 [2024-07-11 02:45:19.177551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.802 [2024-07-11 02:45:19.177605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.802 [2024-07-11 02:45:19.177666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.802 [2024-07-11 02:45:19.177987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:28.802 [2024-07-11 02:45:19.178009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:28.802 [2024-07-11 02:45:19.178182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[identical *ERROR* line from accel_dpdk_cryptodev.c:468 repeated continuously between 02:45:19.178182 and 02:45:19.717879]
00:39:29.329 [2024-07-11 02:45:19.717879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:39:29.329 [2024-07-11 02:45:19.718245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.718266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.718881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.718948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.719434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.719488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.719927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.722423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.722940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.723016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.723502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.723881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.723902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.724509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.724582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.726574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.726640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.727110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.731209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.733010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.733069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.735085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.735402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.735422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:29.329 [2024-07-11 02:45:19.736860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.736926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.737609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.737666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.738001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.742853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.744738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.744804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.746804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.747238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.747259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.748897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.329 [2024-07-11 02:45:19.748963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.750807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.750866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.751270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.753171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.754629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.754691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.756673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.757144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.757165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.758124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.758187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:29.592 [2024-07-11 02:45:19.759711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.761476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.761854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.764980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.766953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.767018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.769046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.769414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.769435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.769603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.770104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.771280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.771337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.771697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.772756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.774602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.774686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.775186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.775718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.775745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.775923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.775979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.776030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.776080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:29.592 [2024-07-11 02:45:19.776463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.777457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.779216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.779282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.781257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.781575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.781596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.781777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.781833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.781884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.781935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.782422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.783485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.785465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.785526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.787521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.787924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.787945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.788119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.788174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.788225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.788276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.788635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.789630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:29.592 [2024-07-11 02:45:19.790137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.790193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.790250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.790646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.790666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.790852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.790921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.790972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.791023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.791337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.792490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.792552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.792603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.792661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.792982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.793002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.793174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.793229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.793306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.793359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.793817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.794917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.794986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.795042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:29.592 [2024-07-11 02:45:19.795093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.795407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.795427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.795595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.795651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.795702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.592 [2024-07-11 02:45:19.795769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.796162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.797236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.797302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.797354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.797405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.797823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.797844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.798017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.798072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.798124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.798175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.798569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.799593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.799656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.799707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.799765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.800194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:29.593 [2024-07-11 02:45:19.800214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.800391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.800446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.800497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.800548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.800950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.802278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.802340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.802391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.802444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.802765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.802785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.802956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.803011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.803067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.803122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.803437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.804637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.804699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.804749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.804813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.805128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.805149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.805319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:29.593 [2024-07-11 02:45:19.805373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.805425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.805475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.805797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.807076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.807137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.807194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.807245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.807578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.807599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.807777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.807833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.807884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.807935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.808249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.809376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.809444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.809521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.809574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.809895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.809916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.810087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.810147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.810198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:29.593 [2024-07-11 02:45:19.810255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.810570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.811836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.811897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.811948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.812012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.812329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.812349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.812520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.812576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.812631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.812682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.813106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.814157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.814219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.814273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.814332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.814778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.814799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.814971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.815026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.815079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.815130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.593 [2024-07-11 02:45:19.815497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:29.594 [2024-07-11 02:45:19.816613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.818283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.818342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.820143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.820465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.820486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.820659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.820714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.820773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.820825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.821139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.822399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.824193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.824258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.824311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.824626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.824646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.824827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.824885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.824936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.824987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.825301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.826828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.826894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:29.594 [2024-07-11 02:45:19.826946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.828917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.829237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.829257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.829426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.831445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.831502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.832783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.833153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.834163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.834233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.834727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.834788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.835315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.835335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.835499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.837276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.837333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.839337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.839657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.840948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.842986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.843045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.843096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:29.594 [2024-07-11 02:45:19.843557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.843578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.843746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.845096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.845153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.847139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.847623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.849146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.849255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.849313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.851297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.851704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.851725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.851935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.852435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.852494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.854395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.854752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.855737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.855805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.857653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.857710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.858031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.858051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:29.594 [2024-07-11 02:45:19.858220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.860239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.860298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.860356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.860838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.862039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.863914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.863982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.864046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.864361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.864382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.864553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.865923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.865981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.867723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.868045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.871203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.871279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.873294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.873350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.873665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.873686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.873861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:39:29.594 [2024-07-11 02:45:19.874988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:39:29.594 [2024-07-11 02:45:19.875052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:39:29.859 (last message repeated several hundred times between 02:45:19.875052 and 02:45:20.243787)
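The burst above comes from SPDK's dpdk_cryptodev accel module failing to pull source mbufs out of its DPDK mempool while bdevperf keeps 128 65536-byte verify IOs in flight per job. The module's actual code is not reproduced in this log; the sketch below only illustrates that failure pattern against a plain DPDK pktmbuf pool, with the pool, task struct, and counts invented for the example. rte_pktmbuf_alloc_bulk() is all-or-nothing, so a momentarily drained pool fails the whole request, which is the condition the log keeps reporting.

```c
#include <errno.h>
#include <stdio.h>

#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Hypothetical per-IO task; SPDK's real structure differs. */
struct crypto_task {
	struct rte_mbuf *src_mbufs[64];
	unsigned int cryop_cnt;		/* mbufs needed for this IO */
};

/*
 * Sketch of the failing step. rte_pktmbuf_alloc_bulk() either hands back
 * every requested mbuf or none at all; when other in-flight tasks have
 * drained the shared pool, the call fails outright and the task cannot
 * be prepared, which is what "Failed to get src_mbufs!" records above.
 */
static int
task_alloc_resources(struct rte_mempool *mbuf_pool, struct crypto_task *task)
{
	if (rte_pktmbuf_alloc_bulk(mbuf_pool, task->src_mbufs,
				   task->cryop_cnt) != 0) {
		fprintf(stderr, "Failed to get src_mbufs!\n");
		return -ENOMEM;		/* caller can queue the task and retry */
	}
	return 0;
}
```

Since the Fail/s column in the results below stays at 0.00, these errors are back-pressure rather than data-path failures: presumably the task is re-queued and retried once completions return mbufs to the pool.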
00:39:31.235
00:39:31.235 Latency(us)
00:39:31.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:31.235 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:39:31.235 Verification LBA range: start 0x0 length 0x100
00:39:31.235 crypto_ram : 5.87 43.58 2.72 0.00 0.00 2842232.65 53340.61 2888598.93
00:39:31.235 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:39:31.235 Verification LBA range: start 0x100 length 0x100
00:39:31.235 crypto_ram : 5.99 37.05 2.32 0.00 0.00 3128988.47 53112.65 3472154.27
00:39:31.235 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:39:31.235 Verification LBA range: start 0x0 length 0x100
00:39:31.235 crypto_ram2 : 5.88 43.91 2.74 0.00 0.00 2729072.11 27810.06 2888598.93
00:39:31.235 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:39:31.235 Verification LBA range: start 0x100 length 0x100
00:39:31.235 crypto_ram2 : 6.04 41.89 2.62 0.00 0.00 2719209.94 29405.72 3515920.92
00:39:31.235 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:39:31.235 Verification LBA range: start 0x0 length 0x100
00:39:31.235 crypto_ram3 : 5.60 287.37 17.96 0.00 0.00 397572.17 46730.02 543435.91
00:39:31.235 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:39:31.235 Verification LBA range: start 0x100 length 0x100
00:39:31.235 crypto_ram3 : 5.82 247.30 15.46 0.00 0.00 439332.73 15728.64 492374.82
00:39:31.235 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:39:31.235 Verification LBA range: start 0x0 length 0x100
00:39:31.235 crypto_ram4 : 5.70 299.85 18.74 0.00 0.00 368322.83 26100.42 443137.34
00:39:31.235 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:39:31.235 Verification LBA range: start 0x100 length 0x100
00:39:31.235 crypto_ram4 : 5.33 211.98 13.25 0.00 0.00 575576.64 82518.37 1116049.59
00:39:31.235 ===================================================================================================================
00:39:31.235 Total : 1212.91 75.81 0.00 0.00 780520.89 15728.64 3515920.92
00:39:31.235
00:39:31.235 real 0m9.302s
00:39:31.235 user 0m17.563s
00:39:31.235 sys 0m0.610s
02:45:21 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable
02:45:21 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:39:31.235 ************************************
00:39:31.235 END TEST bdev_verify_big_io
00:39:31.235 ************************************
00:39:31.493 02:45:21 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0
00:39:31.493 02:45:21 blockdev_crypto_aesni -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
02:45:21 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
02:45:21 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable
02:45:21 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:39:31.493 ************************************
00:39:31.493 START TEST bdev_write_zeroes
00:39:31.493 ************************************
00:39:31.494 02:45:21 blockdev_crypto_aesni.bdev_write_zeroes --
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:31.494 [2024-07-11 02:45:21.780999] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:39:31.494 [2024-07-11 02:45:21.781063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117634 ] 00:39:31.494 [2024-07-11 02:45:21.916108] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:31.751 [2024-07-11 02:45:21.964224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:31.752 [2024-07-11 02:45:21.985478] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:39:31.752 [2024-07-11 02:45:21.993506] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:39:31.752 [2024-07-11 02:45:22.001524] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:39:31.752 [2024-07-11 02:45:22.103857] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:39:34.279 [2024-07-11 02:45:24.483741] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:39:34.279 [2024-07-11 02:45:24.483803] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:39:34.279 [2024-07-11 02:45:24.483818] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:34.279 [2024-07-11 02:45:24.491767] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:39:34.279 [2024-07-11 02:45:24.491787] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:39:34.280 [2024-07-11 02:45:24.491798] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:34.280 [2024-07-11 02:45:24.499786] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:39:34.280 [2024-07-11 02:45:24.499804] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:39:34.280 [2024-07-11 02:45:24.499816] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:34.280 [2024-07-11 02:45:24.507809] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:39:34.280 [2024-07-11 02:45:24.507826] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:39:34.280 [2024-07-11 02:45:24.507837] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:34.280 Running I/O for 1 seconds... 
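A quick note on reading the bdevperf tables in this run: the IOPS and MiB/s columns are consistent with MiB/s = IOPS x IO size / 2^20, so either column can be sanity-checked from the other. The figures in the sketch below are copied from the verify table above (65536-byte IOs); the same identity holds for the 4096-byte write_zeroes table that follows (e.g. 15302.87 x 4096 / 2^20 = 59.78).

```c
#include <stdio.h>

/* Cross-check bdevperf's MiB/s column: MiB/s = IOPS * io_size / 2^20.
 * IOPS and MiB/s values are copied from the verify table (IO size 65536). */
int main(void)
{
	const double io_size = 65536.0;
	const double iops[]     = { 43.58, 287.37, 1212.91 }; /* crypto_ram, crypto_ram3, Total */
	const double reported[] = {  2.72,  17.96,   75.81 }; /* MiB/s column */

	for (int i = 0; i < 3; i++)
		printf("computed %.2f MiB/s vs reported %.2f\n",
		       iops[i] * io_size / (1024.0 * 1024.0), reported[i]);
	return 0;
}
```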
00:39:35.213
00:39:35.213 Latency(us)
00:39:35.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:35.213 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:39:35.213 crypto_ram : 1.02 1999.66 7.81 0.00 0.00 63406.64 5413.84 77047.54
00:39:35.213 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:39:35.213 crypto_ram2 : 1.03 2005.57 7.83 0.00 0.00 62864.65 5385.35 71576.71
00:39:35.213 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:39:35.213 crypto_ram3 : 1.02 15302.87 59.78 0.00 0.00 8214.67 2436.23 10713.71
00:39:35.213 Job: crypto_ram4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:39:35.213 crypto_ram4 : 1.02 15347.47 59.95 0.00 0.00 8163.78 1951.83 8605.16
00:39:35.213 ===================================================================================================================
00:39:35.213 Total : 34655.57 135.37 0.00 0.00 14559.18 1951.83 77047.54
00:39:35.779
00:39:35.779 real 0m4.303s
00:39:35.779 user 0m3.740s
00:39:35.779 sys 0m0.521s
02:45:26 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable
02:45:26 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:39:35.779 ************************************
00:39:35.779 END TEST bdev_write_zeroes
00:39:35.779 ************************************
00:39:35.779 02:45:26 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0
00:39:35.779 02:45:26 blockdev_crypto_aesni -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
02:45:26 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
02:45:26 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable
02:45:26 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:39:35.779 ************************************
00:39:35.779 START TEST bdev_json_nonenclosed
00:39:35.779 ************************************
00:39:35.779 02:45:26 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
[2024-07-11 02:45:26.155255] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
[2024-07-11 02:45:26.155317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118259 ]
[2024-07-11 02:45:26.292725] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:39:36.037 [2024-07-11 02:45:26.340244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:39:36.037 [2024-07-11 02:45:26.340312] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:39:36.037 [2024-07-11 02:45:26.340332] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:39:36.037 [2024-07-11 02:45:26.340344] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:36.037 00:39:36.037 real 0m0.328s 00:39:36.037 user 0m0.177s 00:39:36.037 sys 0m0.149s 00:39:36.037 02:45:26 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:39:36.037 02:45:26 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:36.037 02:45:26 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:39:36.037 ************************************ 00:39:36.037 END TEST bdev_json_nonenclosed 00:39:36.037 ************************************ 00:39:36.295 02:45:26 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 234 00:39:36.295 02:45:26 blockdev_crypto_aesni -- bdev/blockdev.sh@782 -- # true 00:39:36.295 02:45:26 blockdev_crypto_aesni -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:36.295 02:45:26 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:39:36.295 02:45:26 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:36.295 02:45:26 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:36.295 ************************************ 00:39:36.295 START TEST bdev_json_nonarray 00:39:36.295 ************************************ 00:39:36.295 02:45:26 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:36.295 [2024-07-11 02:45:26.572410] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:39:36.295 [2024-07-11 02:45:26.572470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118302 ] 00:39:36.295 [2024-07-11 02:45:26.708364] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:36.554 [2024-07-11 02:45:26.756941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:36.554 [2024-07-11 02:45:26.757010] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
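For context on the JSON-config failures above: bdev_json_nonenclosed and bdev_json_nonarray deliberately feed bdevperf configs that break the two shape rules json_config_prepare_ctx enforces, namely that the config is a JSON object and that its "subsystems" member is an array, and then expect the app to exit non-zero (the es=234 in the traces). The snippet below is a toy re-creation of just those two checks, not SPDK's parser (which walks real parsed JSON tokens), and the literal configs are illustrative since the actual nonenclosed.json/nonarray.json contents are not reproduced in this log.

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Toy version of the two config-shape checks whose error strings appear
 * in the log: "not enclosed in {}" and "'subsystems' should be an array". */
static const char *check_config(const char *json)
{
	while (isspace((unsigned char)*json))
		json++;
	if (*json != '{')
		return "Invalid JSON configuration: not enclosed in {}.";

	const char *subs = strstr(json, "\"subsystems\"");
	if (subs != NULL && (subs = strchr(subs, ':')) != NULL) {
		subs++;
		while (isspace((unsigned char)*subs))
			subs++;
		if (*subs != '[')
			return "Invalid JSON configuration: 'subsystems' should be an array.";
	}
	return "ok";
}

int main(void)
{
	puts(check_config("\"subsystems\": []"));   /* nonenclosed case */
	puts(check_config("{\"subsystems\": {}}")); /* nonarray case */
	puts(check_config("{\"subsystems\": []}")); /* minimal valid shape */
	return 0;
}
```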
00:39:36.554 [2024-07-11 02:45:26.757030] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:39:36.554 [2024-07-11 02:45:26.757042] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:36.554 00:39:36.554 real 0m0.327s 00:39:36.554 user 0m0.174s 00:39:36.554 sys 0m0.151s 00:39:36.554 02:45:26 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:39:36.554 02:45:26 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:36.554 02:45:26 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:39:36.554 ************************************ 00:39:36.554 END TEST bdev_json_nonarray 00:39:36.554 ************************************ 00:39:36.554 02:45:26 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 234 00:39:36.554 02:45:26 blockdev_crypto_aesni -- bdev/blockdev.sh@785 -- # true 00:39:36.554 02:45:26 blockdev_crypto_aesni -- bdev/blockdev.sh@787 -- # [[ crypto_aesni == bdev ]] 00:39:36.554 02:45:26 blockdev_crypto_aesni -- bdev/blockdev.sh@794 -- # [[ crypto_aesni == gpt ]] 00:39:36.554 02:45:26 blockdev_crypto_aesni -- bdev/blockdev.sh@798 -- # [[ crypto_aesni == crypto_sw ]] 00:39:36.554 02:45:26 blockdev_crypto_aesni -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:39:36.554 02:45:26 blockdev_crypto_aesni -- bdev/blockdev.sh@811 -- # cleanup 00:39:36.554 02:45:26 blockdev_crypto_aesni -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:39:36.554 02:45:26 blockdev_crypto_aesni -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:39:36.554 02:45:26 blockdev_crypto_aesni -- bdev/blockdev.sh@26 -- # [[ crypto_aesni == rbd ]] 00:39:36.554 02:45:26 blockdev_crypto_aesni -- bdev/blockdev.sh@30 -- # [[ crypto_aesni == daos ]] 00:39:36.554 02:45:26 blockdev_crypto_aesni -- bdev/blockdev.sh@34 -- # [[ crypto_aesni = \g\p\t ]] 00:39:36.554 02:45:26 blockdev_crypto_aesni -- bdev/blockdev.sh@40 -- # [[ crypto_aesni == xnvme ]] 00:39:36.554 00:39:36.554 real 1m12.968s 00:39:36.554 user 2m40.350s 00:39:36.554 sys 0m10.603s 00:39:36.554 02:45:26 blockdev_crypto_aesni -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:36.554 02:45:26 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:36.554 ************************************ 00:39:36.554 END TEST blockdev_crypto_aesni 00:39:36.554 ************************************ 00:39:36.554 02:45:26 -- common/autotest_common.sh@1142 -- # return 0 00:39:36.554 02:45:26 -- spdk/autotest.sh@358 -- # run_test blockdev_crypto_sw /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw 00:39:36.554 02:45:26 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:39:36.554 02:45:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:36.554 02:45:26 -- common/autotest_common.sh@10 -- # set +x 00:39:36.813 ************************************ 00:39:36.813 START TEST blockdev_crypto_sw 00:39:36.813 ************************************ 00:39:36.813 02:45:26 blockdev_crypto_sw -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw 00:39:36.813 * Looking for test storage... 
00:39:36.813 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/nbd_common.sh@6 -- # set -e 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/blockdev.sh@20 -- # : 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/blockdev.sh@674 -- # uname -s 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/blockdev.sh@682 -- # test_type=crypto_sw 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/blockdev.sh@683 -- # crypto_device= 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/blockdev.sh@684 -- # dek= 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/blockdev.sh@685 -- # env_ctx= 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/blockdev.sh@690 -- # [[ crypto_sw == bdev ]] 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/blockdev.sh@690 -- # [[ crypto_sw == crypto_* ]] 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=2118369 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/blockdev.sh@49 -- # waitforlisten 2118369 00:39:36.813 02:45:27 blockdev_crypto_sw -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:39:36.813 02:45:27 blockdev_crypto_sw -- common/autotest_common.sh@829 -- # '[' -z 2118369 ']' 00:39:36.813 02:45:27 blockdev_crypto_sw -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:36.813 02:45:27 blockdev_crypto_sw -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:36.813 02:45:27 blockdev_crypto_sw -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
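For orientation before the trace continues: spdk_tgt is launched with --wait-for-rpc, and the crypto_sw stack that the NOTICE lines below report is then assembled over the RPC socket. A best-effort reconstruction of that sequence, with names and sizes taken from this log but flags, ordering, and key material assumed rather than copied from the run:

  # two malloc base bdevs, 16 MiB each (32768 x 512 B and 4096 x 4096 B per the bdev dumps below)
  scripts/rpc.py bdev_malloc_create -b Malloc0 16 512
  scripts/rpc.py bdev_malloc_create -b Malloc1 16 4096
  # register a software AES key, then layer a crypto vbdev on each base bdev
  scripts/rpc.py accel_crypto_key_create -c AES_CBC -k <hex_key> -n test_dek_sw   # repeated for test_dek_sw2 and test_dek_sw3
  scripts/rpc.py bdev_crypto_create -n test_dek_sw Malloc0 crypto_ram
  scripts/rpc.py bdev_crypto_create -n test_dek_sw2 Malloc1 crypto_ram2
  scripts/rpc.py bdev_crypto_create -n test_dek_sw3 crypto_ram2 crypto_ram3

Note that crypto_ram3 stacks on crypto_ram2 rather than on a malloc bdev, which is why the dump further down reports base_bdev_name "crypto_ram2".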
00:39:36.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:36.813 02:45:27 blockdev_crypto_sw -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:36.813 02:45:27 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:39:36.813 [2024-07-11 02:45:27.179608] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:39:36.813 [2024-07-11 02:45:27.179681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118369 ] 00:39:37.072 [2024-07-11 02:45:27.317395] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:37.072 [2024-07-11 02:45:27.367782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:38.006 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:38.006 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@862 -- # return 0 00:39:38.006 02:45:28 blockdev_crypto_sw -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:39:38.006 02:45:28 blockdev_crypto_sw -- bdev/blockdev.sh@711 -- # setup_crypto_sw_conf 00:39:38.006 02:45:28 blockdev_crypto_sw -- bdev/blockdev.sh@193 -- # rpc_cmd 00:39:38.006 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:38.006 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:39:38.006 Malloc0 00:39:38.006 Malloc1 00:39:38.006 true 00:39:38.006 true 00:39:38.006 true 00:39:38.006 [2024-07-11 02:45:28.385845] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:39:38.006 crypto_ram 00:39:38.006 [2024-07-11 02:45:28.393871] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:39:38.006 crypto_ram2 00:39:38.006 [2024-07-11 02:45:28.401893] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:39:38.006 crypto_ram3 00:39:38.006 [ 00:39:38.006 { 00:39:38.006 "name": "Malloc1", 00:39:38.006 "aliases": [ 00:39:38.006 "eae2a89e-37de-4a92-a196-3774fa04a275" 00:39:38.006 ], 00:39:38.006 "product_name": "Malloc disk", 00:39:38.006 "block_size": 4096, 00:39:38.006 "num_blocks": 4096, 00:39:38.006 "uuid": "eae2a89e-37de-4a92-a196-3774fa04a275", 00:39:38.006 "assigned_rate_limits": { 00:39:38.006 "rw_ios_per_sec": 0, 00:39:38.006 "rw_mbytes_per_sec": 0, 00:39:38.006 "r_mbytes_per_sec": 0, 00:39:38.006 "w_mbytes_per_sec": 0 00:39:38.006 }, 00:39:38.006 "claimed": true, 00:39:38.006 "claim_type": "exclusive_write", 00:39:38.006 "zoned": false, 00:39:38.006 "supported_io_types": { 00:39:38.006 "read": true, 00:39:38.006 "write": true, 00:39:38.006 "unmap": true, 00:39:38.006 "flush": true, 00:39:38.006 "reset": true, 00:39:38.006 "nvme_admin": false, 00:39:38.006 "nvme_io": false, 00:39:38.006 "nvme_io_md": false, 00:39:38.006 "write_zeroes": true, 00:39:38.006 "zcopy": true, 00:39:38.006 "get_zone_info": false, 00:39:38.006 "zone_management": false, 00:39:38.006 "zone_append": false, 00:39:38.006 "compare": false, 00:39:38.006 "compare_and_write": false, 00:39:38.006 "abort": true, 00:39:38.006 "seek_hole": false, 00:39:38.006 "seek_data": false, 00:39:38.006 "copy": true, 00:39:38.006 "nvme_iov_md": false 00:39:38.006 }, 00:39:38.006 "memory_domains": [ 00:39:38.006 { 00:39:38.006 "dma_device_id": "system", 00:39:38.006 "dma_device_type": 1 00:39:38.006 }, 00:39:38.006 { 
00:39:38.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:38.006 "dma_device_type": 2 00:39:38.006 } 00:39:38.006 ], 00:39:38.006 "driver_specific": {} 00:39:38.006 } 00:39:38.006 ] 00:39:38.006 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:38.006 02:45:28 blockdev_crypto_sw -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:39:38.263 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:38.263 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:39:38.263 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:38.263 02:45:28 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # cat 00:39:38.263 02:45:28 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:39:38.263 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:38.263 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:39:38.263 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:38.263 02:45:28 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:39:38.263 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:38.263 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:39:38.263 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:38.263 02:45:28 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:39:38.263 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:38.263 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:39:38.263 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:38.263 02:45:28 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:39:38.263 02:45:28 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:39:38.263 02:45:28 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:39:38.263 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:38.263 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:39:38.263 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:38.263 02:45:28 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:39:38.263 02:45:28 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # jq -r .name 00:39:38.263 02:45:28 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "cb3b4c2a-b541-50d1-a6a7-91d65bf7b17c"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "cb3b4c2a-b541-50d1-a6a7-91d65bf7b17c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "665a05ea-4f1e-5db5-8b56-d24741e26045"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "665a05ea-4f1e-5db5-8b56-d24741e26045",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:39:38.263 02:45:28 blockdev_crypto_sw -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:39:38.263 02:45:28 blockdev_crypto_sw -- bdev/blockdev.sh@752 -- # hello_world_bdev=crypto_ram 00:39:38.263 02:45:28 blockdev_crypto_sw -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:39:38.263 02:45:28 blockdev_crypto_sw -- bdev/blockdev.sh@754 -- # killprocess 2118369 00:39:38.263 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@948 -- # '[' -z 2118369 ']' 00:39:38.263 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@952 -- # kill -0 2118369 00:39:38.263 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@953 -- # uname 00:39:38.263 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:38.263 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2118369 00:39:38.521 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:38.521 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:38.521 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2118369' 00:39:38.521 killing process with pid 2118369 00:39:38.521 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@967 -- # kill 2118369 00:39:38.521 02:45:28 blockdev_crypto_sw -- common/autotest_common.sh@972 -- # wait 2118369 00:39:38.780 02:45:29 blockdev_crypto_sw -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:39:38.780 02:45:29 blockdev_crypto_sw -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:39:38.780 
02:45:29 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:39:38.780 02:45:29 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:38.780 02:45:29 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:39:38.780 ************************************ 00:39:38.780 START TEST bdev_hello_world 00:39:38.780 ************************************ 00:39:38.780 02:45:29 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:39:38.780 [2024-07-11 02:45:29.160298] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:39:38.780 [2024-07-11 02:45:29.160350] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118732 ] 00:39:39.037 [2024-07-11 02:45:29.280736] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:39.037 [2024-07-11 02:45:29.333188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:39.295 [2024-07-11 02:45:29.506118] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:39:39.295 [2024-07-11 02:45:29.506182] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:39:39.295 [2024-07-11 02:45:29.506198] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:39.295 [2024-07-11 02:45:29.514134] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:39:39.295 [2024-07-11 02:45:29.514153] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:39:39.295 [2024-07-11 02:45:29.514165] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:39.295 [2024-07-11 02:45:29.522156] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:39:39.295 [2024-07-11 02:45:29.522174] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:39:39.295 [2024-07-11 02:45:29.522185] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:39.295 [2024-07-11 02:45:29.563834] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:39:39.295 [2024-07-11 02:45:29.563879] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:39:39.295 [2024-07-11 02:45:29.563898] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:39:39.295 [2024-07-11 02:45:29.565238] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:39:39.295 [2024-07-11 02:45:29.565312] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:39:39.295 [2024-07-11 02:45:29.565328] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:39:39.295 [2024-07-11 02:45:29.565363] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:39:39.295 00:39:39.295 [2024-07-11 02:45:29.565380] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:39:39.553 00:39:39.553 real 0m0.670s 00:39:39.553 user 0m0.422s 00:39:39.553 sys 0m0.227s 00:39:39.553 02:45:29 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:39.553 02:45:29 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:39:39.553 ************************************ 00:39:39.553 END TEST bdev_hello_world 00:39:39.553 ************************************ 00:39:39.553 02:45:29 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:39:39.553 02:45:29 blockdev_crypto_sw -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:39:39.553 02:45:29 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:39:39.553 02:45:29 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:39.553 02:45:29 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:39:39.553 ************************************ 00:39:39.553 START TEST bdev_bounds 00:39:39.553 ************************************ 00:39:39.553 02:45:29 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:39:39.553 02:45:29 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=2118759 00:39:39.553 02:45:29 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:39:39.553 02:45:29 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:39:39.553 02:45:29 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 2118759' 00:39:39.553 Process bdevio pid: 2118759 00:39:39.553 02:45:29 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 2118759 00:39:39.553 02:45:29 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 2118759 ']' 00:39:39.553 02:45:29 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:39.553 02:45:29 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:39.553 02:45:29 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:39.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:39.553 02:45:29 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:39.553 02:45:29 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:39:39.553 [2024-07-11 02:45:29.921708] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
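The bounds test drives bdevio in two steps, both visible in this trace: the binary is started in wait mode (-w) against the shared JSON config, and tests.py then triggers the CUnit run over its RPC socket. Reduced to the two traced commands (paths shortened; backgrounding added here for illustration):

  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  test/bdev/bdevio/tests.py perform_tests

Everything from the EAL banner below through the Run Summary is the output of that perform_tests call.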
00:39:39.553 [2024-07-11 02:45:29.921770] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118759 ]
00:39:39.810 [2024-07-11 02:45:30.042869] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:39:39.810 [2024-07-11 02:45:30.097330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:39:39.810 [2024-07-11 02:45:30.098794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:39:39.810 [2024-07-11 02:45:30.098797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:39:40.067 [2024-07-11 02:45:30.270803] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw"
00:39:40.068 [2024-07-11 02:45:30.270873] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:39:40.068 [2024-07-11 02:45:30.270893] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:39:40.068 [2024-07-11 02:45:30.278816] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2"
00:39:40.068 [2024-07-11 02:45:30.278836] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:39:40.068 [2024-07-11 02:45:30.278847] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:39:40.068 [2024-07-11 02:45:30.286838] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3"
00:39:40.068 [2024-07-11 02:45:30.286857] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2
00:39:40.068 [2024-07-11 02:45:30.286868] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:39:40.631 02:45:30 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:39:40.631 02:45:30 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@862 -- # return 0
00:39:40.631 02:45:30 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests
00:39:40.632 I/O targets:
00:39:40.632 crypto_ram: 32768 blocks of 512 bytes (16 MiB)
00:39:40.632 crypto_ram3: 4096 blocks of 4096 bytes (16 MiB)
00:39:40.632
00:39:40.632
00:39:40.632 CUnit - A unit testing framework for C - Version 2.1-3
00:39:40.632 http://cunit.sourceforge.net/
00:39:40.632
00:39:40.632
00:39:40.632 Suite: bdevio tests on: crypto_ram3
00:39:40.632 Test: blockdev write read block ...passed
00:39:40.632 Test: blockdev write zeroes read block ...passed
00:39:40.632 Test: blockdev write zeroes read no split ...passed
00:39:40.632 Test: blockdev write zeroes read split ...passed
00:39:40.632 Test: blockdev write zeroes read split partial ...passed
00:39:40.632 Test: blockdev reset ...passed
00:39:40.632 Test: blockdev write read 8 blocks ...passed
00:39:40.632 Test: blockdev write read size > 128k ...passed
00:39:40.632 Test: blockdev write read invalid size ...passed
00:39:40.632 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:39:40.632 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:39:40.632 Test: blockdev write read max offset ...passed
00:39:40.632 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:39:40.632 Test: blockdev writev readv 8 blocks ...passed
00:39:40.632 Test: blockdev writev readv 30 x 1block ...passed
00:39:40.632 Test: blockdev writev readv block ...passed
00:39:40.632 Test: blockdev writev readv size > 128k ...passed
00:39:40.632 Test: blockdev writev readv size > 128k in two iovs ...passed
00:39:40.632 Test: blockdev comparev and writev ...passed
00:39:40.632 Test: blockdev nvme passthru rw ...passed
00:39:40.632 Test: blockdev nvme passthru vendor specific ...passed
00:39:40.632 Test: blockdev nvme admin passthru ...passed
00:39:40.632 Test: blockdev copy ...passed
00:39:40.632 Suite: bdevio tests on: crypto_ram
00:39:40.632 Test: blockdev write read block ...passed
00:39:40.632 Test: blockdev write zeroes read block ...passed
00:39:40.632 Test: blockdev write zeroes read no split ...passed
00:39:40.632 Test: blockdev write zeroes read split ...passed
00:39:40.632 Test: blockdev write zeroes read split partial ...passed
00:39:40.632 Test: blockdev reset ...passed
00:39:40.632 Test: blockdev write read 8 blocks ...passed
00:39:40.632 Test: blockdev write read size > 128k ...passed
00:39:40.632 Test: blockdev write read invalid size ...passed
00:39:40.632 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:39:40.632 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:39:40.632 Test: blockdev write read max offset ...passed
00:39:40.632 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:39:40.632 Test: blockdev writev readv 8 blocks ...passed
00:39:40.632 Test: blockdev writev readv 30 x 1block ...passed
00:39:40.632 Test: blockdev writev readv block ...passed
00:39:40.632 Test: blockdev writev readv size > 128k ...passed
00:39:40.632 Test: blockdev writev readv size > 128k in two iovs ...passed
00:39:40.632 Test: blockdev comparev and writev ...passed
00:39:40.632 Test: blockdev nvme passthru rw ...passed
00:39:40.632 Test: blockdev nvme passthru vendor specific ...passed
00:39:40.632 Test: blockdev nvme admin passthru ...passed
00:39:40.632 Test: blockdev copy ...passed
00:39:40.632
00:39:40.632 Run Summary: Type Total Ran Passed Failed Inactive
00:39:40.632 suites 2 2 n/a 0 0
00:39:40.632 tests 46 46 46 0 0
00:39:40.632 asserts 260 260 260 0 n/a
00:39:40.632
00:39:40.632 Elapsed time = 0.197 seconds
00:39:40.632 0
00:39:40.632 02:45:31 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 2118759
02:45:31 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 2118759 ']'
02:45:31 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 2118759
02:45:31 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@953 -- # uname
02:45:31 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
02:45:31 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2118759
00:39:40.890 02:45:31 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0
02:45:31 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
02:45:31 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2118759'
00:39:40.890 killing process with pid 2118759
00:39:40.890 02:45:31 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@967 -- # kill 2118759
00:39:40.890 02:45:31 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@972
-- # wait 2118759 00:39:40.890 02:45:31 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:39:40.890 00:39:40.890 real 0m1.405s 00:39:40.890 user 0m3.614s 00:39:40.890 sys 0m0.396s 00:39:40.890 02:45:31 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:40.890 02:45:31 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:39:40.890 ************************************ 00:39:40.890 END TEST bdev_bounds 00:39:40.890 ************************************ 00:39:41.148 02:45:31 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:39:41.148 02:45:31 blockdev_crypto_sw -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:39:41.148 02:45:31 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:39:41.148 02:45:31 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:41.148 02:45:31 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:39:41.148 ************************************ 00:39:41.148 START TEST bdev_nbd 00:39:41.148 ************************************ 00:39:41.148 02:45:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:39:41.148 02:45:31 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:39:41.148 02:45:31 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:39:41.148 02:45:31 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:41.148 02:45:31 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:39:41.148 02:45:31 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('crypto_ram' 'crypto_ram3') 00:39:41.148 02:45:31 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:39:41.148 02:45:31 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=2 00:39:41.148 02:45:31 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:39:41.148 02:45:31 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:39:41.148 02:45:31 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:39:41.148 02:45:31 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=2 00:39:41.148 02:45:31 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:41.148 02:45:31 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:39:41.148 02:45:31 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:39:41.148 02:45:31 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:39:41.148 02:45:31 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=2118975 00:39:41.148 02:45:31 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:39:41.148 02:45:31 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@317 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:39:41.148 02:45:31 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 2118975 /var/tmp/spdk-nbd.sock 00:39:41.148 02:45:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 2118975 ']' 00:39:41.148 02:45:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:39:41.148 02:45:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:41.148 02:45:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:39:41.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:39:41.148 02:45:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:41.148 02:45:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:39:41.148 [2024-07-11 02:45:31.429653] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:39:41.148 [2024-07-11 02:45:31.429719] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:41.148 [2024-07-11 02:45:31.558185] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:41.406 [2024-07-11 02:45:31.606116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:41.406 [2024-07-11 02:45:31.774560] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:39:41.406 [2024-07-11 02:45:31.774631] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:39:41.406 [2024-07-11 02:45:31.774647] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:41.406 [2024-07-11 02:45:31.782580] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:39:41.406 [2024-07-11 02:45:31.782605] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:39:41.406 [2024-07-11 02:45:31.782617] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:41.406 [2024-07-11 02:45:31.790602] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:39:41.406 [2024-07-11 02:45:31.790621] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:39:41.406 [2024-07-11 02:45:31.790633] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:41.972 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:41.972 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:39:41.972 02:45:32 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:39:41.972 02:45:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:41.972 02:45:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:39:41.972 02:45:32 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:39:41.972 02:45:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:39:41.972 02:45:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:41.972 02:45:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:39:41.972 02:45:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:39:41.972 02:45:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:39:41.972 02:45:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:39:41.972 02:45:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:39:41.972 02:45:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:39:41.972 02:45:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:39:42.230 02:45:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:39:42.231 02:45:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:39:42.231 02:45:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:39:42.231 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:39:42.231 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:39:42.231 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:39:42.231 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:39:42.231 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:39:42.231 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:39:42.231 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:39:42.231 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:39:42.231 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:42.231 1+0 records in 00:39:42.231 1+0 records out 00:39:42.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000155327 s, 26.4 MB/s 00:39:42.231 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:39:42.231 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:39:42.231 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:39:42.231 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:39:42.231 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:39:42.231 02:45:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:39:42.231 02:45:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:39:42.231 02:45:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:39:42.489 02:45:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:39:42.489 02:45:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:39:42.489 02:45:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:39:42.489 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:39:42.489 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:39:42.489 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:39:42.489 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:39:42.489 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:39:42.489 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:39:42.489 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:39:42.489 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:39:42.489 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:42.489 1+0 records in 00:39:42.489 1+0 records out 00:39:42.489 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000596836 s, 6.9 MB/s 00:39:42.489 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:39:42.489 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:39:42.489 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:39:42.489 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:39:42.489 02:45:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:39:42.489 02:45:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:39:42.489 02:45:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:39:42.489 02:45:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:42.748 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:39:42.748 { 00:39:42.748 "nbd_device": "/dev/nbd0", 00:39:42.748 "bdev_name": "crypto_ram" 00:39:42.748 }, 00:39:42.748 { 00:39:42.748 "nbd_device": "/dev/nbd1", 00:39:42.748 "bdev_name": "crypto_ram3" 00:39:42.748 } 00:39:42.748 ]' 00:39:42.748 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:39:42.748 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:39:42.748 { 00:39:42.748 "nbd_device": "/dev/nbd0", 00:39:42.748 "bdev_name": "crypto_ram" 00:39:42.748 }, 00:39:42.748 { 00:39:42.748 "nbd_device": "/dev/nbd1", 00:39:42.748 "bdev_name": "crypto_ram3" 00:39:42.748 } 00:39:42.748 ]' 00:39:42.748 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:39:42.748 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 
/dev/nbd1' 00:39:42.748 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:42.748 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:42.748 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:42.748 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:39:42.748 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:42.748 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:43.007 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:43.007 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:43.007 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:43.007 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:43.007 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:43.007 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:43.007 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:43.007 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:43.007 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:43.007 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:39:43.265 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:43.265 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:43.265 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:43.265 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:43.265 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:43.265 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:43.265 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:43.265 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:43.265 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:43.265 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:43.265 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:43.523 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:39:43.523 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:39:43.523 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:43.523 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:39:43.523 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:39:43.523 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- 
# grep -c /dev/nbd 00:39:43.523 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:39:43.523 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:39:43.523 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:39:43.523 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:39:43.523 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:39:43.523 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:39:43.523 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:39:43.523 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:43.523 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:39:43.523 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:39:43.523 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:43.523 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:39:43.523 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:39:43.523 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:43.523 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:39:43.523 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:43.523 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:43.523 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:43.523 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:39:43.523 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:43.523 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:43.523 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:39:43.780 /dev/nbd0 00:39:43.780 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:43.780 02:45:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:43.780 02:45:33 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:39:43.780 02:45:33 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:39:43.780 02:45:33 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:39:43.780 02:45:33 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:39:43.780 02:45:33 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:39:43.780 02:45:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:39:43.780 02:45:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:39:43.780 02:45:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:39:43.780 02:45:34 blockdev_crypto_sw.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:43.780 1+0 records in 00:39:43.780 1+0 records out 00:39:43.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275204 s, 14.9 MB/s 00:39:43.780 02:45:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:39:43.780 02:45:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:39:43.780 02:45:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:39:43.780 02:45:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:39:43.780 02:45:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:39:43.780 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:43.780 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:43.780 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd1 00:39:44.038 /dev/nbd1 00:39:44.038 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:39:44.038 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:39:44.038 02:45:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:39:44.038 02:45:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:39:44.038 02:45:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:39:44.038 02:45:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:39:44.038 02:45:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:39:44.038 02:45:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:39:44.038 02:45:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:39:44.038 02:45:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:39:44.038 02:45:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:44.038 1+0 records in 00:39:44.038 1+0 records out 00:39:44.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351581 s, 11.7 MB/s 00:39:44.038 02:45:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:39:44.038 02:45:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:39:44.038 02:45:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:39:44.038 02:45:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:39:44.038 02:45:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:39:44.038 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:44.038 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:44.038 02:45:34 blockdev_crypto_sw.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:44.038 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:44.038 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:44.296 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:39:44.296 { 00:39:44.296 "nbd_device": "/dev/nbd0", 00:39:44.296 "bdev_name": "crypto_ram" 00:39:44.296 }, 00:39:44.296 { 00:39:44.296 "nbd_device": "/dev/nbd1", 00:39:44.296 "bdev_name": "crypto_ram3" 00:39:44.296 } 00:39:44.296 ]' 00:39:44.296 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:39:44.296 { 00:39:44.296 "nbd_device": "/dev/nbd0", 00:39:44.296 "bdev_name": "crypto_ram" 00:39:44.296 }, 00:39:44.296 { 00:39:44.296 "nbd_device": "/dev/nbd1", 00:39:44.296 "bdev_name": "crypto_ram3" 00:39:44.296 } 00:39:44.296 ]' 00:39:44.296 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:44.296 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:39:44.296 /dev/nbd1' 00:39:44.296 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:39:44.296 /dev/nbd1' 00:39:44.296 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:44.296 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:39:44.297 256+0 records in 00:39:44.297 256+0 records out 00:39:44.297 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00642656 s, 163 MB/s 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:39:44.297 256+0 records in 00:39:44.297 256+0 records out 00:39:44.297 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0302606 s, 34.7 MB/s 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:39:44.297 256+0 records in 00:39:44.297 256+0 records out 00:39:44.297 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0455219 s, 23.0 MB/s 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:44.297 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:44.555 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:44.555 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:44.555 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:44.555 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:44.555 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:44.555 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:44.555 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:44.555 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:44.555 02:45:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:44.555 02:45:34 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:39:44.813 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:44.813 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:44.813 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:44.813 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:44.813 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:44.813 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:44.813 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:44.813 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:44.813 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:44.813 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:44.813 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:45.071 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:39:45.071 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:39:45.071 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:45.329 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:39:45.329 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:39:45.329 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:45.329 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:39:45.329 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:39:45.329 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:39:45.329 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:39:45.329 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:39:45.329 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:39:45.329 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:39:45.329 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:45.329 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:45.329 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:39:45.329 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:39:45.329 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:39:45.329 malloc_lvol_verify 00:39:45.329 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:39:45.587 
c8cf5cd9-d3a5-49e3-b227-16e258826b8d 00:39:45.587 02:45:35 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:39:45.844 a796c2d4-9c5b-4068-ac9a-3fb634bbd60f 00:39:45.844 02:45:36 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:39:46.102 /dev/nbd0 00:39:46.102 02:45:36 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:39:46.102 mke2fs 1.46.5 (30-Dec-2021) 00:39:46.102 Discarding device blocks: 0/4096 done 00:39:46.103 Creating filesystem with 4096 1k blocks and 1024 inodes 00:39:46.103 00:39:46.103 Allocating group tables: 0/1 done 00:39:46.103 Writing inode tables: 0/1 done 00:39:46.103 Creating journal (1024 blocks): done 00:39:46.103 Writing superblocks and filesystem accounting information: 0/1 done 00:39:46.103 00:39:46.103 02:45:36 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:39:46.103 02:45:36 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:39:46.103 02:45:36 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:46.103 02:45:36 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:39:46.103 02:45:36 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:46.103 02:45:36 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:39:46.103 02:45:36 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:46.103 02:45:36 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:46.670 02:45:36 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:46.670 02:45:36 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:46.670 02:45:36 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:46.670 02:45:36 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:46.670 02:45:36 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:46.670 02:45:36 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:46.670 02:45:36 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:46.670 02:45:36 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:46.670 02:45:36 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:39:46.670 02:45:36 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:39:46.670 02:45:36 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 2118975 00:39:46.670 02:45:36 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 2118975 ']' 00:39:46.670 02:45:36 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 2118975 00:39:46.670 02:45:36 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:39:46.670 02:45:36 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:46.670 02:45:36 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2118975 
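The killprocess helper being traced at this point follows a guard-then-kill pattern: reject an empty pid, probe liveness with kill -0, record the process name via ps, then kill and wait. A minimal stand-alone sketch of that same pattern (simplified from this trace; the wait call assumes the target pid is a child of the calling shell):

    # Minimal sketch of the killprocess pattern traced above (simplified).
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                 # mirrors the '[' -z ... ']' guard
        kill -0 "$pid" 2>/dev/null || return 0    # kill -0 only tests that the pid is alive
        local name
        name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for an SPDK app
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid" 2>/dev/null                   # assumes $pid is a child of this shell
    }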
00:39:46.670 02:45:36 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:46.670 02:45:36 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:46.670 02:45:36 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2118975' 00:39:46.670 killing process with pid 2118975 00:39:46.670 02:45:36 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@967 -- # kill 2118975 00:39:46.670 02:45:36 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@972 -- # wait 2118975 00:39:46.929 02:45:37 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:39:46.929 00:39:46.929 real 0m5.776s 00:39:46.929 user 0m8.043s 00:39:46.929 sys 0m2.474s 00:39:46.929 02:45:37 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:46.929 02:45:37 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:39:46.929 ************************************ 00:39:46.929 END TEST bdev_nbd 00:39:46.929 ************************************ 00:39:46.929 02:45:37 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:39:46.929 02:45:37 blockdev_crypto_sw -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:39:46.929 02:45:37 blockdev_crypto_sw -- bdev/blockdev.sh@764 -- # '[' crypto_sw = nvme ']' 00:39:46.929 02:45:37 blockdev_crypto_sw -- bdev/blockdev.sh@764 -- # '[' crypto_sw = gpt ']' 00:39:46.929 02:45:37 blockdev_crypto_sw -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:39:46.929 02:45:37 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:39:46.929 02:45:37 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:46.929 02:45:37 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:39:46.929 ************************************ 00:39:46.929 START TEST bdev_fio 00:39:46.929 ************************************ 00:39:46.929 02:45:37 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:39:46.929 02:45:37 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:39:46.929 02:45:37 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:39:46.930 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local 
env_context= 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram]' 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram3]' 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram3 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:39:46.930 ************************************ 00:39:46.930 START TEST bdev_fio_rw_verify 00:39:46.930 ************************************ 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:39:46.930 02:45:37 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:47.189 02:45:37 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:39:47.189 02:45:37 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:39:47.190 02:45:37 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:47.190 02:45:37 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:39:47.190 02:45:37 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:39:47.190 02:45:37 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:47.190 02:45:37 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:39:47.190 02:45:37 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:39:47.190 02:45:37 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:47.190 02:45:37 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:39:47.448 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:39:47.448 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:39:47.448 fio-3.35
00:39:47.448 Starting 2 threads
00:39:59.702
00:39:59.702 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=2120071: Thu Jul 11 02:45:48 2024
00:39:59.702 read: IOPS=9492, BW=37.1MiB/s (38.9MB/s)(371MiB/10001msec)
00:39:59.702 slat (usec): min=32, max=168, avg=45.42, stdev= 7.85
00:39:59.702 clat (usec): min=17, max=3271, avg=333.26, stdev=134.60
00:39:59.702 lat (usec): min=63, max=3314, avg=378.68, stdev=137.65
00:39:59.702 clat percentiles (usec):
00:39:59.702 | 50.000th=[ 326], 99.000th=[ 627], 99.900th=[ 685], 99.990th=[ 799],
00:39:59.702 | 99.999th=[ 3261]
00:39:59.702 write: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(426MiB/9475msec); 0 zone resets
00:39:59.702 slat (usec): min=33, max=938, avg=77.31, stdev=10.10
00:39:59.702 clat (usec): min=60, max=1725, avg=447.33, stdev=204.03
00:39:59.703 lat (usec): min=125, max=1819, avg=524.64, stdev=207.47
00:39:59.703 clat percentiles (usec):
00:39:59.703 | 50.000th=[ 433], 99.000th=[ 881], 99.900th=[ 930], 99.990th=[ 1188],
00:39:59.703 | 99.999th=[ 1631]
00:39:59.703 bw ( KiB/s): min=36800, max=50872, per=94.92%, avg=43726.11, stdev=2382.85, samples=38
00:39:59.703 iops : min= 9200, max=12718, avg=10931.47, stdev=595.62, samples=38
00:39:59.703 lat (usec) : 20=0.01%, 50=0.01%, 100=0.54%, 250=23.73%, 500=48.62%
00:39:59.703 lat (usec) : 750=22.29%, 1000=4.79%
00:39:59.703 lat (msec) : 2=0.01%, 4=0.01%
00:39:59.703 cpu : usr=99.30%, sys=0.01%, ctx=34, majf=0, minf=422
00:39:59.703 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:39:59.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:59.703 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:59.703 issued rwts: total=94935,109117,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:59.703 latency : target=0, window=0, percentile=100.00%, depth=8
00:39:59.703
00:39:59.703 Run status group 0 (all jobs):
00:39:59.703 READ: bw=37.1MiB/s (38.9MB/s), 37.1MiB/s-37.1MiB/s (38.9MB/s-38.9MB/s), io=371MiB (389MB), run=10001-10001msec
00:39:59.703 WRITE: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=426MiB (447MB), run=9475-9475msec
00:39:59.703
00:39:59.703 real 0m11.179s
00:39:59.703 user 0m24.077s
00:39:59.703 sys 0m0.397s
00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:39:59.703 ************************************
00:39:59.703 END TEST bdev_fio_rw_verify
00:39:59.703 ************************************
00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1142 -- # return 0
00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f
00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio
-- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "cb3b4c2a-b541-50d1-a6a7-91d65bf7b17c"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "cb3b4c2a-b541-50d1-a6a7-91d65bf7b17c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "665a05ea-4f1e-5db5-8b56-d24741e26045"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "665a05ea-4f1e-5db5-8b56-d24741e26045",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' 
"claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n crypto_ram 00:39:59.703 crypto_ram3 ]] 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "cb3b4c2a-b541-50d1-a6a7-91d65bf7b17c"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "cb3b4c2a-b541-50d1-a6a7-91d65bf7b17c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "665a05ea-4f1e-5db5-8b56-d24741e26045"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "665a05ea-4f1e-5db5-8b56-d24741e26045",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' 
"dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram]' 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram3]' 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram3 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:39:59.703 ************************************ 00:39:59.703 START TEST bdev_fio_trim 00:39:59.703 ************************************ 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:59.703 02:45:48 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:59.704 02:45:48 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:39:59.704 02:45:48 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:39:59.704 02:45:48 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:59.704 02:45:48 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:59.704 02:45:48 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:39:59.704 02:45:48 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:39:59.704 02:45:48 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:59.704 02:45:48 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:39:59.704 02:45:48 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:39:59.704 02:45:48 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:59.704 02:45:48 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:39:59.704 02:45:48 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:39:59.704 02:45:48 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:59.704 02:45:48 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:39:59.704 02:45:48 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:39:59.704 02:45:48 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:59.704 02:45:48 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:39:59.704 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:39:59.704 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:39:59.704 fio-3.35 00:39:59.704 Starting 2 threads 00:40:09.675 00:40:09.675 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=2121583: Thu Jul 11 02:45:59 2024 00:40:09.675 write: IOPS=39.4k, BW=154MiB/s (162MB/s)(1541MiB/10001msec); 0 zone resets 00:40:09.675 slat (usec): min=14, max=1591, avg=22.22, stdev= 5.08 00:40:09.675 clat (usec): min=37, max=1760, avg=166.90, stdev=91.53 00:40:09.675 lat (usec): min=52, max=1780, avg=189.13, stdev=94.85 00:40:09.675 clat percentiles (usec): 00:40:09.675 | 50.000th=[ 135], 99.000th=[ 343], 99.900th=[ 367], 99.990th=[ 498], 00:40:09.675 | 99.999th=[ 775] 00:40:09.675 bw ( KiB/s): min=154120, max=159104, per=100.00%, avg=157810.63, stdev=545.24, samples=38 00:40:09.675 iops : min=38530, max=39776, avg=39452.68, stdev=136.30, samples=38 
00:40:09.675 trim: IOPS=39.4k, BW=154MiB/s (162MB/s)(1541MiB/10001msec); 0 zone resets
00:40:09.675 slat (nsec): min=5766, max=56917, avg=10011.33, stdev=2229.79
00:40:09.675 clat (usec): min=44, max=1780, avg=111.45, stdev=33.64
00:40:09.675 lat (usec): min=53, max=1795, avg=121.46, stdev=33.76
00:40:09.675 clat percentiles (usec):
00:40:09.675 | 50.000th=[ 113], 99.000th=[ 182], 99.900th=[ 196], 99.990th=[ 281],
00:40:09.675 | 99.999th=[ 578]
00:40:09.675 bw ( KiB/s): min=154144, max=159104, per=100.00%, avg=157812.32, stdev=543.76, samples=38
00:40:09.675 iops : min=38536, max=39776, avg=39452.89, stdev=135.93, samples=38
00:40:09.675 lat (usec) : 50=3.54%, 100=32.97%, 250=49.99%, 500=13.49%, 750=0.01%
00:40:09.675 lat (usec) : 1000=0.01%
00:40:09.675 lat (msec) : 2=0.01%
00:40:09.675 cpu : usr=99.59%, sys=0.00%, ctx=67, majf=0, minf=325
00:40:09.675 IO depths : 1=7.4%, 2=17.4%, 4=60.1%, 8=15.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:40:09.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:09.675 complete : 0=0.0%, 4=86.9%, 8=13.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:09.675 issued rwts: total=0,394392,394393,0 short=0,0,0,0 dropped=0,0,0,0
00:40:09.675 latency : target=0, window=0, percentile=100.00%, depth=8
00:40:09.675
00:40:09.675 Run status group 0 (all jobs):
00:40:09.675 WRITE: bw=154MiB/s (162MB/s), 154MiB/s-154MiB/s (162MB/s-162MB/s), io=1541MiB (1615MB), run=10001-10001msec
00:40:09.675 TRIM: bw=154MiB/s (162MB/s), 154MiB/s-154MiB/s (162MB/s-162MB/s), io=1541MiB (1615MB), run=10001-10001msec
00:40:09.675
00:40:09.675 real 0m11.173s
00:40:09.675 user 0m23.756s
00:40:09.675 sys 0m0.404s
00:40:09.675 02:45:59 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable
00:40:09.675 02:45:59 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x
00:40:09.675 ************************************
00:40:09.675 END TEST bdev_fio_trim
00:40:09.675 ************************************
00:40:09.675 02:45:59 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1142 -- # return 0
00:40:09.675 02:45:59 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f
00:40:09.675 02:45:59 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:40:09.675 02:45:59 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@370 -- # popd
00:40:09.675 /var/jenkins/workspace/crypto-phy-autotest/spdk
00:40:09.675 02:45:59 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT
00:40:09.675
00:40:09.675 real 0m22.739s
00:40:09.675 user 0m48.025s
00:40:09.675 sys 0m1.018s
00:40:09.675 02:45:59 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable
00:40:09.675 02:45:59 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:40:09.675 ************************************
00:40:09.675 END TEST bdev_fio
00:40:09.675 ************************************
00:40:09.675 02:45:59 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0
00:40:09.675 02:46:00 blockdev_crypto_sw -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT
00:40:09.675 02:46:00 blockdev_crypto_sw -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:40:09.675 02:46:00 blockdev_crypto_sw --
common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:40:09.675 02:46:00 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:09.675 02:46:00 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:40:09.675 ************************************ 00:40:09.675 START TEST bdev_verify 00:40:09.675 ************************************ 00:40:09.675 02:46:00 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:40:09.933 [2024-07-11 02:46:00.105425] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:40:09.934 [2024-07-11 02:46:00.105492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2122996 ] 00:40:09.934 [2024-07-11 02:46:00.240583] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:09.934 [2024-07-11 02:46:00.289248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:09.934 [2024-07-11 02:46:00.289253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:10.192 [2024-07-11 02:46:00.445694] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:40:10.192 [2024-07-11 02:46:00.445754] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:40:10.192 [2024-07-11 02:46:00.445775] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:40:10.192 [2024-07-11 02:46:00.453712] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:40:10.192 [2024-07-11 02:46:00.453731] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:40:10.192 [2024-07-11 02:46:00.453742] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:40:10.192 [2024-07-11 02:46:00.461735] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:40:10.192 [2024-07-11 02:46:00.461752] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:40:10.192 [2024-07-11 02:46:00.461770] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:40:10.192 Running I/O for 5 seconds... 
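In the Latency(us) table that follows, the MiB/s column is derived directly from IOPS and the 4096-byte I/O size this verify run uses, so any row can be sanity-checked by hand; for the first crypto_ram row:

    # MiB/s = IOPS * io_size / 2^20; values taken from the table below.
    awk 'BEGIN { printf "%.2f MiB/s\n", 5956.68 * 4096 / 1048576 }'   # -> 23.27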
00:40:15.462
00:40:15.462 Latency(us)
00:40:15.462 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:15.462 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:40:15.462 Verification LBA range: start 0x0 length 0x800
00:40:15.462 crypto_ram : 5.01 5956.68 23.27 0.00 0.00 21405.56 1759.50 23934.89
00:40:15.462 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:40:15.462 Verification LBA range: start 0x800 length 0x800
00:40:15.462 crypto_ram : 5.02 4795.36 18.73 0.00 0.00 26580.26 2080.06 27354.16
00:40:15.462 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:40:15.462 Verification LBA range: start 0x0 length 0x800
00:40:15.462 crypto_ram3 : 5.03 3003.56 11.73 0.00 0.00 42389.17 1994.57 28835.84
00:40:15.462 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:40:15.462 Verification LBA range: start 0x800 length 0x800
00:40:15.462 crypto_ram3 : 5.04 2414.10 9.43 0.00 0.00 52689.72 2407.74 33964.74
00:40:15.462 ===================================================================================================================
00:40:15.462 Total : 16169.70 63.16 0.00 0.00 31533.17 1759.50 33964.74
00:40:15.462
00:40:15.462 real 0m5.736s
00:40:15.462 user 0m10.841s
00:40:15.462 sys 0m0.234s
00:40:15.462 02:46:05 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:40:15.462 02:46:05 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:40:15.462 ************************************
00:40:15.462 END TEST bdev_verify
00:40:15.462 ************************************
00:40:15.462 02:46:05 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0
00:40:15.462 02:46:05 blockdev_crypto_sw -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:40:15.462 02:46:05 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']'
00:40:15.462 02:46:05 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable
00:40:15.462 02:46:05 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x
00:40:15.462 ************************************
00:40:15.462 START TEST bdev_verify_big_io
00:40:15.462 ************************************
00:40:15.462 02:46:05 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:40:15.722 [2024-07-11 02:46:05.925210] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
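The -m 0x3 mask on the bdevperf command above is a bitmap of CPU cores, selecting cores 0 and 1; the EAL parameters and the two reactor start-up notices that follow confirm both cores came up, which is why each bdev then appears twice in the result tables (Core Mask 0x1 and 0x2), one row per core. Computing such a mask:

    # Core mask arithmetic: setting bit N selects core N.
    printf '0x%x\n' $(( (1 << 0) | (1 << 1) ))   # -> 0x3, i.e. cores 0 and 1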
00:40:15.722 [2024-07-11 02:46:05.925269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2123710 ] 00:40:15.722 [2024-07-11 02:46:06.060862] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:15.722 [2024-07-11 02:46:06.114059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:15.722 [2024-07-11 02:46:06.114064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:15.981 [2024-07-11 02:46:06.284512] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:40:15.981 [2024-07-11 02:46:06.284573] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:40:15.981 [2024-07-11 02:46:06.284588] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:40:15.981 [2024-07-11 02:46:06.292531] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:40:15.981 [2024-07-11 02:46:06.292550] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:40:15.981 [2024-07-11 02:46:06.292561] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:40:15.981 [2024-07-11 02:46:06.300554] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:40:15.981 [2024-07-11 02:46:06.300571] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:40:15.981 [2024-07-11 02:46:06.300582] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:40:15.981 Running I/O for 5 seconds... 
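This big-I/O pass is the same verify workload run with -o 65536, so each I/O is 64 KiB and there are exactly 16 I/Os per MiB; the table below is consistent with MiB/s = IOPS / 16, e.g. for the first crypto_ram row:

    # With 64 KiB I/Os, MiB/s = IOPS / 16 (value from the table below).
    awk 'BEGIN { printf "%.2f MiB/s\n", 465.52 / 16 }'   # -> 29.10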
00:40:22.546
00:40:22.546 Latency(us)
00:40:22.546 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:22.546 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:40:22.546 Verification LBA range: start 0x0 length 0x80
00:40:22.546 crypto_ram : 5.22 465.52 29.10 0.00 0.00 268458.13 7522.39 375663.75
00:40:22.546 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:40:22.546 Verification LBA range: start 0x80 length 0x80
00:40:22.546 crypto_ram : 5.23 391.86 24.49 0.00 0.00 317969.67 7978.30 419430.40
00:40:22.546 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:40:22.546 Verification LBA range: start 0x0 length 0x80
00:40:22.546 crypto_ram3 : 5.24 244.39 15.27 0.00 0.00 495114.60 5556.31 393899.85
00:40:22.546 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:40:22.546 Verification LBA range: start 0x80 length 0x80
00:40:22.546 crypto_ram3 : 5.31 216.88 13.56 0.00 0.00 551660.40 7009.50 441313.73
00:40:22.546 ===================================================================================================================
00:40:22.546 Total : 1318.66 82.42 0.00 0.00 372302.01 5556.31 441313.73
00:40:22.546
00:40:22.546 real 0m6.018s
00:40:22.546 user 0m11.385s
00:40:22.546 sys 0m0.249s
00:40:22.546 02:46:11 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable
00:40:22.546 02:46:11 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:40:22.546 ************************************
00:40:22.546 END TEST bdev_verify_big_io
00:40:22.546 ************************************
00:40:22.546 02:46:11 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0
00:40:22.546 02:46:11 blockdev_crypto_sw -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:40:22.546 02:46:11 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:40:22.546 02:46:11 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable
00:40:22.546 02:46:11 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x
00:40:22.546 ************************************
00:40:22.546 START TEST bdev_write_zeroes
00:40:22.546 ************************************
00:40:22.546 02:46:11 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:40:22.546 [2024-07-11 02:46:12.026994] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
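Both crypto bdevs being written here are the 16 MiB devices described by the JSON dumped earlier in this run: crypto_ram exposes 32768 blocks of 512 B and crypto_ram3 exposes 4096 blocks of 4096 B, i.e. the same capacity behind different block sizes:

    # Capacity = num_blocks * block_size (numbers from the bdev JSON above).
    echo $(( 32768 * 512 ))   # crypto_ram  -> 16777216 bytes = 16 MiB
    echo $(( 4096 * 4096 ))   # crypto_ram3 -> 16777216 bytes = 16 MiB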
00:40:22.546 [2024-07-11 02:46:12.027056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2124430 ] 00:40:22.546 [2024-07-11 02:46:12.163241] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:22.546 [2024-07-11 02:46:12.210721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:22.546 [2024-07-11 02:46:12.366312] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:40:22.546 [2024-07-11 02:46:12.366374] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:40:22.546 [2024-07-11 02:46:12.366389] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:40:22.546 [2024-07-11 02:46:12.374331] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:40:22.546 [2024-07-11 02:46:12.374350] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:40:22.546 [2024-07-11 02:46:12.374362] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:40:22.546 [2024-07-11 02:46:12.382352] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:40:22.546 [2024-07-11 02:46:12.382370] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:40:22.546 [2024-07-11 02:46:12.382382] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:40:22.546 Running I/O for 1 seconds... 00:40:23.112 00:40:23.112 Latency(us) 00:40:23.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:23.112 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:40:23.112 crypto_ram : 1.01 26546.90 103.70 0.00 0.00 4811.12 2080.06 6696.07 00:40:23.112 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:40:23.112 crypto_ram3 : 1.01 13303.10 51.97 0.00 0.00 9544.65 3333.79 9972.87 00:40:23.112 =================================================================================================================== 00:40:23.112 Total : 39850.00 155.66 0.00 0.00 6393.98 2080.06 9972.87 00:40:23.369 00:40:23.369 real 0m1.687s 00:40:23.369 user 0m1.444s 00:40:23.369 sys 0m0.226s 00:40:23.369 02:46:13 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:23.369 02:46:13 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:40:23.369 ************************************ 00:40:23.369 END TEST bdev_write_zeroes 00:40:23.369 ************************************ 00:40:23.369 02:46:13 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:40:23.369 02:46:13 blockdev_crypto_sw -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:23.369 02:46:13 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:40:23.369 02:46:13 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:23.369 02:46:13 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:40:23.369 
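The two bdev_json_* tests that follow are negative tests: bdevperf is pointed at a deliberately malformed config (first one not enclosed in {}, then one whose 'subsystems' member is not an array) and the suite records the non-zero exit status (the es=234 bookkeeping in the trace) as a pass. A hedged sketch of that pattern, reusing the exact command line from this log:

    # Negative-config pattern: a malformed JSON config must make bdevperf fail.
    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
    if "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/nonenclosed.json" \
           -q 128 -o 4096 -w write_zeroes -t 1 ''; then
        echo "FAIL: malformed JSON config was accepted" >&2
        exit 1
    fi
    echo "OK: bdevperf rejected the config as expected"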
************************************ 00:40:23.369 START TEST bdev_json_nonenclosed 00:40:23.369 ************************************ 00:40:23.369 02:46:13 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:23.369 [2024-07-11 02:46:13.792576] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:40:23.369 [2024-07-11 02:46:13.792639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2124749 ] 00:40:23.626 [2024-07-11 02:46:13.928346] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:23.626 [2024-07-11 02:46:13.976114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:23.626 [2024-07-11 02:46:13.976181] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:40:23.626 [2024-07-11 02:46:13.976201] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:40:23.627 [2024-07-11 02:46:13.976213] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:23.895 00:40:23.895 real 0m0.325s 00:40:23.895 user 0m0.158s 00:40:23.895 sys 0m0.165s 00:40:23.895 02:46:14 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:40:23.895 02:46:14 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:23.895 02:46:14 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:40:23.895 ************************************ 00:40:23.895 END TEST bdev_json_nonenclosed 00:40:23.895 ************************************ 00:40:23.895 02:46:14 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 234 00:40:23.895 02:46:14 blockdev_crypto_sw -- bdev/blockdev.sh@782 -- # true 00:40:23.895 02:46:14 blockdev_crypto_sw -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:23.895 02:46:14 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:40:23.895 02:46:14 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:23.895 02:46:14 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:40:23.895 ************************************ 00:40:23.895 START TEST bdev_json_nonarray 00:40:23.895 ************************************ 00:40:23.895 02:46:14 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:23.895 [2024-07-11 02:46:14.246476] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
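The failure modes of both tests ("not enclosed in {}" and "'subsystems' should be an array", per the json_config.c errors in this trace) can also be caught before launching the app with a small jq check; a sketch, assuming jq as already used elsewhere in this suite:

    # Pre-validate an SPDK JSON config: top level must be an object whose
    # "subsystems" member is an array; jq -e exits non-zero otherwise.
    cfg=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json
    jq -e 'type == "object" and (.subsystems | type == "array")' "$cfg" >/dev/null \
        || echo "invalid JSON configuration in $cfg" >&2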
00:40:23.895 02:46:14 blockdev_crypto_sw -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:40:23.895 02:46:14 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:40:23.895 02:46:14 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable
00:40:23.895 02:46:14 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x
00:40:23.895 ************************************
00:40:23.895 START TEST bdev_json_nonarray
00:40:23.895 ************************************
00:40:23.895 02:46:14 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:40:23.895 [2024-07-11 02:46:14.246476] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
00:40:23.895 [2024-07-11 02:46:14.246605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2124811 ]
00:40:24.153 [2024-07-11 02:46:14.461281] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:24.153 [2024-07-11 02:46:14.513217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:40:24.153 [2024-07-11 02:46:14.513290] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:40:24.153 [2024-07-11 02:46:14.513310] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:40:24.153 [2024-07-11 02:46:14.513323] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:40:24.410
00:40:24.410 real 0m0.455s
00:40:24.410 user 0m0.216s
00:40:24.410 sys 0m0.235s
00:40:24.410 02:46:14 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234
00:40:24.410 02:46:14 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable
00:40:24.410 02:46:14 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:40:24.410 ************************************
00:40:24.410 END TEST bdev_json_nonarray
00:40:24.410 ************************************
00:40:24.410 02:46:14 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 234
00:40:24.410 02:46:14 blockdev_crypto_sw -- bdev/blockdev.sh@785 -- # true
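bdev_json_nonarray is the companion negative test: here the braces are present but "subsystems" is not an array, tripping the next validation step (json_config.c line 614 rather than 608 above). A sketch of the kind of malformed config that would reproduce it, since nonarray.json's literal contents are not shown in this log:

    cat > /tmp/nonarray.json <<'EOF'
    {
      "subsystems": { "subsystem": "bdev", "config": [] }
    }
    EOF
    # json_config_prepare_ctx rejects this with: 'subsystems' should be an array.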
00:40:24.410 02:46:14 blockdev_crypto_sw -- bdev/blockdev.sh@787 -- # [[ crypto_sw == bdev ]]
00:40:24.410 02:46:14 blockdev_crypto_sw -- bdev/blockdev.sh@794 -- # [[ crypto_sw == gpt ]]
00:40:24.410 02:46:14 blockdev_crypto_sw -- bdev/blockdev.sh@798 -- # [[ crypto_sw == crypto_sw ]]
00:40:24.410 02:46:14 blockdev_crypto_sw -- bdev/blockdev.sh@799 -- # run_test bdev_crypto_enomem bdev_crypto_enomem
00:40:24.410 02:46:14 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:40:24.410 02:46:14 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable
00:40:24.410 02:46:14 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x
00:40:24.410 ************************************
00:40:24.410 START TEST bdev_crypto_enomem
00:40:24.410 ************************************
00:40:24.410 02:46:14 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1123 -- # bdev_crypto_enomem
00:40:24.410 02:46:14 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@635 -- # local base_dev=base0
00:40:24.410 02:46:14 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@636 -- # local test_dev=crypt0
00:40:24.410 02:46:14 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@637 -- # local err_dev=EE_base0
00:40:24.410 02:46:14 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@638 -- # local qd=32
00:40:24.410 02:46:14 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@641 -- # ERR_PID=2124841
00:40:24.410 02:46:14 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@642 -- # trap 'cleanup; killprocess $ERR_PID; exit 1' SIGINT SIGTERM EXIT
00:40:24.410 02:46:14 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@640 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 32 -o 4096 -w randwrite -t 5 -f ''
00:40:24.410 02:46:14 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@643 -- # waitforlisten 2124841
00:40:24.410 02:46:14 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@829 -- # '[' -z 2124841 ']'
00:40:24.410 02:46:14 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:40:24.410 02:46:14 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@834 -- # local max_retries=100
00:40:24.410 02:46:14 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:40:24.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:40:24.410 02:46:14 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@838 -- # xtrace_disable
00:40:24.410 02:46:14 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x
00:40:24.410 [2024-07-11 02:46:14.741265] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
00:40:24.410 [2024-07-11 02:46:14.741328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2124841 ]
00:40:24.667 [2024-07-11 02:46:14.884062] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:24.667 [2024-07-11 02:46:14.937741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:40:25.600 02:46:15 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:40:25.600 02:46:15 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@862 -- # return 0
00:40:25.600 02:46:15 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@645 -- # rpc_cmd
00:40:25.600 02:46:15 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable
00:40:25.600 02:46:15 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x
00:40:25.600 true
00:40:25.600 base0
00:40:25.600 true
00:40:25.600 [2024-07-11 02:46:15.706135] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw"
00:40:25.600 crypt0
00:40:25.600 02:46:15 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:40:25.600 02:46:15 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@652 -- # waitforbdev crypt0
00:40:25.600 02:46:15 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@897 -- # local bdev_name=crypt0
00:40:25.600 02:46:15 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:40:25.600 02:46:15 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@899 -- # local i
00:40:25.600 02:46:15 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:40:25.600 02:46:15 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:40:25.600 02:46:15 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine
00:40:25.600 02:46:15 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable
00:40:25.600 02:46:15 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x
00:40:25.600 02:46:15 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
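The bare true/base0/true/crypt0 lines above are the results of a batched rpc_cmd that builds the test stack: a malloc base bdev, an error-injection wrapper (EE_base0), and a software-crypto vbdev bound to the registered key "test_dek_sw". Roughly equivalent standalone rpc.py calls are sketched below; RPC names and flags have shifted across SPDK releases, so treat this as an approximation rather than the exact commands blockdev.sh issued:

    rpc=/path/to/spdk/scripts/rpc.py                 # hypothetical path
    $rpc bdev_malloc_create -b base0 1024 512        # size and block size illustrative
    $rpc bdev_error_create base0                     # wraps base0 as EE_base0 for fault injection
    $rpc accel_crypto_key_create -c AES_CBC \
        -k 01234567891234560123456789123456 \
        -n test_dek_sw                               # key material illustrative
    $rpc bdev_crypto_create EE_base0 crypt0 -n test_dek_sw

Layering the crypto vbdev over the error bdev is the point of the test: faults injected at EE_base0 surface inside the crypto module's submit path rather than at the application.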
00:40:25.600 02:46:15 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b crypt0 -t 2000
00:40:25.600 02:46:15 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable
00:40:25.600 02:46:15 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x
00:40:25.600 [
00:40:25.600   {
00:40:25.600     "name": "crypt0",
00:40:25.600     "aliases": [
00:40:25.600       "59ecc1dd-8085-55c7-bd64-0e311a11b1b5"
00:40:25.600     ],
00:40:25.600     "product_name": "crypto",
00:40:25.600     "block_size": 512,
00:40:25.600     "num_blocks": 2097152,
00:40:25.600     "uuid": "59ecc1dd-8085-55c7-bd64-0e311a11b1b5",
00:40:25.600     "assigned_rate_limits": {
00:40:25.600       "rw_ios_per_sec": 0,
00:40:25.600       "rw_mbytes_per_sec": 0,
00:40:25.600       "r_mbytes_per_sec": 0,
00:40:25.600       "w_mbytes_per_sec": 0
00:40:25.600     },
00:40:25.600     "claimed": false,
00:40:25.600     "zoned": false,
00:40:25.600     "supported_io_types": {
00:40:25.600       "read": true,
00:40:25.600       "write": true,
00:40:25.600       "unmap": false,
00:40:25.600       "flush": false,
00:40:25.600       "reset": true,
00:40:25.600       "nvme_admin": false,
00:40:25.600       "nvme_io": false,
00:40:25.600       "nvme_io_md": false,
00:40:25.600       "write_zeroes": true,
00:40:25.600       "zcopy": false,
00:40:25.600       "get_zone_info": false,
00:40:25.600       "zone_management": false,
00:40:25.600       "zone_append": false,
00:40:25.600       "compare": false,
00:40:25.600       "compare_and_write": false,
00:40:25.600       "abort": false,
00:40:25.600       "seek_hole": false,
00:40:25.600       "seek_data": false,
00:40:25.600       "copy": false,
00:40:25.600       "nvme_iov_md": false
00:40:25.600     },
00:40:25.600     "memory_domains": [
00:40:25.600       {
00:40:25.600         "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:40:25.600         "dma_device_type": 2
00:40:25.600       }
00:40:25.600     ],
00:40:25.600     "driver_specific": {
00:40:25.600       "crypto": {
00:40:25.600         "base_bdev_name": "EE_base0",
00:40:25.600         "name": "crypt0",
00:40:25.600         "key_name": "test_dek_sw"
00:40:25.600       }
00:40:25.600     }
00:40:25.600   }
00:40:25.600 ]
00:40:25.600 02:46:15 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:40:25.600 02:46:15 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@905 -- # return 0
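waitforbdev verifies the new vbdev by dumping it, which is the JSON array above. The same query is useful interactively; a small sketch (the jq filter is illustrative):

    rpc=/path/to/spdk/scripts/rpc.py      # hypothetical path
    $rpc bdev_get_bdevs -b crypt0 -t 2000 # -t: wait up to 2000 ms for the bdev to appear
    $rpc bdev_get_bdevs -b crypt0 | jq -r '.[0].driver_specific.crypto.key_name'
    # prints test_dek_sw, matching the key the vbdev was created with

Note supported_io_types in the dump: the crypto vbdev passes through write_zeroes (exercised by the earlier test) but reports no unmap or flush support.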
00:40:25.600 02:46:15 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@655 -- # rpcpid=2125013
00:40:25.600 02:46:15 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@657 -- # sleep 1
00:40:25.600 02:46:15 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@654 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:40:25.600 Running I/O for 5 seconds...
00:40:26.536 02:46:16 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_error_inject_error EE_base0 -n 5 -q 31 write nomem
00:40:26.536 02:46:16 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable
00:40:26.536 02:46:16 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x
00:40:26.536 02:46:16 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:40:26.536 02:46:16 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@660 -- # wait 2125013
00:40:27.078 Cancelling nested steps due to timeout
00:40:27.081 Sending interrupt signal to process
00:40:30.755
00:40:30.755 Latency(us)
00:40:30.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:30.755 Job: crypt0 (Core Mask 0x2, workload: randwrite, depth: 32, IO size: 4096)
00:40:30.755 crypt0 : 5.00 28221.09 110.24 0.00 0.00 1128.63 534.26 1795.12
00:40:30.755 ===================================================================================================================
00:40:30.755 Total : 28221.09 110.24 0.00 0.00 1128.63 534.26 1795.12
00:40:30.755 0
00:40:30.755 02:46:20 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@662 -- # rpc_cmd bdev_crypto_delete crypt0
00:40:30.755 02:46:20 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable
00:40:30.755 02:46:20 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x
00:40:30.755 02:46:20 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:40:30.755 02:46:20 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@664 -- # killprocess 2124841
00:40:30.755 02:46:20 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@948 -- # '[' -z 2124841 ']'
00:40:30.755 02:46:20 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@952 -- # kill -0 2124841
00:40:30.755 02:46:20 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@953 -- # uname
00:40:30.755 02:46:20 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:40:30.755 02:46:20 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2124841
00:40:30.755 02:46:20 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:40:30.755 02:46:20 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:40:30.755 02:46:20 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2124841'
00:40:30.755 killing process with pid 2124841
00:40:30.755 02:46:20 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@967 -- # kill 2124841
00:40:30.755 Received shutdown signal, test time was about 5.000000 seconds
00:40:30.755
00:40:30.755 Latency(us)
00:40:30.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:30.755 ===================================================================================================================
00:40:30.755 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:40:30.755 02:46:20 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@972 -- # wait 2124841
00:40:31.015 02:46:21 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@665 -- # trap - SIGINT SIGTERM EXIT
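The bdev_error_inject_error call at the top of this run is the heart of the test: it arms EE_base0 to complete write I/O with ENOMEM (here 5 failures once 31 requests are queued, just under the queue depth of 32). The pass criterion is visible in the crypt0 latency line: I/O keeps completing with zero failures, because the crypto vbdev queues and retries on ENOMEM instead of propagating the error upward. The arming call, exactly as issued above:

    rpc=/path/to/spdk/scripts/rpc.py   # hypothetical path
    $rpc bdev_error_inject_error EE_base0 -n 5 -q 31 write nomem

Note that the Jenkins timeout fires mid-run here (Cancelling nested steps / Sending interrupt signal), but the 5-second bdevperf job still completes and the teardown (bdev_crypto_delete, killprocess) runs through to the END TEST banner below.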
00:40:31.015
00:40:31.015 real 0m6.523s
00:40:31.015 user 0m6.759s
00:40:31.015 sys 0m0.408s
00:40:31.015 02:46:21 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1124 -- # xtrace_disable
00:40:31.015 02:46:21 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x
00:40:31.015 ************************************
00:40:31.015 END TEST bdev_crypto_enomem
00:40:31.015 ************************************
00:40:31.015 02:46:21 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0
00:40:31.015 02:46:21 blockdev_crypto_sw -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT
00:40:31.015 02:46:21 blockdev_crypto_sw -- bdev/blockdev.sh@811 -- # cleanup
00:40:31.015 02:46:21 blockdev_crypto_sw -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile
00:40:31.015 02:46:21 blockdev_crypto_sw -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json
00:40:31.015 02:46:21 blockdev_crypto_sw -- bdev/blockdev.sh@26 -- # [[ crypto_sw == rbd ]]
00:40:31.015 02:46:21 blockdev_crypto_sw -- bdev/blockdev.sh@30 -- # [[ crypto_sw == daos ]]
00:40:31.015 02:46:21 blockdev_crypto_sw -- bdev/blockdev.sh@34 -- # [[ crypto_sw = \g\p\t ]]
00:40:31.015 02:46:21 blockdev_crypto_sw -- bdev/blockdev.sh@40 -- # [[ crypto_sw == xnvme ]]
00:40:31.015
00:40:31.015 real 0m54.273s
00:40:31.015 user 1m33.391s
00:40:31.015 sys 0m6.900s
00:40:31.015 02:46:21 blockdev_crypto_sw -- common/autotest_common.sh@1124 -- # xtrace_disable
00:40:31.015 02:46:21 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x
00:40:31.015 ************************************
00:40:31.015 END TEST blockdev_crypto_sw
00:40:31.015 ************************************
00:40:31.015 02:46:21 -- common/autotest_common.sh@1142 -- # return 0
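Every suite and sub-test in this log is wrapped by the same run_test helper from autotest_common.sh, which produces the START/END banners and the per-test real/user/sys timings seen above (0m6.523s for the single enomem test, 0m54.273s for the whole crypto_sw suite). A simplified reconstruction of its shape, a sketch rather than the verbatim source:

    run_test() {
        local test_name=$1
        shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"             # run the test; its exit status propagates to the caller
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
    }

For the negative JSON tests earlier, the caller compared the propagated status against the expected failure, hence the return 234 followed by true.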
00:40:31.015 02:46:21 -- spdk/autotest.sh@359 -- # run_test blockdev_crypto_qat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat
00:40:31.015 02:46:21 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:40:31.015 02:46:21 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:40:31.015 02:46:21 -- common/autotest_common.sh@10 -- # set +x
00:40:31.015 ************************************
00:40:31.015 START TEST blockdev_crypto_qat
00:40:31.015 ************************************
00:40:31.015 02:46:21 blockdev_crypto_qat -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat
00:40:31.275 * Looking for test storage...
00:40:31.275 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/nbd_common.sh@6 -- # set -e
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/blockdev.sh@20 -- # :
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/blockdev.sh@674 -- # uname -s
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']'
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/blockdev.sh@682 -- # test_type=crypto_qat
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/blockdev.sh@683 -- # crypto_device=
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/blockdev.sh@684 -- # dek=
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/blockdev.sh@685 -- # env_ctx=
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/blockdev.sh@686 -- # wait_for_rpc=
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/blockdev.sh@687 -- # '[' -n '' ']'
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/blockdev.sh@690 -- # [[ crypto_qat == bdev ]]
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/blockdev.sh@690 -- # [[ crypto_qat == crypto_* ]]
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc
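Because crypto_qat needs accel configuration before subsystem init, wait_for_rpc is set and, as the next lines show, spdk_tgt is launched with --wait-for-rpc: the target starts, listens on /var/tmp/spdk.sock, and holds off initialization until told to proceed. The basic pattern, sketched with a placeholder path:

    /path/to/spdk/build/bin/spdk_tgt --wait-for-rpc &     # hypothetical path
    # ... issue pre-init RPCs here (e.g. the accel/cryptodev setup below) ...
    /path/to/spdk/scripts/rpc.py framework_start_init     # then let subsystem init proceed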
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/blockdev.sh@693 -- # start_spdk_tgt
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=2125781
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/blockdev.sh@49 -- # waitforlisten 2125781
00:40:31.275 02:46:21 blockdev_crypto_qat -- common/autotest_common.sh@829 -- # '[' -z 2125781 ']'
00:40:31.275 02:46:21 blockdev_crypto_qat -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc
00:40:31.275 02:46:21 blockdev_crypto_qat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:40:31.275 02:46:21 blockdev_crypto_qat -- common/autotest_common.sh@834 -- # local max_retries=100
00:40:31.275 02:46:21 blockdev_crypto_qat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:40:31.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:40:31.275 02:46:21 blockdev_crypto_qat -- common/autotest_common.sh@838 -- # xtrace_disable
00:40:31.275 02:46:21 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:40:31.275 [2024-07-11 02:46:21.540897] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
00:40:31.275 [2024-07-11 02:46:21.540970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2125781 ]
00:40:31.275 [2024-07-11 02:46:21.676352] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:31.534 [2024-07-11 02:46:21.725925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:40:32.102 02:46:22 blockdev_crypto_qat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:40:32.102 02:46:22 blockdev_crypto_qat -- common/autotest_common.sh@862 -- # return 0
00:40:32.102 02:46:22 blockdev_crypto_qat -- bdev/blockdev.sh@694 -- # case "$test_type" in
00:40:32.102 02:46:22 blockdev_crypto_qat -- bdev/blockdev.sh@708 -- # setup_crypto_qat_conf
00:40:32.102 02:46:22 blockdev_crypto_qat -- bdev/blockdev.sh@170 -- # rpc_cmd
00:40:32.102 02:46:22 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable
00:40:32.102 02:46:22 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:40:32.102 [2024-07-11 02:46:22.480252] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat
00:40:32.102 [2024-07-11 02:46:22.488287] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:40:32.102 [2024-07-11 02:46:22.496306] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:40:32.361 [2024-07-11 02:46:22.562511] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96
00:40:32.620 Terminated
00:40:32.627 script returned exit code 143
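setup_crypto_qat_conf had just finished its accel configuration when the abort landed: the notices show the dpdk_cryptodev module selecting the crypto_qat driver, claiming the encrypt/decrypt opcodes, and enumerating 96 QAT devices. Approximately equivalent rpc.py calls are sketched below, with the caveat that these RPC names track the accel framework of this SPDK era and may differ in other releases:

    rpc=/path/to/spdk/scripts/rpc.py                 # hypothetical path
    $rpc dpdk_cryptodev_scan_accel_module            # register the DPDK cryptodev accel module
    $rpc dpdk_cryptodev_set_driver -d crypto_qat     # prefer QAT over the software driver
    $rpc accel_assign_opc -o encrypt -m dpdk_cryptodev
    $rpc accel_assign_opc -o decrypt -m dpdk_cryptodev
    $rpc framework_start_init

The Terminated / exit code 143 that follows is SIGTERM from the Jenkins timeout, not a test failure; the pipeline teardown below kills the surviving processes and aborts the build.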
00:40:32.633 [Pipeline] }
00:40:32.655 [Pipeline] // stage
00:40:32.662 [Pipeline] }
00:40:32.684 [Pipeline] // timeout
00:40:32.692 [Pipeline] }
00:40:32.696 Timeout has been exceeded
00:40:32.697 org.jenkinsci.plugins.workflow.actions.ErrorAction$ErrorId: 4a1d698f-0c8b-48f9-8cde-992b168f6e62
00:40:32.697 Setting overall build result to ABORTED
00:40:32.718 [Pipeline] // catchError
00:40:32.725 [Pipeline] }
00:40:32.744 [Pipeline] // wrap
00:40:32.751 [Pipeline] }
00:40:32.768 [Pipeline] // catchError
00:40:32.778 [Pipeline] stage
00:40:32.780 [Pipeline] { (Epilogue)
00:40:32.795 [Pipeline] catchError
00:40:32.797 [Pipeline] {
00:40:32.812 [Pipeline] echo
00:40:32.813 Cleanup processes
00:40:32.820 [Pipeline] sh
00:40:33.105 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:40:33.105 1788446 sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf
00:40:33.105 1788458 bash /var/jenkins/workspace/crypto-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf
00:40:33.105 1788500 bash /var/jenkins/workspace/crypto-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf
00:40:33.105 1788501 python3 /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --server
00:40:33.105 1788536 bash /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720656639
00:40:33.105 1788538 bash /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720656639
00:40:33.105 1788540 bash /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720656639
00:40:33.105 1788544 sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720656639
00:40:33.105 1788576 bash /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720656639
00:40:33.105 2125745 bash /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat
00:40:33.105 2125760 bash /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat
00:40:33.105 2125761 python3 /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --server
00:40:33.105 2125781 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:40:33.105 2125968 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:40:33.105 2125973 bash /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720656639
00:40:33.127 [Pipeline] sh
00:40:33.412 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:40:33.412 ++ grep -v 'sudo pgrep'
00:40:33.412 ++ awk '{print $1}'
00:40:33.412 + sudo kill -9 1788446 1788458 1788500 1788501 1788536 1788538 1788540 1788544 1788576 2125745 2125760 2125761 2125781 2125973
00:40:33.423 [Pipeline] sh
00:40:33.705 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:40:41.838 [Pipeline] sh
00:40:42.115 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:40:42.373 Artifacts sizes are good
00:40:42.385 [Pipeline] archiveArtifacts
00:40:42.391 Archiving artifacts
00:40:42.566 [Pipeline] sh
00:40:42.849 + sudo chown -R sys_sgci /var/jenkins/workspace/crypto-phy-autotest
00:40:42.863 [Pipeline] cleanWs
00:40:42.873 [WS-CLEANUP] Deleting project workspace...
00:40:42.873 [WS-CLEANUP] Deferred wipeout is used...
00:40:42.880 [WS-CLEANUP] done
00:40:42.882 [Pipeline] }
00:40:42.900 [Pipeline] // catchError
00:40:42.911 [Pipeline] echo
00:40:42.912 Tests finished with errors. Please check the logs for more info.
00:40:42.915 [Pipeline] echo
00:40:42.916 Execution node will be rebooted.
00:40:42.939 [Pipeline] build
00:40:42.941 Scheduling project: reset-job
00:40:42.954 [Pipeline] sh
00:40:43.241 + logger -p user.info -t JENKINS-CI
00:40:43.269 [Pipeline] }
00:40:43.296 [Pipeline] // stage
00:40:43.303 [Pipeline] }
00:40:43.315 [Pipeline] // node
00:40:43.319 [Pipeline] End of Pipeline
00:40:43.344 Finished: ABORTED